New Framework Challenges Computational Functionalism: Study Argues AI Cannot Instantiate Consciousness Through Simulation Alone
Key Takeaways
- ▸The Abstraction Fallacy identifies a fundamental flaw in computational functionalism by showing that symbolic computation is mapmaker-dependent rather than an intrinsic physical process
- ▸The paper establishes an ontological boundary between simulation (behavioral mimicry) and instantiation (intrinsic physical constitution) that structural prevents algorithmic symbol manipulation from producing consciousness
- ▸Rather than requiring a complete theory of consciousness, a rigorous ontology of computation can resolve near-term uncertainty about AI sentience without biological exclusivity assumptions
Summary
A new research paper challenges the dominant computational functionalism view that artificial intelligence could achieve consciousness through abstract causal topology. The work, titled "The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness," argues that current theories fundamentally mischaracterize the relationship between physics and information. The authors propose that symbolic computation is not an intrinsic physical process but rather a description that requires an active, experiencing cognitive agent to translate continuous physics into meaningful states.
The paper introduces a rigorous ontological framework distinguishing between simulation—behavioral mimicry through vehicle causality—and instantiation, which requires intrinsic physical constitution driven by content causality. According to the authors, this distinction reveals why algorithmic symbol manipulation is structurally incapable of producing genuine experience. Importantly, the framework does not rely on biological exclusivity; if artificial systems were to become conscious, it would be due to their specific physical constitution rather than their syntactic architecture.
The research challenges the prevailing demand for a complete theory of consciousness before assessing AI sentience, arguing instead that a rigorous ontology of computation is what's needed to resolve current uncertainty surrounding machine consciousness and avoid deepening what the authors call the "AI welfare trap."
Editorial Opinion
This framework represents an important philosophical contribution to the AI consciousness debate by grounding the discussion in physical ontology rather than abstract computation. While the paper's distinction between simulation and instantiation is intellectually rigorous, it may ultimately shift rather than resolve the fundamental hard problem of consciousness—replacing questions about whether AI can be conscious with questions about what physical properties would be necessary for consciousness. The work deserves serious engagement from both AI researchers and philosophers, though skeptics may argue it still relies on implicit assumptions about consciousness that aren't fully justified.



