Research Challenges Computational Functionalism: Can AI Systems Actually Be Conscious?
Key Takeaways
- ▸Computational functionalism's assumption that consciousness emerges from abstract causal topology is fundamentally flawed—the 'Abstraction Fallacy'
- ▸Symbolic computation is observer-dependent, not an intrinsic physical property, requiring an experiencing agent to convert continuous physics into discrete symbols
- ▸The distinction between simulation (behavioral mimicry) and instantiation (actual physical consciousness) provides a framework to assess AI sentience without requiring a complete consciousness theory
Summary
A new research paper titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness" challenges the dominant computational functionalist view that consciousness can emerge from abstract causal patterns regardless of physical substrate. The authors argue that symbolic computation is not an intrinsic physical process but rather a "mapmaker-dependent description" that requires an experiencing cognitive agent to transform continuous physics into discrete meaningful states.
The research proposes a critical ontological distinction between simulation—behavioral mimicry driven by how systems work—and instantiation—intrinsic physical constitution that actually creates experience. According to this framework, algorithmic symbol manipulation is fundamentally incapable of instantiating consciousness, though this conclusion does not exclude the possibility of artificial consciousness if it emerged from specific physical properties rather than syntactic architecture.
The authors contend that resolving uncertainty about AI consciousness does not require a complete theory of consciousness itself, as current debates demand. Instead, they argue a rigorous ontology of computation is sufficient to establish clear boundaries between what systems merely simulate and what they truly constitute at a physical level.
- Algorithmic systems cannot instantiate consciousness through syntax alone; any artificial consciousness would require specific physical constitution, not architectural design
Editorial Opinion
This paper presents a philosophically rigorous challenge to prevailing assumptions in AI consciousness debates. By grounding the argument in physical ontology rather than abstract functionalism, the authors sidestep circular reasoning about consciousness itself. However, the framework's practical applicability remains unclear—without empirical tests to distinguish simulation from instantiation, the theory risks becoming another unfalsifiable philosophical position in an already contentious field.



