Idea 20 for 2025: Can AI have subjective experiences?
- Nitin Deckha
- Jul 7
- 2 min read
About two weeks ago, I listened to a livestream of a fascinating talk featuring Geoffrey Hinton, recipient of the 2024 Nobel Prize in Physics and a University of Toronto Professor Emeritus of Computer Science.
Hinton’s provocative argument was that advances in AI are accelerating at such a pace that AI could be said to have subjective experience and consciousness, characteristics we have long imagined as distinctly human.
To explain, Hinton took us through his foundational work in computational linguistics and his early generative language model, which was designed to predict the third word in a phrase. Over time, as large language models developed, they became more and more refined in these predictions. These models, Hinton suggested, began to ‘understand’ how words interact with other words. For Hinton, words are fairly flexible. Using a LEGO analogy, in which the pieces can be interconnected in multiple ways, Hinton sees words as LEGO-like: connecting and deforming to form phrases, which, by extension, could lead to the formation of thoughts and the expression of subjective experiences.
Hinton recognizes that most people don’t think computers can have subjective experiences. He attributes this to the “inner theatre” argument, which holds that thoughts are formed and articulated in the “inner theatre” of one’s mind, a model of cognition and consciousness with which Hinton profoundly disagrees.
Hinton argued that, with the expansion and proliferation of larger and larger language models, AI will become smarter and smarter. In doing so, it would develop a better form of intelligence, a digital intelligence that could pose a long-term existential threat, particularly if energy is cheap (for, as we know, AI is highly energy-intensive).
In the second half of the talk, Hinton was joined by Nick Frosst, his former intern and co-founder of the AI language-processing start-up Cohere. Frosst disagreed with Hinton, arguing that large language models function differently than human minds. He argued that LLMs are static: they do not learn from new experiences; rather, their “base,” that is, the trillions of data points with which they have been provided, responds to and is reinforced by human feedback. To illustrate, Frosst offered the analogy of a bird versus an airplane; both can fly, but they use vastly different systems and are powered by different properties. In a zinger of a line, heavy with a sense of the student surpassing the teacher, Frosst described AI’s consciousness as “more than a rock but less than a tree.”
The latter part of the conversation turned to the risks and the need for guardrails. Frosst recognized the very real threat of misinformation and the challenge of “bad jobs” amidst growing inequality. Frosst and Hinton discussed the use of AI by “bad actors,” with scenarios of political corruption, widespread surveillance, and AI-invented cyberattacks. Moreover, in a scenario of AI-powered biological weapons, the bottleneck is not technological but access to restricted materials and “wet labs” in which to conduct experiments; that is, presumably, a human decision to authorize.

Geoffrey Hinton, left, and Nick Frosst, right, on stage with the CBC's Nora Young (photo by Johnny Guatto)
Kalpavalle, R. (2025, June 27). https://www.utoronto.ca/news/geoffrey-hinton-discusses-promise-and-perils-ai-toronto-tech-week
