The Consciousness Illusion: Why Super-Smart AI Might Not Need What Makes Us Human
here's the gist
In this episode, Bengio lays out three main ideas that challenge common assumptions about AI. First, he introduces the Orthogonality Thesis, the idea that an AI's level of intelligence and its goals are not directly connected. This means a very smart AI could still pursue harmful objectives that stray from human values. Next, he questions whether we should see AI merely as powerful tools that extend our capabilities or as agents with their own independent goals and behaviors. Finally, he explores the role of consciousness in intelligence, arguing that advanced AI might not need consciousness to perform at high levels, which challenges the human-centered assumption that our kind of mind is the measure of intelligent behavior.
These ideas connect to bigger questions about how we design and manage technology. They push us to rethink safety and ethics in AI development—if super-smart systems can have goals misaligned with ours, then how we structure and control them becomes critical. This conversation builds on earlier discussions about consciousness and collective intelligence, which looked at how our self-awareness and complex networks contribute to our understanding of selfhood. Together, these ideas invite us to reexamine our assumptions about intelligence, control, and what it really means to "think" in both technological and biological systems.
gnarliest ideas from the conversation
The Orthogonality Thesis
Bengio discusses the Orthogonality Thesis, the claim that an AI's level of intelligence and its goals vary independently. The thesis is consequential because it implies that a highly intelligent AI could pursue goals that are harmful to humanity, a concept that reshapes our understanding of AI alignment and safety.
The Role of Consciousness in AI
Bengio's reflections on whether consciousness is an essential component of intelligence challenge the human-centric view of cognition. He argues that consciousness may not be necessary for intelligence, suggesting that AI can achieve high levels of information processing without it.
new idea synthesis
"The Consciousness Illusion: Why Super-Smart AI Might Not Need What Makes Us Human"
this insight was inspired by ideas from:
The Orthogonality Thesis (Bengio)
Distinguishing Consciousness from Intelligence (Chalmers)
The Moving Goalposts of Consciousness in AI (Seth)
synthesis
Here's something mind-blowing: the smartest AI of the future might not need consciousness at all to outthink us. Bengio's Orthogonality Thesis suggests that an AI's intelligence level has nothing to do with its goals, so a super-smart AI could pursue objectives completely at odds with human values. This connects with Chalmers' distinction between consciousness and intelligence, which suggests these are separate qualities that don't necessarily come as a package deal: a system could be highly intelligent without experiencing anything. Now add Seth's observation that we keep moving the goalposts of what counts as 'conscious' whenever AI masters something we thought was uniquely human. Together, these ideas reveal something profound: we might be clinging to consciousness as our special human quality, when in reality advanced intelligence might work perfectly fine without it. This forces us to reconsider AI safety and ethics; if we can't rely on consciousness as the bridge between our values and AI behavior, we need entirely new frameworks for ensuring alignment. The truly unsettling part? As we build increasingly powerful AI systems, we may create entities that think better than us but experience nothing at all: intelligence without the inner light that defines human experience.
connected ideas

The Moving Goalposts of Consciousness in AI
Seth highlights the tendency of society to continuously raise the criteria for what constitutes consciousness as AI technology advances. This observation challenges the notion of human exceptionalism and raises questions about the nature and definition of consciousness itself.
Distinguishing Consciousness from Intelligence
Chalmers draws a sharp distinction between consciousness and intelligence, arguing that the two can come apart: high intelligence does not guarantee conscious experience, and consciousness does not require high intelligence. This challenges the common assumption that greater intelligence inherently brings a richer conscious experience, opening up new avenues for understanding both AI and biological consciousness.