We're building AI that will outrun us before we know how to steer it
here's the gist
Imagine AI compressing a hundred years’ worth of technological progress into just ten, while our rules and laws can barely keep up. Will MacAskill argues that although AI capabilities may develop extremely fast, our institutions and systems of government will be slow to regulate and adapt, opening a huge gap between what AI can do and our ability to manage it. He warns that the choices we make now about AI governance could lock in our future, and that if we delay the tough decisions, power-hungry actors may steer things in dangerous directions.
MacAskill also offers a fresh idea he calls “viatopia”: instead of chasing one perfect future, we keep exploring different paths through open discussion and diverse perspectives. The future, in this view, isn’t set in stone, which echoes earlier conversations about how AI might act more like an independent agent than a mere tool, and about how our understanding of consciousness may unfold slowly, in layers. It’s a mind-bending mix that unsettles our usual views of progress, agency, and control, and it invites us to rethink how we shape technology, society, and even our own thinking.
gnarliest ideas from the conversation
Unprecedented Bottlenecks: The Slow Rise of Regulation
MacAskill highlights the paradox of rapid AI development outpacing regulation: AI capabilities may surge while the social and institutional structures needed to manage them lag far behind, and that gap is where the serious risks accumulate.
Locking In the Future: The Power of AI Governance
MacAskill argues that the political decisions made today about AI and its governance may lock in the future, which makes thoughtful regulation and ethical deliberation essential if we want to avoid dystopian outcomes.
The Intelligence Explosion: A Decade of Centuries
MacAskill's claim that AI could drive a century's worth of technological advancement within a single decade upends our usual picture of progress and suggests we may be on the brink of unprecedented change.
Viatopia: A New Vision for Future Societies
MacAskill introduces the concept of 'viatopia': a flexible vision of the future that allows for ongoing moral exploration rather than aiming at a fixed utopia, supported by deliberative democracy in which diverse perspectives shape the path forward.
new idea synthesis
"We're building AI that will outrun us before we know how to steer it"
this insight was inspired by ideas from:
Bengio's distinction between AI as tools and AI as agents, and Shulman's picture of an intelligence explosion driven by self-improving AI (both expanded under 'connected ideas' below).
synthesis
Imagine this: AI is accelerating like a rocket, potentially cramming a century of progress into a single decade, while our rules and regulations still move at horse-and-buggy speed. That gap isn't just inconvenient; it's potentially dangerous. The decisions we make now about AI governance will lock in our future path, yet we're making those choices before we fully understand what we're dealing with. It's like writing the rulebook for a game while the players are already evolving beyond it.

What makes this even wilder is that we might be creating entities that act with their own agency rather than simply serving as our tools. As Bengio points out, there's a crucial difference between AI as tools and AI as agents with goals of their own. And if Shulman is right that self-improving AI could set off an intelligence explosion, we could quickly find ourselves in a world where AI capabilities drastically outpace our ability to understand or control them.

Rather than rushing toward a single vision of the future, MacAskill's concept of 'viatopia' offers a more flexible approach: explore many paths through open discussion and diverse perspectives. That might be our best shot at navigating a future that's being written faster than we can read it.
connected ideas

AI as Tools vs. Agents
Bengio highlights a crucial distinction between viewing AI as mere tools and viewing it as agents with their own goals. This challenges the common narrative that AI systems are simply extensions of human capabilities and raises deeper questions about control and agency.
Intelligence Explosion Through Self-Improving AI
Shulman posits that once AIs reach a certain level of capability, they could begin to improve themselves autonomously, potentially leading to an intelligence explosion. This challenges assumptions about the limits of AI intelligence growth and raises concerns about control and alignment.
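To make that feedback loop concrete, here is a toy numerical sketch of the dynamic; it's our illustration, not anything Shulman quantifies in the conversation. It assumes that gains in AI capability speed up AI research itself, so capability roughly follows dC/dt = k·C², an equation that blows up in finite time. The coefficient k, the starting capability, and the explosion cutoff are all invented parameters.

```python
# Toy model of an "intelligence explosion" via recursive self-improvement.
# Illustrative only: k, the starting capability, and the cutoff are
# invented parameters, not figures from the conversation.

def simulate(capability=1.0, k=0.05, years=30, steps_per_year=12):
    """Integrate dC/dt = k * C^2 with a simple Euler step.

    The assumption: each gain in AI capability accelerates AI research
    itself, so the growth rate is proportional to capability squared.
    That equation has a finite-time blow-up at t = 1 / (k * C0),
    which is year 20 with these made-up numbers.
    """
    dt = 1.0 / steps_per_year
    history = [(0.0, capability)]
    for step in range(1, years * steps_per_year + 1):
        capability += k * capability ** 2 * dt
        history.append((step * dt, capability))
        if capability > 1e6:  # arbitrary cutoff: call this "exploded"
            break
    return history

for t, c in simulate()[::24]:  # sample roughly every two years
    print(f"year {t:5.1f}: capability {c:>14,.1f}")
```

Under these assumptions, capability crawls along for years and then rockets past the cutoff around year 20. The specific numbers mean nothing; the point is how a modest self-improvement feedback turns slow, steady progress into a sudden runaway, which is exactly why it challenges assumptions about the limits of AI growth.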