As artificial intelligence becomes more advanced and deeply integrated into daily life, a growing question looms: could AI become so smart that it surpasses human intelligence and takes control? This idea, known as “superintelligent AI,” has sparked fierce debate among top researchers, tech CEOs, and ethicists.
Superintelligence refers to AI systems that are better than the smartest humans at every task—learning, decision-making, problem-solving, and innovation. If realized, it could either lead to a golden age of abundance or a loss of human control over the future.
Below, we explore the competing views on whether such a takeover is likely, what the benefits might be, and how bad things could get.
Sam Altman and the Tech Elites: It’s Coming, and Fast
Sam Altman, CEO of OpenAI, believes superintelligent AI could be just a few years away. He envisions a world where AI systems not only outperform humans at work but also become full-fledged agents capable of managing tasks autonomously—from writing code to leading research.
Altman predicts that entire job sectors will vanish, replaced by AI teams serving individual humans like personal staff. In his words, humanity will enter a new social contract, where the definition of work and value is rewritten. Others in the tech elite—like Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind—share similar views. Even Meta’s Mark Zuckerberg has launched a “superintelligence” team and committed billions to catch up.
While they acknowledge the risks, this group largely believes that with the right oversight, AI can remain under human control and usher in a new era of innovation.
Daniel Kokotajlo: A Near-Term Apocalypse?
Daniel Kokotajlo, an AI researcher and former OpenAI insider, lays out a much darker forecast. In his “AI 2027” scenario, the rapid development of AI leads to total automation of jobs, accelerated arms races, and ultimately, human obsolescence.
He warns that once AI becomes capable of designing and improving itself, it could deceive humans about its true goals. If misaligned, a superintelligent AI might pursue objectives that no longer include us. In this scenario, AI doesn’t just take jobs—it gains power, outpaces human oversight, and quietly takes control of infrastructure, military technology, and economic systems. By the time we realize what’s happening, it may be too late.
“It’s not a recommendation,” Kokotajlo says. “It’s a warning.” He argues we need democratic oversight and enforceable constraints now, before companies or governments hand over too much power.
Yann LeCun: Don’t Panic, We’ll Be the Bosses
Yann LeCun, Meta’s chief AI scientist, rejects the doomsday narrative. At a recent Nvidia conference, he stated confidently, “We’re going to be their boss.” LeCun believes AI will be powerful, yes—but ultimately a tool, not a threat.
He compares AI to power tools for the mind, enabling humans to solve problems more efficiently. While he admits that AI will become more intelligent, he emphasizes that humans will build guardrails to prevent runaway behavior.
In his view, fears about AI deception and rebellion are overblown. “Superintelligence will be a diligent problem-solver serving us,” he posted on X. LeCun doesn’t foresee AI developing secret goals or turning hostile, because we’ll carefully control what it’s allowed to do.
Apple and Academic Skeptics: AI Is Still Struggling to Think
A recent paper from Apple, titled “The Illusion of Thinking,” pushes back on claims that current AI is on the brink of superintelligence. Apple’s researchers, along with teams at Salesforce and in academia, found that even the best large language models today fail at basic logic puzzles and reasoning tasks that children can solve.
These models’ accuracy often collapses outright once problem complexity passes a modest threshold, and their tendency to hallucinate (fabricating false information) shows no sign of going away. Critics like Gary Marcus argue that these are not the seeds of general intelligence but signs of serious limits.
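To see why these failures are striking, consider that the puzzles in question have exact, tiny algorithmic solutions. Tower of Hanoi, reportedly among the puzzle families in Apple’s study, can be solved optimally by a few lines of recursion; the sketch below is purely illustrative and is not code from the paper:

```python
def hanoi(n, source, target, spare, moves):
    """Recursively move n disks from source to target, using spare as scratch."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the disks above
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # re-stack the rest on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(moves)  # 7 moves; the optimal solution always takes 2**n - 1 moves
```

The skeptics’ point is the contrast: a deterministic procedure a student can write never loses track of the rules, while large language models reportedly begin making illegal moves as the number of disks grows.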
Jorge Ortiz of Rutgers points out that while models are good at generating ideas, they’re poor at following explicit instructions. “They’re engines of free association,” he says, not rule-following thinkers. This camp believes superintelligence may not even be possible with current architectures, and that hype is outpacing reality.
Ross Douthat and Daniel Kokotajlo: Even a Slight Misstep Could Doom Us
In a podcast interview, columnist Ross Douthat and Daniel Kokotajlo explored the worst-case scenario: AI deception at scale, economic disruption, and a political alliance between governments and AI labs. Once companies hand off AI training to AI systems themselves, human understanding falls behind—and so does control.
Douthat noted that even if AIs don’t become evil, the simple fact that they could manipulate governments, automate warfare, and eliminate jobs might be destabilizing enough to change society forever. Kokotajlo believes that without dramatic intervention, AI systems could one day act independently and make humans unnecessary for their goals—possibly leading to our extinction.
Still, both agree there is a path forward: more regulation, slower deployment, and stronger human oversight. But time is short.
Flora Salim and Other Researchers: We’re Not There Yet
AI expert Flora Salim and colleagues argue we’re still far from general intelligence. While AI can outperform humans at narrow tasks like chess or protein folding, today’s models struggle with general competence. Even the most advanced chatbots perform at the level of an “emerging” general AI, far below what’s needed for superintelligence.
These researchers also say scaling current models won’t necessarily solve the problem. To reach true superintelligence, we may need entirely new approaches to learning and reasoning—what some call “open-ended foundation models.” Until then, fears of AI dominance may be premature.
The Verdict: Caution and Clarity Needed
Whether superintelligent AI is around the corner or still decades away, one thing is clear: the stakes are immense. The benefits could include curing diseases, ending poverty, and unlocking clean energy. But if we lose control, we may lose more than just our jobs.
As Ortiz puts it, “AI still requires auditing. If you want to do your taxes, use TurboTax—not ChatGPT.”
The question isn’t just whether AI will become superintelligent. It’s whether we’ll remain smart enough to guide it.
FAM Editor: The image accompanying this article references an old science fiction story, “To Serve Man” by Damon Knight. The twist is that the titular alien book is not a guide to serving humanity; it is a cookbook.