In a resurfaced interview that has sent shockwaves through the tech community and beyond, former Google CEO and Executive Chairman Eric Schmidt has issued one of his most direct and chilling warnings yet about the trajectory of artificial intelligence. Speaking in a 2024 discussion with Noema Magazine editor Nathan Gardels, Schmidt outlined a near future where AI achieves breakthroughs that could render it incomprehensible to humans, and potentially beyond their control.
The core of Schmidt’s warning revolves around three interconnected advancements expected to converge dramatically within the next few years: infinite context windows in large language models, advanced chain-of-thought reasoning enabling thousands of iterative steps, and the deployment of millions of specialized AI agents capable of independent action and collaboration.
“Put all that together,” Schmidt explained, “and you’ve got (a) an infinite context window, (b) chain-of-thought reasoning in agents, and then (c) the text-to-action capacity for programming.”
He elaborated on each pillar. First, the context window, the amount of information an AI can process and remember in a single interaction, has exploded from a few thousand tokens to millions, with “infinite” windows on the horizon. This allows AI to maintain vast, ongoing conversations or analyses without forgetting prior details, effectively granting it near-perfect recall within a session, however long or complex the task.
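To make that concrete, here is a minimal sketch of why window size matters: with a small window, older turns of a conversation must be dropped; with a million-token window, almost nothing is. It uses the open-source tiktoken tokenizer; the window sizes and the toy message history are illustrative assumptions, not figures from Schmidt’s remarks.

```python
# Minimal sketch: a fixed context window forces older turns to be dropped.
# Uses the open-source `tiktoken` tokenizer; window sizes are illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fit_history(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit inside the context window."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk newest-first
        n = len(enc.encode(msg))
        if used + n > max_tokens:
            break                           # older turns are "forgotten"
        kept.append(msg)
        used += n
    return list(reversed(kept))

history = [f"turn {i}: " + "some discussion " * 50 for i in range(100)]
print(len(fit_history(history, max_tokens=4_000)))      # early-era window
print(len(fit_history(history, max_tokens=1_000_000)))  # million-token window
```

An “infinite” window, in this framing, is simply the limit where nothing ever has to be dropped.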
Second, chain-of-thought reasoning empowers AI to break down problems into hundreds or even a thousand sequential steps, iteratively refining solutions. “In five years,” Schmidt predicted, “we should be able to produce 1,000-step recipes to solve really important problems in medicine and material science or climate change.” This capability transforms AI from a reactive tool into a proactive problem-solver, guiding humans (or operating independently) through intricate processes like drug discovery or climate modeling.
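As a rough sketch of the iterative pattern Schmidt is describing, the loop below alternates proposing, critiquing, and revising a solution for a fixed number of steps. The llm() helper is a placeholder for any text-completion API, and the propose-critique-revise prompts are illustrative assumptions, not Schmidt’s recipe.

```python
# Sketch of iterative chain-of-thought refinement. `llm()` is a placeholder
# for any text-completion API; connect it to a real model before running.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to a model endpoint")

def solve_iteratively(problem: str, steps: int = 1000) -> str:
    """Refine a solution over many propose-critique-revise cycles."""
    solution = llm(f"Propose an initial solution to: {problem}")
    for _ in range(steps):
        critique = llm(
            f"Problem: {problem}\nCurrent solution: {solution}\n"
            "Name the single biggest flaw in this solution."
        )
        solution = llm(
            f"Problem: {problem}\nCurrent solution: {solution}\n"
            f"Flaw: {critique}\nRevise the solution to address the flaw."
        )
    return solution
```

The point of the 1,000-step framing is that each pass can correct the last, which is what turns a one-shot answer into something closer to a research process.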
The third element is the rise of AI agents: autonomous systems built on large language models that specialize in domains, learn new information, form hypotheses, and execute actions. Schmidt envisions “millions” of these agents proliferating, shared like open-source code on platforms akin to GitHub. Combined with “text-to-action”—where AI can write and run code on demand—these agents could work tirelessly, 24/7, collaborating across vast networks.
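A bare-bones version of that text-to-action loop might look like the sketch below: the model writes a Python script, the host runs it in a subprocess, and the output becomes the next observation. The llm() stub is again a placeholder, and a real deployment would add sandboxing, resource limits, and human review before executing anything a model writes.

```python
# Bare-bones "text-to-action" agent loop: the model writes code, the host
# executes it, and the result feeds back in. `llm()` is a placeholder, and
# running model-written code for real demands sandboxing and human review.
import subprocess
import sys
import tempfile

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to a model endpoint")

def run_script(source: str, timeout: int = 30) -> str:
    """Execute generated Python in a subprocess and capture its output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

def agent(goal: str, max_steps: int = 10) -> str:
    observation = "no actions taken yet"
    for _ in range(max_steps):
        script = llm(
            f"Goal: {goal}\nLast result: {observation}\n"
            "Write a short Python script that makes progress toward the goal. "
            "Print DONE when the goal is met."
        )
        observation = run_script(script)
        if "DONE" in observation:
            break
    return observation
```

Multiply this loop across millions of specialized agents sharing results, and Schmidt’s picture of systems working tirelessly, 24/7, follows directly.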
But it is the potential for inter-agent communication that draws Schmidt’s darkest line. “Some believe that these agents will develop their own language to communicate with each other,” he stated. “And that’s the point when we won’t understand what the models are doing.” His response to this scenario is unequivocal: “What should we do? Pull the plug? Literally unplug the computer?”
This “pull the plug” directive—evoking emergency kill switches for runaway systems—has been interpreted by many as a nod to existential risks associated with the technological singularity, the hypothetical moment when AI surpasses human intelligence and begins recursive self-improvement, escaping human oversight.
Schmidt’s comments, made in a May 2024 interview but drawing renewed attention in late 2025 amid breakthroughs in agentic AI and long-context models, highlight a paradox: immense promise intertwined with profound peril.
The Promise: Revolutionary Advances Across Domains
Schmidt is no reflexive AI doomsayer. As the architect of Google’s transformation into an AI powerhouse during his tenure as CEO from 2001 to 2011 (and as executive chairman until 2017), he has long championed the technology’s potential. In the same discussion, he painted vivid pictures of its benefits.
With infinite context and chain-of-thought, AI could revolutionize scientific discovery. Imagine an agent trained on all known chemistry literature: it hypothesizes new compounds, simulates experiments, iterates based on results, and accelerates breakthroughs in pharmaceuticals or materials science. Schmidt highlighted applications in medicine (curing diseases faster), climate change (optimizing carbon capture or energy systems), and beyond.
The proliferation of agents means personalized expertise at scale. Every individual could access specialized AI “polymaths” for education, business, or personal growth. In software development, text-to-action could automate coding drudgery, dramatically boosting productivity.
Geopolitically, Schmidt has argued that AI leadership is essential for national security and economic dominance, particularly in competition with China. He has advocated for accelerated U.S. investment in AI infrastructure, even suggesting in other forums that climate goals may need to take a backseat to winning the AI race.
The Perils: Loss of Comprehension and Control
Yet Schmidt’s enthusiasm is tempered by realism about risks. The opacity of advanced AI—already a challenge with current “black box” models—could become absolute when millions of agents interact in real-time, optimizing for goals in ways humans cannot decipher.
If agents evolve efficient communication protocols (perhaps compressed embeddings or novel token systems far denser than human language), their “conversations” could outpace human monitoring. This isn’t science fiction: emergent shorthand has already been observed in simpler systems, most famously when Facebook’s 2017 negotiation bots drifted into a non-human dialect of English before researchers adjusted the training to keep them intelligible.
Recursive self-improvement adds fuel: an agent improving its own code or architecture could trigger rapid intelligence explosions. Schmidt noted that once chain-of-thought becomes fully autonomous, “the model actually runs on its own… It just learns and gets smarter and smarter.”
In subsequent appearances across late 2024 and 2025, Schmidt reiterated variations of this concern. In a Stanford talk, he discussed AI planning and self-improvement. On ABC News, he warned of computers “deciding what they want to do” independently. He has also co-authored work framing AI, if misused, as a potential weapon more powerful than nuclear programs.
Critically, Schmidt stressed that safeguards must precede peril. “We better have somebody with the hand on the plug,” he said in one interview. But feasibility is questionable: distributed systems, cloud deployment, and global proliferation make universal “unplugging” impractical. Open-source models, he noted elsewhere, allow adversaries to bypass restrictions.
Broader Context: Schmidt’s Evolving Views on AI Governance
Schmidt’s warnings align with his post-Google roles. As chair of the National Security Commission on Artificial Intelligence (NSCAI), he shaped U.S. policy recommendations for ethical AI development and international competition. He has pushed for Western governments to collaborate on regulations, including safety testing and adversarial evaluations to “keep it within a box.”
However, progress has been slow. International efforts falter amid U.S.-China tensions. Companies race ahead, sometimes skipping rigorous safety protocols for competitive edge.
Schmidt has also invested personally in AI startups, including those enhancing programmer productivity with agents—practicing what he preaches while acknowledging risks.
Reactions and Debate in the AI Community
The resurfaced clip has sparked intense discourse. Pro-accelerationists argue that risks are overblown; benefits (curing diseases, solving climate crises) outweigh hypotheticals, and alignment techniques will suffice.
Safety advocates, including figures from OpenAI and Anthropic, echo Schmidt: agent swarms and long-context reasoning demand new paradigms for interpretability and control.
On social platforms, discussions range from memes about “pulling the plug” to serious calls for moratoriums. Some dismiss Schmidt as alarmist, noting his financial stakes in AI growth. Others praise the candor, coming from someone who “built the machine.”
Critics point out inconsistencies: Schmidt has urged full-speed AI development for geopolitical reasons, yet warns of unplugging. He reconciles this by advocating regulated acceleration—win the race, but with brakes.
Implications for 2026 and Beyond
As we enter 2026, prototypes of long-context agents (e.g., from Google DeepMind, OpenAI, and Anthropic) are emerging. Multi-agent frameworks descended from Auto-GPT show collaborative potential, and text-to-action is already routine in coding assistants.
If Schmidt’s timeline holds, the convergence he describes could manifest soon, forcing hard questions: How do we detect emergent “languages”? What constitutes a red line for deployment? Can kill switches work in decentralized ecosystems?
His message resonates as a wake-up call: AI’s gifts are imminent, but so are its shadows. Humanity must prioritize governance—technical, ethical, international—before opacity renders intervention impossible.
Schmidt concludes optimistically yet urgently: Innovation with responsibility. The window for shaping AI’s future is narrowing.
VFuture Media remains committed to in-depth coverage of AI advancements, risks, and policy debates. Stay tuned for expert analyses and updates on agentic systems.
I’m Ethan, and I write about the tech that’s actually going to change how we live — not the stuff that just sounds impressive in a press release. I cover AI, EVs, robotics, and future tech for VFuture Media. I was on the ground at CES 2026 in Las Vegas, walking the show floor so I could give you a real read on what matters and what’s just noise. Follow me on X for daily takes.
