Why Superintelligence Strategy Gets AI Governance Wrong
A deep dive into why AI governance frameworks like MAIM fail — and why superintelligence can’t be contained.

Governments think they can control artificial intelligence.
They can’t.
The authors of Superintelligence Strategy propose a grand vision for AI governance.
They argue that states can prevent destabilizing AI development through a three-pronged strategy: deterrence (Mutual Assured AI Malfunction, or MAIM), nonproliferation, and competitiveness.
In their view, this mirrors Cold War nuclear strategy — sabotage, chip restrictions, and military superiority will keep AI breakthroughs in check.
This framework rests on a shaky foundation.
It assumes AI development is a state-driven, hardware-dependent, and monolithic pursuit.
In reality, it’s corporate-led, software-dominated, and decentralized.
Their approach ignores the messy, unpredictable nature of AI proliferation. This essay challenges key assumptions made by the authors of Superintelligence Strategy.
Superintelligence Strategy is written by Dan Hendrycks, Eric Schmidt, and Alexandr Wang, and is available at www.nationalsecurity.ai.
Why Sabotage Won’t Stop Superintelligence

The MAIM doctrine suggests that any state aggressively developing AI will be deterred by the threat of sabotage — cyberattacks, supply chain disruptions, and, if necessary, kinetic strikes. But this assumes that AI development is easily detectable, like a nuclear enrichment facility or missile test site.
That’s not how AI works. A breakthrough model might not need a sprawling government facility or a secret military bunker.
It could be trained across distributed cloud infrastructure, behind closed doors at a corporate lab, or even in an academic research setting.
AI projects are not nuclear silos — they can be hidden in plain sight.
And what happens when a state does succeed in developing a runaway AI? The framework assumes other powers will be able to react before it reaches a dangerous threshold.
That’s an enormous gamble. A nation (or private actor) could quietly develop and deploy a superintelligence before rivals even realize what’s happening.
The Flaws of MAIM: AI Deterrence is a Losing Game

The authors argue that AI deterrence will function similarly to nuclear deterrence, but this assumption ignores the fundamental differences between AI and nuclear technology.
Their argument hinges on the idea that sabotage is cheaper than AI development — that cyberattacks, insider threats, and targeted disruption can prevent a nation or private actor from achieving AI dominance.
This presupposes a level of surveillance, enforcement, and coordination that does not exist.
AI is more resilient than nuclear infrastructure. A nuclear facility is a fixed target — but an AI model, once developed, can be copied and reproduced across millions of servers worldwide.
You can bomb a missile silo. You can’t bomb a GitHub repository. Unlike nuclear weapons, AI is a knowledge-based technology — once leaked, it cannot be “unmade.”
AI proliferation is not a centralized arms race — it is a decentralized, accelerating global movement.
Who Really Controls AI? Corporations vs. The State

Governments may be worried about AI, but they’re not always the ones building it.
Private companies — OpenAI, Google DeepMind, Anthropic, Meta — are leading the charge.
These firms operate across borders, employ international researchers, and make strategic decisions based on profit and market forces, not national security concerns.
The MAIM framework treats AI as a state-controlled arms race, but in the U.S. and most of the West, it’s actually a corporate arms race.
Governments struggle to regulate AI companies within their own borders, let alone predict or disrupt what’s happening in China, Europe, or open-source communities.
China, however, is different.
As an autocratic, monolithic entity, China is throwing its full weight into AI development and implementation across its government, military, and economy.
Unlike the fragmented, market-driven approach of the U.S., China has clear strategic AI goals and the ability to rapidly enforce adoption at scale.
- The Chinese Communist Party (CCP) directly oversees and influences AI research, ensuring it aligns with state priorities.
- AI is being integrated across every sector, from surveillance and military strategy to economic planning and state propaganda.
- State-backed firms like Baidu, Tencent, and Alibaba are required to share breakthroughs with the government, giving China a level of coordination and oversight that Western governments simply do not have.
China’s government-led AI development model allows it to move faster than the West in AI deployment, but its reliance on Western semiconductor access remains a constraint.
The U.S. negotiates with private corporations; China commands them. The MAIM framework assumes states can check each other’s AI ambitions — but what if one state is operating at a fundamentally different level of control?
Superintelligence: The AI Race No One Can Afford to Lose

The authors of Superintelligence Strategy argue that AI will redefine military power, disrupt nuclear deterrence, and create dominant economic and cyber capabilities.
They acknowledge the possibility of a state achieving strategic monopoly through AI, but they do not go far enough in recognizing how irreversible and absolute such a lead would be.
- Superintelligence will not be just another strategic asset — it will be a total power shift.
- The authors frame AI as an escalating arms race, but superintelligence does not escalate — it explodes.
- Unlike nuclear weapons, which required continuous geopolitical balancing, superintelligence is a singularity — once a state or entity develops it, it will rapidly self-improve and render all competitors permanently obsolete. The first-mover advantage in superintelligence is unlike anything in history.
The authors assume nations will have time to react to AI breakthroughs.
They won’t. Once an AI surpasses human intelligence, it will self-improve at an accelerating rate. At that point, intervention will be impossible.
But even before that, AI will fundamentally shift power dynamics.
States with superior AI capabilities will gain asymmetric advantages in cyberwarfare, intelligence operations, and economic decision-making long before we reach superintelligence.
Superintelligence is not a weapon — it is the end of strategic competition.
The authors compare AI to nuclear weapons, suggesting that deterrence and mutual competition will shape its trajectory.
This misunderstands the nature of intelligence explosion.
Once a nation or company crosses the threshold, there will be no “second place” — only the entity that controls the AI and everyone else.
The authors believe AI dominance will be contested, managed, and deterred through sabotage and military countermeasures.
But superintelligence is not a conventional arms race.
It is the last race. The assumption that governments will maintain effective restrictions on AI progress is pure wishful thinking, given competitive pressures, private-sector acceleration, and the near-impossibility of monitoring AI development across decentralized networks.
The first-mover in superintelligence will dictate all future technological, economic, and military progress — forever.
AI No Longer Needs Supercomputers — Here’s Why That Matters

The paper proposes strict chip controls — treating AI hardware like enriched uranium, restricting high-end GPUs, tracking exports, and embedding geolocation locks.
The assumption is that without top-tier chips, bad actors cannot train dangerous models.
But that’s a short-term fix, not a long-term solution.
AI models are becoming far more compute-efficient. Breakthroughs in model compression, low-rank adaptation, and transfer learning mean future models won’t require cutting-edge chips.
And older chips still work! Open-weight models like Meta’s Llama 2 already run on consumer-grade GPUs. You don’t need a $500 billion data center to fine-tune an effective AI.
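To make that concrete, here is a minimal sketch of low-rank adaptation (LoRA) fine-tuning using Hugging Face’s transformers and peft libraries. The checkpoint name and hyperparameters are my own illustrative assumptions, not anything taken from Superintelligence Strategy; the point is simply how little hardware the workflow presumes.

```python
# Minimal sketch: LoRA fine-tuning of an open-weight model on a single
# consumer GPU. Checkpoint name and hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; may require access approval

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer VRAM
    device_map="auto",          # requires the accelerate package
)

# LoRA trains small low-rank adapter matrices instead of the full model,
# cutting trainable parameters from billions down to a few million.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

On a single consumer card with roughly 16 to 24 GB of VRAM, adapter training of this kind is routine. No export-controlled cluster is required.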
China is stockpiling restricted chips and investing billions in domestic semiconductor production.
And here’s the biggest flaw: even if every next-gen chip were locked down, AI progress would continue.
Software always finds a way to optimize around hardware constraints.
The Limits of Compute Control

The authors claim compute access is the primary bottleneck for AI progress, arguing that AI chips should be treated like uranium in nuclear weapons — heavily restricted, tracked, and controlled.
But this analogy does not hold up.
The U.S. has restricted NVIDIA’s A100 and H100 chips, but China has responded by stockpiling banned GPUs and developing domestic alternatives (e.g., Huawei’s Ascend 910B AI chips).
Black market chip smuggling is thriving, and enforcement is nearly impossible at scale.
Compute control will not prevent proliferation. Open-source AI models are already capable of powerful reasoning and deception.
Even if next-gen chips were restricted, distributed training and algorithmic improvements would allow dangerous AI models to emerge anyway.
The authors assume that without high-end chips, dangerous AI progress will stall — but history shows that when powerful knowledge is at stake, new pathways emerge.
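As a rough sketch of what “distributed training” means in practice, the PyTorch pattern below splits a training loop across many ordinary processes or machines. The toy model and launch command are my own illustrative assumptions; the same pattern scales to real networks spread across commodity hardware.

```python
# Minimal sketch: data-parallel training split across ordinary machines
# with torch.distributed. The toy model stands in for a real network.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, and MASTER_ADDR for each process,
    # so the default rendezvous just works.
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU nodes
    rank = dist.get_rank()

    model = torch.nn.Linear(512, 512)  # stand-in for a real network
    ddp_model = DDP(model)
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 512)  # in real use, each rank reads its own data shard
        loss = ddp_model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across every rank here
        optimizer.step()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nproc_per_node=4 train.py`, every process computes gradients on its own slice of data and the all-reduce keeps the model copies in sync. Nothing in the mechanism cares whether the nodes sit in one sanctioned data center or in a thousand unremarkable ones.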
AI Nonproliferation Is Impossible — Here’s the Proof

Superintelligence Strategy outlines a three-pronged nonproliferation strategy:
- Compute Security: Tracking AI chips and preventing unauthorized access.
- Information Security: Preventing leaks of AI model weights and research.
- AI Security: Embedding safeguards into models to restrict harmful capabilities.
This sounds good in theory. In practice, it’s impossible to enforce.
Information security will fail.
AI research is decentralized. Leaks are inevitable.
Just as military secrets, nuclear blueprints, and cyberweapons have repeatedly been leaked or stolen, the first truly dangerous AI model weights will inevitably end up online.
AI security measures can be bypassed.
“Refusal training” and content moderation in AI models are already being jailbroken by independent researchers.
A superintelligence would easily override any pre-programmed ethical constraints.
Open-source AI makes nonproliferation impossible.
Unlike nuclear weapons, which require access to enriched uranium, open-weight AI models can be replicated indefinitely and shared globally in seconds.
Governments have never been able to contain the spread of powerful knowledge. Once a technology reaches a certain level of openness, control is lost forever.
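To see how different this is from fissile material, consider what “sharing a model” actually takes. A minimal sketch using the huggingface_hub library (the repository name is an illustrative assumption) copies every weight file of an open-weight model to local disk in one call:

```python
# Minimal sketch: replicating an open-weight model end to end.
# The repository name is illustrative; any public open-weight repo works.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="meta-llama/Llama-2-7b-hf")
print(f"Full model weights now live at: {local_dir}")
# From here the files can be mirrored, torrented, or re-hosted at will,
# which is why "recalling" leaked weights is not a meaningful operation.
```

There is no enrichment step, no delivery system, no physical signature. The marginal cost of the millionth copy is bandwidth.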
AI Treaties Won’t Work — The Global Arms Race Has Already Started

The authors believe that international cooperation on AI safety is possible, arguing that global agreements will prevent reckless AI development.
This assumes a level of trust and restraint that has never existed in great-power competition.
The AI arms race is already underway. The U.S. and China are not negotiating AI restrictions — they are escalating their investments.
Private AI companies are also racing for dominance, beyond the reach of governments.
Verification and enforcement are impossible.
Nuclear treaties worked because nuclear materials and missile sites were visible. There is no effective way to verify whether another nation is secretly developing dangerous AI.
AI regulation will collapse under competitive pressure. If one country believes its rival is cheating, it will abandon safety regulation. Even if nations sign an agreement, private actors, rogue states, and non-state groups will continue AI development unchecked.
Even if world powers reached an AI safety agreement, enforcement would require constant and intrusive surveillance of corporate, academic, and independent AI research efforts worldwide.
That level of control isn’t feasible in democratic societies and is extremely difficult to maintain in authoritarian ones. This lack of enforceability means any stability framework would be obsolete before it could be meaningfully implemented.
AI Proliferation is Unstoppable: Here’s Why

AI is software. Software scales. Software leaks.
The authors want a Cold War playbook for AI — but containment is not an option. A single open-source breakthrough, a rogue researcher, or a private company racing ahead could flip the entire power dynamic overnight.
The assumption that sabotage, chip controls, and deterrence will keep AI contained completely ignores reality.
The paper proposes that international AI cooperation — through MAIM and global governance — will prevent catastrophic escalation.
This assumes that:
- States will recognize the risks and hold back.
- Verification and transparency will keep AI development in check.
- AI arms control agreements will hold.
None of these are likely. AI is not a containable technology like nuclear weapons. Geopolitical competition will always override cooperation.
The U.S. and China are already locked in an AI arms race.
The idea that both sides will voluntarily slow down or impose mutual restrictions is naïve.
Past arms control agreements show the flaws in this approach. Cold War treaties worked because nuclear weapons were highly centralized. AI is not. There is no single “button” to prevent proliferation.
Governments will pretend to cooperate while quietly accelerating development.
Private actors will work outside the system.
The first truly powerful AI will emerge before any stability framework is fully in place.
No One Can Stop AI — The Only Question Is Who Gets There First

AI proliferation is moving faster than governments can regulate it.
The first state or private actor to develop superintelligence will set the course of history.
No sabotage doctrine, chip restriction, or international treaty will stop it.
The question isn’t whether AI proliferation will happen.
The question is: who will be first?
What do you think? Is AI governance doomed to fail, or is there a way to contain superintelligence? Let’s discuss.
About the Author

I write about the intersection of artificial intelligence, economics, and governance — exploring the real-world implications of emerging technologies on power, policy, and society. My work critically examines AI regulation, superintelligence risks, and the accelerating arms race between corporations and states.
When I’m not analyzing AI policy, I’m deep in data, music, or a good book. You can find more of my work here on Medium, where I break down complex technological debates into sharp, engaging arguments.
Let’s connect — I’d love to hear your thoughts.
— Lawton

