No Winners

The race to build smarter-than-human AI is a race with no winners.

300+ expert warnings
100+ officials on record
30+ congressmen

In May 2023, hundreds of scientists signed an open letter saying that AI poses a very real chance of killing us all.

Signatories included three of the four most cited living AI researchers.

AI labs are racing to build superintelligent AI as soon as possible. As United Nations Secretary-General António Guterres notes:

Alarm bells over the latest form of artificial intelligence, generative AI, are deafening. And they are loudest from the developers who designed it. These scientists and experts have [declared] AI an existential threat to humanity on par with the risk of nuclear war.

Some of the latest alarm bells have included the AI 2027 report, the book If Anyone Builds It, Everyone Dies, and the newly released documentary The AI Doc, which interviewed AI researchers, executives, and national security experts about their extraordinary concerns.

We are in uncharted waters, which makes the risk level difficult to assess. A fairly typical estimate, however, is Jan Leike’s “10–90%” chance of extinction-level outcomes.

Leike has headed the alignment research team at two different top American AI companies: Anthropic and OpenAI.

No normal engineering discipline would accept a 25% chance of killing a person. Yet Anthropic’s CEO talks about a 25% chance of “doom” for the entire world.

At least one of the leading labs is dismissive of the risk: Meta, the company behind Facebook. But the mere fact that “will we kill everyone if we keep moving forward?” is hotly debated among researchers seems like more than enough grounds for governments to internationally halt the race to build superintelligent AI. In any field other than AI, this would be straightforward.

01. Is an international halt politically feasible?

Policymakers seem to be rapidly coming around to this solution. In the UK, over a hundred parliamentarians recently signed a statement saying, in part:

Superintelligent AI systems would compromise national and global security. The UK can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems.

In late 2025, seven former US Congressmen endorsed a Statement on Superintelligence:

We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.

They were joined by retired US Navy Admiral and former Chairman of the Joint Chiefs of Staff Mike Mullen, former US National Security Advisor Susan Rice, and dozens of world-class scientists and political leaders.

In the US, the number of senior officials voicing dire concerns about smarter-than-human AI and loss-of-control scenarios has grown rapidly, and is strongly bipartisan: 30+ US congressmen and world leaders, plus 100+ UK parliamentarians, are on record.

Political feasibility is helped by polling data showing that AI is increasingly unpopular, and that voters are broadly opposed to the race to build superintelligence.

Another factor in favor of feasibility is that many different camps can share the view that a shutdown would be worthwhile, even if they’re skeptical of scientists’ claims that we’re likely to achieve superintelligent AI anytime soon:

  • AI systems short of superintelligence can still pose an existential risk, if they go rogue or are misused, e.g., to produce lethal biological weapons.
  • And many of the other harms and risks posed by AI can be addressed by an international pause on development. Mass unemployment becomes much less likely; society is more likely to be able to adapt to the rise of AI scams, deepfakes, and propaganda; and issues of power concentration become more manageable.

Moreover, an even larger group can agree that it would be valuable to build the legal and physical infrastructure required for a shutdown, because that infrastructure overlaps heavily with what would be needed to meaningfully regulate or monitor AI, or to hit the brakes in the future.

As AI chips continue to proliferate, and as AI becomes more powerful and more integrated into society, regulation will only become more difficult. At the same time, it will become less and less possible to hit the “off switch” on frontier development (or on particular AI developers or data centers), even when there is broad international consensus that this is warranted.

Aggressive international interventions are needed in the near future just to preserve option value. If governments wait and do nothing for five years, it is reasonably likely that the window to act will already have closed.

02. How would a pause even work?

A natural response here is: “The basic argument makes sense—‘smarter-than-human AI is more dangerous than nuclear weapons, so we need to treat it similarly.’ But with nuclear weapons, we have a detailed understanding of what’s required to build them, and it involves huge, easily detected infrastructure projects and rare materials.”

But the latter points are also true for AI, as it’s built today.

The most powerful AIs today rely on extremely specialized and costly hardware, cost hundreds of millions of dollars to build, and run in massive data centers that are relatively easy to detect using satellite and drone imagery.

03. Wouldn’t people just build data centers in secret?

Only a few firms can fabricate AI chips—primarily the Taiwanese company TSMC—and one of the key machines used to make high-end chips is produced only by the Dutch company ASML. This is the extreme ultraviolet lithography machine, which is the size of a school bus, weighs 200 tons, and costs hundreds of millions of dollars. Many key components are similarly bottlenecked.

200 tons: weight of ASML’s EUV lithography machine
10+ years: estimated time to replicate the chip supply chain

This supply chain is the result of decades of innovation and investment, and replicating it is expected to be very difficult—likely taking over a decade, even for technologically advanced countries.

This supply chain, largely located in countries allied to the US, provides a clear point of leverage. If the international community wanted to, it could easily monitor where all the chips are going, build in kill switches, and put in place a monitoring regime to ensure chips aren’t being used to build toward superintelligence.
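
To put rough numbers on what such a monitoring regime would track, here is a minimal sketch of the arithmetic behind compute-threshold monitoring. It is only an illustration: the per-chip throughput, utilization rate, and compute cap below are assumptions chosen for the example, not figures from any actual proposal.

```python
# Illustrative sketch of compute-threshold monitoring arithmetic.
# All numbers are assumptions for illustration, not real treaty figures.

CHIP_PEAK_FLOPS = 1e15    # rough peak throughput of one H100-class chip (FLOP/s)
UTILIZATION = 0.4         # assumed fraction of peak sustained during training
COMPUTE_CAP_FLOP = 1e26   # hypothetical cap on any single training run

def max_training_flop(num_chips: int, days: float) -> float:
    """Upper bound on FLOP a cluster could pour into one run over `days`."""
    return num_chips * CHIP_PEAK_FLOPS * UTILIZATION * days * 86_400

def needs_inspection(num_chips: int, window_days: float = 180) -> bool:
    """Flag clusters that could exceed the cap within one monitoring window."""
    return max_training_flop(num_chips, window_days) > COMPUTE_CAP_FLOP

print(f"{max_training_flop(10_000, 180):.1e}")  # ~6.2e25 FLOP: under this cap
print(needs_inspection(10_000))                 # False
print(needs_inspection(50_000))                 # True: warrants closer scrutiny
```

The point of the sketch is that the inputs to this calculation (chip counts, chip locations, sustained power draw) are exactly the things a hardware-focused monitoring regime is well positioned to observe.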

(Focusing more efforts on the chip supply chain is also a more robust long-term solution than focusing purely on data centers, since it can solve the problem of developers using distributed training to attempt to evade international regulations.)

04. But won’t AI become cheaper to build?

AI isn’t likely to become dramatically cheaper overnight. If it becomes cheaper gradually, regulations can build in safety margins and adjust thresholds over time to match the technology.

Efforts to bring preexisting chips under monitoring will progress over time, and chips have a limited lifespan, so the total quantity of unmonitored chips will decrease as well.
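
As a toy illustration of this dynamic, consider how an unmonitored chip stock shrinks once production of new unmonitored chips stops. Every parameter here is an assumption chosen for the example, not an estimate of real-world stockpiles:

```python
# Toy model of the unmonitored chip stock under a production halt.
# The starting stock, attrition rate, and enrollment rate are all
# assumptions chosen for illustration.

INITIAL_STOCK = 1_000_000   # hypothetical number of unmonitored AI chips
FAILURE_RATE = 0.15         # assumed annual attrition (roughly a 5-7 year lifespan)
ENROLLMENT_RATE = 0.10      # assumed share enrolled in monitoring each year

stock = INITIAL_STOCK
for year in range(1, 11):
    stock *= (1 - FAILURE_RATE)     # chips that fail or are retired
    stock *= (1 - ENROLLMENT_RATE)  # surviving chips brought under monitoring
    print(f"year {year:2d}: ~{stock:,.0f} unmonitored chips")

# Under these assumptions, the unmonitored stock drops below a quarter of
# its original size within about six years.
```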

If we treated superintelligent AI like nuclear weapons, we also wouldn’t be publishing random advances to arXiv. This would mean that the development of more efficient algorithms and more optimized compute happens more slowly. Some amount of expected algorithmic progress would also be hampered by reduced access to chips.

Additionally, a halt can be successful even if it isn’t permanent. Even the most extreme AI risks may be manageable if the world has time to prepare and adjust for them. Buying the world many additional decades makes it much more likely that humanity is scientifically and institutionally equipped to navigate smarter-than-human AI.

05. But wouldn’t banning superintelligence devastate the economy?

It would mean forgoing some future economic gains, because the race to superintelligence comes with greater and greater profits until it kills you. But these profits are worth nothing if we’re dead.

There’s the separate issue that enormous investments are currently flowing into building bigger and bigger data centers, in anticipation that the race to smarter-than-human AI will continue. A ban could cause a shock to the economy as that investment dries up. However, this would be relatively easy to avoid: the US Federal Reserve could lower interest rates so that a high volume of money continues to flow through the larger economy.

06. But wouldn’t regulating chips have spillover effects?

NVIDIA’s H100 costs around $30,000 per chip and, due to its cooling and power requirements, is designed to be run in a data center. Regulating AI-specialized chips like this would have very few spillover effects, particularly if regulations only apply to chips used for AI training and not for inference.

But also, again, an economy isn’t worth much if you’re dead. This line of argument seems to be severely missing the forest for the trees, if it’s not in outright denial about the situation we find ourselves in.

Some of the infrastructure used to produce AI chips is also used in making other advanced computer chips, such as cell phone chips, but there are notable differences between the two. If advanced AI chip production were shut down, it wouldn’t be difficult to monitor fabs and ensure that they only produce non-AI-specialized chips. At the same time, existing AI chips could be monitored to ensure that they’re used to run existing AIs and aren’t being used to train ever-more-capable AI models.

This wouldn’t be trivial to do, but it appears comparable in difficulty to many of the tasks the world’s superpowers have achieved when they faced a national security threat. The question is not whether key actors like the US and China have good options for addressing the threat; it’s whether they wake up in time.

07. Isn’t this totalitarian?

Governments regulate thousands of technologies. Adding one more to the list won’t suddenly tip the world over into a totalitarian dystopia, any more than banning chemical or biological weapons did.

The typical consumer wouldn’t necessarily see any difference, since they don’t run a data center. They just wouldn’t see fundamental improvements to the chatbots they use.

08. But if the US halts, isn’t that ceding the race to authoritarian regimes?

If the US halts unilaterally (and does nothing else), this would just drive AI research to other countries. A unilateral halt could turn out to be a useful stepping stone, or it could turn out to just be a distraction. But in either case, the end goal needs to be an agreement between the US, China, and other key nations suspending development at the international level.

Some template agreements that would do the job have already been drafted. (See modified proposal.)

Governments can create a deterrence regime by articulating clear limits and enforcement actions. It’s in no country’s interest to race to its own destruction, and a deterrence regime like this provides an alternative path.

09. Won’t countries defect from the agreement?

It’s rare for countries (or companies!) to deliberately violate international law. It’s rare for countries to take actions that are widely seen as serious threats to other nations’ security. (If it weren’t rare, it wouldn’t be a big news story when it does happen!)

If the entire world is racing to build superintelligence as quickly as possible, then we’re very likely dead. Even if you think there’s a chance that cautious developers could stay in control as AI comes to dramatically surpass humanity, that chance becomes increasingly remote as the race heats up, because prioritizing safety will mean sacrificing your competitive edge.

If instead a small number of people scattered around the globe are covertly trying to assemble researcher-starved frontier AI projects here and there, while facing enormous international pressure and censure, then that seems like a much more survivable situation.

By analogy, nuclear nonproliferation efforts haven’t been perfectly successful. Over the past 75 years, the number of nuclear powers has grown from 2 to 9. But this is a much more survivable state of affairs than if we hadn’t tried to limit proliferation at all, and were instead facing a world where scores of nations possess nuclear weapons.

When it comes to superintelligence, anyone building “god-like AI” is likely to get us all killed—whether the developer is a military or a company, and whether their intentions are good or ill. Going from zero superintelligences to one superintelligence already seems lethally dangerous.

The challenge is to block the construction of superintelligent AI while there’s still time, not to limit proliferation after it already exists, when it’s far too late to take the steering wheel back.

So the nuclear analogy is pretty limited in what it can tell us. But it can tell us that international law and norms have an enormous amount of practical power.

10. But what about China?

The US has expressed concerns that if it paused, China would just race ahead.

But this concern may turn out to be misplaced. Quoting The Economist:

The accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons.

The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. […]

In July, at a meeting of the party’s central committee called the ‘third plenum’, Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters.

More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should ‘abandon uninhibited growth that comes at the cost of sacrificing safety’, says the guide. Since AI will determine ‘the fate of all mankind’, it must always be controllable, it goes on.

— The Economist

The CCP has made frequent overtures toward international coordination, and has repeatedly expressed openness to slowing down development if warranted. For example:

  • Chinese UN Ambassador Zhang Jun, in 2023: “To ensure that this technology always benefits humanity, it is necessary […] to regulate the development of AI and to prevent this technology from turning into a runaway wild horse. […] We need to strengthen the detection and evaluation of the entire life cycle of AI, ensuring that mankind has the ability to press the stop button at critical moments.”
  • Chinese Vice Premier Ding Xuexiang, in early 2025: “If we allow this reckless competition among countries to continue, then we will see a ‘gray rhino’ […] We stand ready, under the framework of the United Nations […] to discuss the formulation of robust rules to ensure that AI technology will become an ‘Ali Baba’s treasure cave’ instead of a ‘Pandora’s Box.’”
  • Chinese Premier Li Qiang, in mid-2025: “We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible.”

None of this establishes that China would agree to halt frontier AI development. But it establishes that it’s worth diplomatically pursuing the option and seeing what’s possible. “We can’t let China beat us at Russian roulette!” is not a very compelling pitch. Even if you suspect China might be unwilling to make a deal, there’s zero cost to making an attempt.

It seems particularly foolish to write off this possibility when an enforceable agreement would not necessarily require much trust between the US and China. Both parties are likely to be in a good position to verify whether the other is complying, at least so long as AI progress depends on vast computational resources.

The CCP is a US adversary. That does not mean that they are fools who will destroy their own country in order to thumb their nose at the US. Policies that prevent human extinction are good for liberal democracies and for authoritarian regimes, so well-informed people on all sides will endorse those policies.

The question is just whether key decisionmakers will become well-informed about the strategic situation soon enough to matter.

11. What can be done to make government officials aware?

Contact your representative. If you’re in the US, you can use this template as a starting point, revising it to fit your perspective on the issue.