Does the AI/Nuclear Weapons Analogy Really Make Sense?
If so, where does it? What are the limitations?
Editor’s Note: Today’s Nukesletter continues on the theme of AI from my last post, but is somewhat different. I was asked recently by the good folks at the Federation of American Scientists and the Future of Life Institute to write a short “think piece” with some broad thoughts on the often-analogized AI-nuclear relationship. That resulted in the piece that follows (shared with permission from FAS), which I had hoped would prove more controversial than it turned out to be at a recent convening on these issues. Perhaps it will inspire more polarized reactions here on Substack. As always, I’m happy to hear from you. The piece was titled, somewhat grandiosely, “What To Learn—And Not Learn—From the Nuclear Age: A Framework for Twenty-first Century AI Thinkers.” It will be published elsewhere soon, I’m told.
As you read, consider that I did keep this shorter than it could have otherwise been. I have another important lesson I want to write on at some point soon: on the experience in the United States with BROKEN ARROWs and BENT SPEARs (nuclear weapons accidents/incidents) and possible lessons for the AI frontier labs today.
The spread of increasingly sophisticated narrow and general-purpose artificial intelligence (AI) technologies—and the possible arrival of artificial superintelligence (ASI)—may end up having catastrophic or existential consequences for humanity.[1] If one believes in this premise, there is a natural and understandable tendency to look for precedents in how humanity managed other periods of sharp technological discontinuity and change. The dawn of the nuclear age roughly eighty years ago presents one such example and is often evoked in contemporary debates on managing the risks associated with AI systems.[2]
AI and nuclear weapons, of course, are fundamentally different things in their nature and effects. Nuclear weapons, dubbed the “absolute weapon” by the American naval strategist Dr. Bernard Brodie,[3] were forms of explosive ordnance hitherto unimaginable that functioned best as political tools. Their arrival transformed the endeavor of military organizations; as Dr. Brodie famously put it, nuclear weapons meant that the “chief purpose” of military establishments would no longer be fighting to win wars, but to “avert them,” with “no other useful purpose.” The core innovation of the bomb, in the mid-1940s, was the packaging of previously inconceivable amounts of explosive power into devices that could be delivered by a single aircraft. Whereas on March 9 and 10, 1945, more than 300 American B-29 bombers had firebombed Tokyo, leaving more than 100,000 dead,[4] nearly five months later, on August 6, a lone B-29 armed with a single nuclear weapon obliterated the city of Hiroshima.[5] Twenty years later, megaton-class thermonuclear weapons, paired with intercontinental-range ballistic missiles, had further changed the picture.
Despite Dr. Brodie’s 1946 diagnosis that the bomb necessitated the end of militaries postured to fight and win wars, the reality of the nuclear age has been more complicated: nuclear-armed states continue to ask much of their military establishments beyond the mere requirements of nuclear deterrence. Humanity’s co-existence with the bomb has also been rendered somewhat more tractable by several mutually reinforcing structures beyond just nuclear deterrence; these include a nonproliferation regime, a system of international verification for the non-diversion of weapons-usable fissile materials, and a network of alliances led by the United States. Negotiated forms of restraint between nuclear-armed states (arms control) have also helped.

Below, I identify and briefly discuss observations from the nuclear age that can provide some utility for framing contemporary debates on AI and human survival. There are, naturally, important limits to the analogies that we can draw, which I also allude to below.
Is there a “secret” of the bomb? Is there a “secret” of AI?
Prior to the first nuclear test in July 1945 (Trinity), it was already well understood by the community of scientists and engineers of the Manhattan Project that there was no essential “secret” of the bomb that, if well-kept, could prevent the spread of nuclear weaponry. The physical principles allowing for nuclear weapons were increasingly part of a body of nuclear physics that would be more widely understood around the world. While elements of nuclear weapons design (including the precise means of machining explosive “pits” and non-nuclear explosive components) were complex, there was no particular reason to believe that determined actors—especially those with the resources of a nation-state—could not succeed in this endeavor. With AI systems—including contemporary large language models and other transformer-based neural networks—the case is similar in some respects: the core architectures and training techniques appear in openly published research, leaving no single “secret” whose protection could prevent diffusion. The dissimilarities, discussed below, lie elsewhere.
What’s the most relevant unit of analysis for preventing proliferation? (Or: “fissile material is not compute”.)
Mother Nature has been somewhat kind to humanity in ensuring that just two isotopes of two elements—one of which does not occur naturally—are suitable for fueling the fissile cores of nuclear weapons. Controlling the spread of nuclear weapons has thus focused less on holding tight the one “secret” of the bomb, and more on ensuring that materials suitable for nuclear weapons are not amassed in significant quantities by a large number of states. The modern nonproliferation verification architecture, for instance, relies on accounting for all relevant nuclear material within a non-nuclear state’s borders and verifies that no material has been diverted to unknown uses, including a possible covert nuclear weapon program.
In the context of AI, while compute is sometimes likened to weapons-usable fissile material, the analogy has serious limitations. Space does not permit a systematic treatment, but consider, for starters, that the physical principles involved in nuclear weapons allow for the delineation (mostly a priori) of what the International Atomic Energy Agency, for instance, considers to be “significant quantities” of enriched uranium or plutonium. Defining an equivalent compute threshold for meaningfully dangerous AI systems is a much less tractable problem: beyond compute, data quality, training methods, alignment, and system architecture matter considerably. Moreover, the centrality of non-state actors (private firms) in the development, manufacture, and distribution of compute globally represents another meaningful difference; nuclear material within weapons programs has invariably been monopolized by states.
Personnel reliability matters.
As AI systems grow more powerful, and particularly as they find applications in military organizations around the world, it will be increasingly important to have in place effective procedures to ensure that the human beings charged with the operation, maintenance, and surveillance of those systems are reliable and capable. In the United States, a so-called Personnel Reliability Program (PRP) exists to vet individuals involved in the nuclear deterrent mission. While the PRP has experienced lapses, the general principle is a sound one for AI integration in sensitive settings. Numerous publicly reported cases of human-AI interactions leading to behavioral changes in the people who interact with these systems suggest that more attention should be given to this matter. Within the nuclear weapons enterprise itself, PRP will need to expand in scope to manage the diffusion of AI systems into nuclear command and control.
When you know, you know—or do you know?
Certain applications of AI technologies are already revolutionizing human affairs, but detecting and responding to the potential arrival of fundamentally transformative general intelligence or superintelligence is far from a straightforward matter. The atomic bombing of Hiroshima, followed by a U.S. statement on the bomb, notified the world that atomic weaponry—bombs “harnessing the basic power of the universe”—had arrived.[6] Later, during the United States’ brief period of nuclear monopoly (1945-1949), intelligence assessments diverged considerably over the timeline for Soviet acquisition of the bomb. When the Soviet Union did carry out its first nuclear test, in August 1949, special American aircraft detected atmospheric radionuclides, but even then, analytical disagreements persisted over whether those samples indicated a weapons test or some other sort of man-made radiological incident. Prominently, key policymakers, including U.S. President Harry Truman, remained in disbelief about the spread of the bomb well past 1949.[7] When an AI breakthrough comparable in its effects (possibly AGI or ASI) first manifests—be it in the United States, China, or elsewhere—the diffusion of knowledge about its existence may not be immediate.
Will diffusion of effects take longer?
We should consider that AI may not have its proverbial “Hiroshima moment.” Instead, as the 2020s already attest, AI technologies will rapidly and iteratively improve while, in parallel, civilian and military organizations proceed with integration. This is one of the strongest reasons not to over-learn lessons drawn from the start of the nuclear age in particular. In this scenario, the diffusion of AI technologies would come to resemble that of electricity, aviation, or even networked computing rather than nuclear weapons—with the effects shaping geopolitical dynamics and warfare over a more protracted period.[8] This does not preclude AI having massively disruptive effects on human economic life and labor markets, of course.
The bomb matters, but so do organizations (or, why it’s about more than alignment).
Some of the most dangerous moments of the nuclear age, including the Cuban Missile Crisis, were not driven fundamentally by the existence of the bomb itself, but by a cocktail of misperception, human psychology, poor communication, misaligned incentives, and organizational pathologies. In the context of AI, this should prompt reflection beyond merely technical safeguards—for instance, engineering human-aligned systems—and toward the organizational safety culture associated with the highest-impact possible deployments of advanced AI systems. Incentives to rush deployment, to cut corners on certification and testing, and broader arms-race-style motivations will make much of this difficult.
Is there a technological “grand bargain” to be had?
The Treaty on the Nonproliferation of Nuclear Weapons (or the NPT) remains a crowning achievement of the nuclear age. Its success in staunching the spread of nuclear weapons owes much to the essential tripartite bargain at its core, split across three pillars. First, states without nuclear weapons join the treaty and perpetually forswear the pursuit of those weapons (nonproliferation). Second, in return for their forbearance, they receive access to peaceful nuclear technologies for their economic benefit (peaceful uses) and submit to international verification that their programs indeed remain peaceful. Third, to address the insecurity spurred by the presence of some nuclear-armed states (defined by the treaty as any state that detonated a nuclear explosive before January 1, 1967), the nuclear-armed states agree to work toward nuclear disarmament.[9]
This bargain is under contemporary stress, but its history should remind us that a delicate balancing of incentives—among wealthy, nuclear-armed great powers, their allies, and resource-poor non-aligned states—can produce powerful governance effects. Similar AI proposals—for instance, capping compute through a treaty mechanism[10]—should look to this history, but finding mutually reinforcing analogous “pillars” that allow for a bargain between the haves, the near-havers, and the have-nots will remain a tall task.
This short essay could continue to considerable length, but for reasons of space, I’ll leave additional observations for another setting. The list above is far from exhaustive, but it represents what I consider a useful starting point for contemplating the lessons and limits of applying humanity’s experience over the eighty-one years of the nuclear age to the coming age of AI. One obvious caveat applies to much of this: what exactly the lessons of the nuclear age are remains strongly contested, with decades-long arguments over key questions about the nature of nuclear deterrence, the consequences of proliferation, the controllability of escalation, and more. As we progress toward a potentially far messier, more complex era in which AI technologies become integrated with all facets of human endeavor, we should anticipate that today’s burgeoning debates about safety, alignment, and integration will persist and evolve.
[1] For reasons of space, these terms are only briefly defined herein. Narrow AI refers to machine-learning systems designed to execute well-defined tasks; general-purpose AI refers to systems capable of open-ended reasoning and complex, multi-part tasks; artificial superintelligence (ASI) refers to a hypothetical system capable of far exceeding human intelligence in every plausible cognitive domain.
[2] Rehman, Iskander. “An Algorithmic Loosening of the Atomic Screw? Artificial Intelligence and Nuclear Deterrence,” (West Point, NY: Modern War Institute), November 11, 2025. https://mwi.westpoint.edu/an-algorithmic-loosening-of-the-atomic-screw-artificial-intelligence-and-nuclear-deterrence/
See ibid. for an extended treatment of nuclear age analogies for AI. Some of the ideas in this short essay draw inspiration from Rehman’s observations.
[3] Brodie, Bernard. “The Absolute Weapon: Atomic Power and World Order” (Manchester, NH: Ayer Company Publishers, 1946).
[4] Rauch, Jonathan. “Firebombs Over Tokyo”, (Boston, MA: The Atlantic, July/August 2002). https://www.theatlantic.com/magazine/archive/2002/07/firebombs-over-tokyo/302547/
[5] “Hiroshima and Nagasaki Bombings”, (Geneva, CH: International Campaign to Abolish Nuclear Weapons, 2019). https://www.icanw.org/hiroshima_and_nagasaki_bombings
[6] Truman, Harry S. “Statement by the President Announcing the Use of the A-Bomb at Hiroshima” (Speech, USS Augusta). August 6, 1945. Harry S. Truman Presidential Library. https://millercenter.org/the-presidency/presidential-speeches/august-6-1945-statement-president-announcing-use-bomb
[7] Wellerstein, Alex. “The Most Awful Responsibility: Truman and the Secret Struggle for Control of the Atomic Age”, (New York City, NY: Harper Publishing, 2025). See Chapter 15.
[8] Ding, Jeffrey. “Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition”, (Princeton, NJ: Princeton University Press, 2024).
[9] Sokolski, Henry, et al. “Fighting Proliferation”, (Montgomery, AL: Air University Press, January 1996). See Chapter 2: Weiss, Leonard. “The Nuclear Nonproliferation Treaty: Strengths and Gaps”. https://irp.fas.org/threat/fp/index.html
[10] Miotti, Andrea. “An International Treaty to Implement a Global Compute Cap for Advanced Artificial Intelligence”, (Preprint, arXiv). November 1, 2023. https://doi.org/10.48550/arXiv.2311.10748


