Against a backdrop of conflict and global security concerns, 2023 may also prove to have been a pivotal year for automated nuclear weapons systems.

A year that began with chatbots and Artificial Intelligence (AI) as the subjects of major news stories - some with particularly concerning headlines - ended with members of the United States Congress introducing legislation to ban AI systems from nuclear weapons and US President Biden signing an Executive Order on the subject. The issue was even raised in discussions between the United States and China at the Asia-Pacific Economic Cooperation forum, which met in San Francisco in November.

Allied aircraft exercise NATO’s nuclear deterrence capability in exercise Steadfast Noon, 16-26 October 2023. Pictured: a Dutch F-16 Fighting Falcon fighter during take-off. © NATO

We seem to be on a fast track to developing a diplomatic and regulatory framework that restrains AI in nuclear weapons systems. This is concerning for at least two reasons:

  1. AI has real utility for strengthening nuclear deterrence without necessarily expanding the nuclear arsenal.

  2. The rush to ban AI from nuclear defences seems to be rooted in a misunderstanding of the current state of AI - a misunderstanding more informed by popular fiction than by popular science.

The policies of the United States - the NATO Ally with the largest nuclear arsenal - regarding the use of AI in nuclear defence systems will likely set the tone for the other nuclear-capable NATO member states, France and the United Kingdom. This is why misunderstandings about AI, in the US particularly but across the Alliance more generally, must be addressed, and why lawmakers should be urged to proceed more carefully with any proposed legislation. With potential geopolitical benefits to be realised, banning AI from nuclear defences is a bad idea.

Misunderstanding a new science

When people think of AI in the context of nuclear weapons, they may imagine something like the Skynet system from the 1991 film Terminator 2: Judgment Day. In the film, Skynet becomes self-aware and launches a massive global nuclear strike.

Perhaps they think of the 1983 film WarGames and its artificial intelligence system, known as WOPR, or even more niche cinema, like the 1970 film Colossus: The Forbin Project. These films, released in each of the last three decades of the Cold War, depict AI systems capable of independent thought - what is sometimes referred to as Artificial General Intelligence (AGI). The danger they portray is that systems capable of independent thought would be capable of independent objectives and ulterior motives. To be sure, it would be concerning if such systems existed. But they do not; and while researchers are not unanimous on the question, many seriously doubt whether such systems will ever exist.

Works of popular fiction are not always accurate representations of a new science. At its best, fiction can provide a starting point for debate and strategic thought. H.G. Wells' The World Set Free (later republished as The Last War), for example, was one of the first works of fiction about nuclear war; written while nuclear science was in its infancy, it is replete with misunderstandings about concepts like explosive yield and half-life. Nevertheless, Herman Kahn's later work of non-fiction, On Thermonuclear War, takes as its starting point scenarios that one immediately recognises from the plot of Wells' novel. Kahn demonstrated that serious academic thought could begin with a consideration of fictional scenarios, even those with scientific inaccuracies; but arguably his more important work, developed later, was based on empirical evidence - the now ubiquitously cited On Escalation. Scientific accuracy and empirical evidence must similarly be central to our discussions of AI.

The kind of artificial intelligence that is available today is not AGI. It may pass the Turing test — that is, it may be indistinguishable from a human as it answers questions posed by a user — but it is not capable of independent thought, and is certainly not self-aware.

History as precedent – the utility of improved targeting systems

There are myriad roles for AI in our nuclear defences, including AI-based targeting systems. If we assume that AI-based targeting will make nuclear weapons more accurate — that is, more likely to hit what they should hit and not hit what they should not — then what are the geopolitical benefits of its development and deployment? It is useful to revisit historical examples to illustrate how increasing the accuracy of nuclear weapons strengthened US and NATO defences during the Cold War.

In his March 1983 Oval Office address, President Ronald Reagan presented his case for the development of a ballistic missile defence system. One of his key points was that the Soviet Union possessed more nuclear weapons than the US. The Soviet Union did indeed overtake the US in the number of nuclear weapons in the late 1970s, but this was largely because the US had deployed more accurate missile systems like Polaris, Titan II, and Pershing. It was no longer necessary to target a city or military installation with many missiles, and so the US could still effectively deter the Soviet Union and meet its strategic objectives with fewer warheads. The savings achieved by fielding a smaller number of more accurate nuclear weapons freed up valuable defence dollars to develop new systems like the stealth bomber and the cruise missile.

Thanks to the deployment of more accurate missile systems like Polaris, Titan II, and Pershing by the United States in the late 1970s, it was no longer necessary to target a city or military installation with many missiles, and so the US could still effectively deter the Soviet Union and meet its strategic objectives with fewer warheads. Pictured: Pershing II weapon system tested in February 1983. © Wikipedia

The reduction in the overall number of US nuclear weapons in the concluding decades of the Cold War - at a time when defence spending was a substantially greater share of gross domestic product than it is today - suggests that more accurate weapons can mean fewer weapons.
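
The underlying arithmetic helps make this logic concrete. The relations below are the standard single-shot kill probability model from the open arms-control literature, not an analysis of any actual arsenal; the damage criterion and the notion of a "point target" are simplifying assumptions.

```latex
% Single-shot kill probability against a point target:
% CEP = circular error probable, R_L = lethal radius, Y = yield.
P_k = 1 - 0.5^{\,(R_L/\mathrm{CEP})^2}, \qquad R_L \propto Y^{1/3}
% Warheads needed to achieve an overall damage probability P_d:
n \ge \frac{\ln(1 - P_d)}{\ln(1 - P_k)}
```

On these illustrative assumptions, halving the CEP quadruples the exponent in P_k - roughly the effect of an eightfold increase in yield - so the number of warheads n required per target falls sharply, which is the pattern the Cold War record reflects.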

Evidence of how the development of more accurate nuclear weapons influenced US nuclear policy comes from the recently declassified Presidential Directive 59, signed by President Jimmy Carter in 1980. Two salient points in this directive are a request for increased intelligence on targets and a push for what is referred to as a "look-shoot-look" capability - the ability to find a target, hit it, and then assess the strike. Implicit in this approach are the ideas that a nuclear strike should hit its intended target, that the target should have strategic value, and that a form of nuclear carpet bombing which fails to hit an intended target is strategically pointless.
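
As a software analogy, "look-shoot-look" can be read as a simple observe-strike-assess feedback loop. The sketch below is purely illustrative: the function names, the assumed per-strike success probability, and the notion of a machine-readable assessment are hypothetical, and no claim is made about how PD-59 was actually implemented.

```python
import random
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    strategic_value: bool = True  # the first 'look': only targets with value are engaged
    destroyed: bool = False

def shoot(target: Target, kill_probability: float = 0.8) -> None:
    """Commit a single weapon; success is probabilistic (assumed figure)."""
    if random.random() < kill_probability:
        target.destroyed = True

def look_shoot_look(target: Target, max_strikes: int = 3) -> int:
    """Find a target, hit it, then assess the strike - re-engaging only on
    confirmed failure, rather than saturating the target from the outset."""
    strikes = 0
    while target.strategic_value and not target.destroyed and strikes < max_strikes:
        shoot(target)   # 'shoot'
        strikes += 1    # loop condition is the second 'look': damage assessment
    return strikes
```

The design point is the feedback: each weapon expended is conditioned on an assessment of the last, which is the opposite of firing many weapons blindly at the same target.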

In parallel to these developments in nuclear weapons, conventional weapons also became increasingly accurate. The Gulf War (1990-1991) was an important turning point for conventional weapons systems - precision munitions that hit military targets while minimising civilian casualties were front and centre in the press briefings given by US General Norman Schwarzkopf. The benefit of minimising civilian casualties has since led many NATO Allies to ban older, relatively indiscriminate weapons like cluster munitions.

The United States Army First Cavalry’s Multiple Launch Rocket System firing a rocket during the Gulf War. © Steve Elfers / The LIFE Picture Collection, via Getty Images

Future potential – a role for AI-based targeting systems

What form a more accurate, AI-based targeting system for nuclear weapons might take is difficult to estimate at this point, with much of the technology still in development. One can imagine a hypothetical scenario in which a nuclear weapon targets a naval base, but pattern recognition during the weapon's approach determines that the target submarines have already put to sea, and so the missile opts for a redirected underwater strike instead of an atmospheric detonation. This is but one of many possible scenarios to consider involving AI; a minimal sketch of its decision logic follows below.
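
Every element of this sketch is invented for illustration - the sensor inputs, the strike modes, and the decision rule correspond to no fielded or proposed system - but it shows the shape of the terminal-phase choice described in the scenario.

```python
from enum import Enum, auto

class StrikeMode(Enum):
    PLANNED_DETONATION = auto()     # original targeting solution against the base
    REDIRECTED_UNDERWATER = auto()  # fallback if the submarines have put to sea
    ABORT = auto()                  # no valid target remains

def choose_strike_mode(subs_in_port: bool, subs_localised_at_sea: bool) -> StrikeMode:
    """Hypothetical terminal-phase decision driven by pattern recognition on
    approach: strike only what still has strategic value."""
    if subs_in_port:
        return StrikeMode.PLANNED_DETONATION
    if subs_localised_at_sea:
        return StrikeMode.REDIRECTED_UNDERWATER
    return StrikeMode.ABORT  # no valid target: do not expend the weapon pointlessly

# Example: imagery on approach shows empty pens but a localised contact at sea.
print(choose_strike_mode(subs_in_port=False, subs_localised_at_sea=True))
```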

If past is prologue, and the use of more accurate AI-based targeting systems leads to a reduction in the overall number of nuclear weapons, where might such reductions be made? A strategic review will, of course, answer this question. One possibility may be land-based Intercontinental Ballistic Missiles (ICBMs). While it is not currently US policy, former officials, including former US Secretary of Defense William Perry, have argued for eliminating them.

Potential benefits extend beyond nation-state threats. A reduction in the number of nuclear weapons would make it easier to secure the remaining stockpile and to prevent the nightmare scenario of nuclear terrorism, in which poorly secured weapons fall into the wrong hands.

There is, of course, the potential for an arms race in AI-based targeting systems for nuclear weapons. But it is also important to note the role that continued research and development can play in nuclear diplomacy and arms reduction. Returning to the historical precedent: by the time the US deployed intermediate-range Pershing II missiles to Europe, they were already seen as a bargaining chip in the arms reduction talks that would follow. President Reagan's ballistic missile shield was similarly viewed by the Soviets as something that could be bargained over. At the 1986 Reykjavik Summit, President Reagan found Soviet leader Mikhail Gorbachev willing to negotiate away large numbers of nuclear weapons in exchange for a US agreement not to deploy a ballistic missile defence system. Reagan refused to give up his programme and the summit ended without that grand bargain; instead, it was followed by negotiations for the Intermediate-Range Nuclear Forces Treaty, which led to the removal of the Pershing missiles.

US President Reagan and Soviet leader Mikhail Gorbachev sign the Intermediate-Range Nuclear Forces Treaty, in which their two nations agree to eliminate their stocks of intermediate-range and shorter-range (or “medium-range”) land-based missiles, 8 December 1987. © The White House

There are currently serious issues in nuclear diplomacy that must be addressed. Russia has suspended nuclear inspections under the New START treaty and continues to develop next-generation hypersonic missiles. Meanwhile, China has historically preferred to self-limit its nuclear arsenal, rarely opting for formal agreements with the US. The hope of nuclear diplomats today is for a multilateral arms reduction treaty between the US, Russia, and China. With Russia's brutal war in Ukraine and simmering tensions in the Indo-Pacific region, the challenges of developing such a treaty are immense. Should all parties eventually agree to talks, nuclear weapons systems with AI-based targeting can, if nothing else, provide the US and its NATO Allies with a bargaining chip in those negotiations. This is, in effect, "building up to build down" - a strategy well established in nuclear arms negotiations - but it creates an imperative to be the first to invest in developing the most effective systems, not to restrain their development.

Testing of the ‘Peacekeeper’ reentry vehicles: all eight (of a possible ten) were fired from only one missile. Each line shows the path of an individual warhead captured on reentry via long-exposure photography. Photo © Wikipedia

And if, ultimately, it is decided that AI systems should be withheld from nuclear defences, any proposed legislative language must carefully define artificial intelligence - a difficult task for a rapidly developing science. A proposed bill in the US Congress, for example, suggested that systems that "select or engage targets for the purposes of launching a nuclear weapon" should be banned, and defined an "autonomous weapons system" as "a weapons system that, once activated, can select and engage targets without further intervention by an operator." It should be pointed out that since the early 1970s, the US nuclear arsenal has used multiple independently targetable reentry vehicles (MIRVs) - a technology by which a single missile, once launched, dispenses each of its multiple warheads onto its own trajectory without further intervention by a human operator. Expert legal testimony should consider whether such legislative language is so broad that it could unintentionally ban MIRVs, a proven technology that has been at the core of US nuclear defence for decades.
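
The breadth problem can be made concrete by treating the bill's definition as a literal predicate and applying it to a MIRVed missile. The sketch below assumes the broad reading the article warns about - that dispensing warheads onto distinct pre-programmed trajectories counts as "selecting and engaging" targets - which is precisely the interpretive question expert testimony would need to settle; it is an illustration of the definition's logic, not a legal analysis.

```python
from dataclasses import dataclass

@dataclass
class WeaponSystem:
    name: str
    # Broad reading: post-boost dispensing of warheads onto distinct
    # pre-programmed trajectories counts as 'selecting' targets.
    selects_targets_once_activated: bool
    engages_without_operator: bool

def covered_by_proposed_definition(ws: WeaponSystem) -> bool:
    """Literal test of: 'once activated, can select and engage targets
    without further intervention by an operator'."""
    return ws.selects_targets_once_activated and ws.engages_without_operator

mirv = WeaponSystem(
    name="MIRVed ICBM (fielded since the early 1970s)",
    selects_targets_once_activated=True,  # assumed under the broad reading
    engages_without_operator=True,
)
print(covered_by_proposed_definition(mirv))  # True: a legacy system is swept in
```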

Conclusion

With each new decade, fear of the bomb has been entwined with fear of the transistor, the microprocessor, and the silicon wafer, and this has been reflected in our popular culture. Those who developed the nuclear arsenal, its control systems, and deterrence theory were well aware of this and studiously considered the proper role automated systems should play. While it may seem more sophisticated today, the potential risk of combining automated systems with nuclear weapons is certainly not a new problem. Concerns over a rapidly developing technology are legitimate; but concerns over the capabilities of AI systems must be based on the actual science of those systems, not merely their depiction in popular fiction.

AI systems offer an opportunity to strengthen nuclear deterrence by providing a more accurate and capable defensive nuclear response. The purpose of making nuclear weapons more accurate and capable is not to promote their use. Such capabilities instead provide a more credible deterrent to nuclear war and are consistent with classic nuclear doctrine. AI is simply a strategic tool, like nuclear weapons themselves.

Concern over AI should not preclude its use in strengthening nuclear deterrence. Nor should AI be deployed in those systems simply for the sake of deployment; employing AI should serve a strategic objective. Finding the right balance will be difficult because the science is still in its infancy. Expert testimony from the defence and AI communities should be heard - not just from the management of AI companies, but from engineers, academics, military officers, and legal counsel. In a time of major global security concerns and rapidly developing nuclear and AI technologies, legislators and political leaders should proceed carefully with any proposed legislation.