According to a new paper from the nonprofit Nuclear Threat Initiative (NTI), artificial intelligence in nuclear-weapon systems carries both significant benefits and significant risks. Its adoption also appears inevitable, and as a result, immediate action is needed to balance those benefits and risks while the technology is still maturing.
The report, Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems, was co-authored by Jill Hruby, Under Secretary for Nuclear Security at the U.S. Department of Energy and a former NTI Distinguished Fellow, and M. Nina Miller, a PhD student at the Massachusetts Institute of Technology and former NTI intern. The authors identified two application areas as most likely to take advantage of AI advances in the near term: Nuclear Command, Control, and Communications (NC3) and autonomous nuclear-weapon systems.
“In NC3, AI could be applied to enhance reliable communication and early warning systems, to supplement decision support, or to enable automated retaliatory launch,” the authors wrote. “The implications vary dramatically. Enhancing communication reliability and decision-support tools with AI has recognizable benefits, is relatively low risk, and is likely stabilizing, although it still requires additional technical research to lower risk as well as deeper policy exploration of stability implications to avoid provoking an arms race. AI application to automated retaliatory launch, however, is highly risky and should be avoided.”
They added, “For autonomous nuclear-weapon systems, AI along with sensors and other technologies are required for sophisticated capabilities, such as obstacle detection and maneuverability, automated target identification, and longer-range and loitering capability. Today’s technology and algorithms face challenges in reliably identifying objects, responding in real time, planning and controlling routes in the absence of GPS, and defending against cyberattacks. Given the lack of technology maturity, fully autonomous nuclear-weapon systems are highly risky.”
Given those risks and the potential for instability, the pair argued for an outright ban on fully autonomous nuclear-weapon systems until the technology is better understood and proven. In the meantime, they urged prioritizing research on low-technical-risk approaches and fail-safe protocols for AI use in high-consequence applications, along with careful but open publication of that work; pursuing cooperative research; establishing national policies on the role of human operators and the limits of AI in nuclear-weapon systems; and expanding international discussion of the implications of AI use in nuclear-weapon systems.
“Because of the high potential consequences, AI use in nuclear-weapon systems seems to be proceeding at a slower pace — or perhaps more covertly — than other military applications,” the authors wrote. “Nonetheless, the potential for AI application to nuclear-weapon systems is likely to grow as the military use of AI develops concurrently with nuclear-weapons system modernization and diversification,” they added.