KEY TAKEAWAYS

• The United Nations General Assembly adopted resolution 80/23 in December 2025, which addresses risks arising from the integration of artificial intelligence (AI) into nuclear command, control and communications (NC3) systems.

• Unsurprisingly, all nuclear weapons states either opposed the resolution or abstained, reinforcing longstanding concerns about the difficulty of regulating their use of nuclear weapons.

• Resolution 80/23 has nevertheless established a key platform for ongoing multilateral dialogue on human control and oversight of NC3 systems into which AI is being integrated.

COMMENTARY

On 1 December 2025, the United Nations General Assembly (UNGA) adopted resolution 80/23, put forward by its First Committee on Disarmament and International Security, regarding possible risks arising from the integration of artificial intelligence (AI) into nuclear command, control and communications (NC3) systems.

A total of 118 states voted in favour, 9 against, and 44 abstained. Among the 9 states that voted against the resolution were France, Israel, North Korea, Russia, the United Kingdom and the United States. China, India and Pakistan were among the states that abstained.

These voting patterns are significant: all nuclear weapons states (NWS) either voted against the resolution or abstained. At the same time, non-nuclear weapons states (NNWS) have continued to strive to shape the rules and norms around the use of nuclear weapons. This remains important in an environment where nuclear arsenals continue to grow and arms control treaties signed in a different era of international relations have been eroded.

Although resolution 80/23 is not legally binding, it is nevertheless a significant step towards global governance of the AI–nuclear nexus. By placing the risks arising from AI–NC3 integration on the UNGA agenda, it establishes normative expectations regarding responsible conduct in what has traditionally been an opaque and strategically sensitive domain.

Resolution 80/23 has not emerged in a vacuum. It builds on a growing awareness that AI–NC3 integration poses new challenges for strategic stability. In November 2024, then-US President Joe Biden and Chinese President Xi Jinping agreed that decisions regarding the use of nuclear weapons should remain under human control.

Risks Arising from AI–NC3 Integration

NC3 describes interconnected systems responsible for overseeing nuclear weapons operations. These include systems for situational awareness, planning, decision-making, force management and command execution, among other functions. Within the NC3 context, AI refers to systems trained on large datasets – rather than those based on pre-programmed rules – that produce predictive or generative outputs.

The integration of AI into NC3 systems poses risks requiring urgent attention to preserve strategic stability. The preamble to UNGA resolution 80/23 highlights the concern that AI-enabled decision-making in NC3 systems “could reduce human control and oversight, increasing the possibility of induced distortions in decision-making environments and shortened action and response windows.”

Given that NC3 systems operate within highly sensitive environments, training data are scarce and there are few benchmarks with which to gauge the accuracy and effectiveness of output from AI models integrated within such systems. Even if an AI-enabled NC3 system performs well in simulated scenarios, it may struggle during actual operations. This leaves room for false positives, hallucinations or misclassifications that could have existential consequences.

Beyond data scarcity, AI-enabled NC3 systems are also vulnerable to adversarial manipulation. Poisoning of training data could distort a system’s behaviour, for example by causing it to ignore specific threats. Such manipulation is often subtle and could remain undetected until a crisis emerges. Moreover, since NC3 systems are designed to be isolated and opaque, verifying the integrity of AI integrated within them is difficult, yet such verification is critical to maintaining confidence in their performance.

AI’s ability to rapidly generate output also shortens the window for action and response, potentially diminishing the role of human deliberation and diplomatic de-escalation in nuclear weapons decision-making. Moreover, since an AI model’s recommendations can appear more objective than human assessments, system operators risk over-relying on these recommendations when under pressure.

Unpacking the Resolution and Voting Patterns

Resolution 80/23’s first operative paragraph demands that human control and oversight be maintained over NC3 systems, including where AI has been integrated. It further urges the NWS to publish national policies explicitly affirming this principle. It also stresses the need to develop common understandings and confidence-building measures within the context and mandate of existing disarmament platforms. Finally, it places the risks arising from AI–NC3 integration on the First Committee’s agenda for its forthcoming 81st session in 2026.

On the surface, the pattern of voting for resolution 80/23 reflects familiar divides between NWS and NNWS. The five NWS recognised by the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) – China, France, Russia, the United States and the United Kingdom – along with India, Israel, North Korea and Pakistan, either opposed the resolution or abstained.

The fact that four of the five NPT-recognised NWS opposed resolution 80/23 suggests that even non-binding resolutions can establish normative expectations, which the NWS wanted to guard against. Russia opposed the resolution on the grounds that it was premature to discuss the issue at the UNGA without it first being discussed among the NWS. Meanwhile, France, the United Kingdom and the United States stated that they could not support the resolution since its text did not reflect AI’s potential benefits for NC3 systems. Although China abstained rather than voting against the resolution, its reasoning was similar to that of the other NWS.

Like China, India and Pakistan abstained. Pakistan argued that the resolution did not capture the full range of risks arising from the AI–nuclear intersection beyond NC3 systems. It also highlighted that the resolution overlooked how NWS differ in their practices related to managing NC3 systems and disclosing information publicly about them.

In contrast, the strong support from 118 NNWS, many of which have historically supported humanitarian approaches to nuclear disarmament, demonstrates their determination to regulate the behaviour of NWS. These states recognise that the absence of regulation creates risks for strategic stability, and their engagement with the issue at the UNGA reflects a desire to shape rules and norms in a domain dominated by NWS.

Developing Rules and Norms around AI–NC3 Integration

Resolution 80/23’s adoption by the UNGA marks a significant moment in global governance of the AI–nuclear nexus. Rather than unrealistically seeking to impose direct constraints on the NWS, the resolution instead focuses on institutionalising multilateral dialogue on the risks arising from AI–NC3 integration. This creates an inclusive platform through which norms and confidence-building measures can gradually develop.

Furthermore, by focusing the discussion on human control and oversight, resolution 80/23 puts forward a normative reference for future multilateral discussions, even if it is not legally binding. This is particularly significant since NWS have been consistently reluctant to disclose any details regarding NC3 systems.

By focusing the discussion on human control and oversight, resolution 80/23 puts forward a normative reference for future multilateral discussions. Image credit: UN Photo / Loey Felipe.

Looking ahead, the main challenge will be how to practically implement human control and oversight in the design and operation of NC3 systems. Many questions still need to be considered, such as how human control should be defined, what qualifies as oversight, and how accountability can be ensured when transparency is a challenge. Addressing these questions will require sustained multilateral dialogue, for which resolution 80/23 has laid the first stepping stone.

Maÿlis Mennesson is an intern with the Military Transformations Programme at the S. Rajaratnam School of International Studies (RSIS) and a master’s student in International Affairs at King’s College London. Manoj Harjani is Research Fellow and Coordinator of the Military Transformations Programme at RSIS.
