From Palantir’s precision targeting in Ukraine to Google and Amazon's $1.2 billion 'Project Nimbus' contract with Israel, artificial intelligence (AI) is rapidly transforming modern warfare.
As tech giants secure defense partnerships, the UN Security Council has yet to take a formal position on the militarisation of AI. This asymmetry raises urgent questions about whether international institutions or private companies will set the terms of global security. What follows analyses the UN Security Council's response to the use of AI in the military domain and compares it with the private sector's accelerating role in defense.
AI is fundamentally reshaping how wars are planned and fought. It enables actors to automate intelligence analysis, enhance early warning systems, identify misinformation, and minimise civilian harm. Global military spending on AI is projected to reach $38.8 billion by 2028, nearly a 750% increase from 2022. The United States and China spearhead this transformation, echoing Cold War dynamics.
Yet the eagerness to integrate AI into defense strategies is met with concerns about its unpredictable consequences. Will AI systems trained on human data reproduce existing biases? Should lethal autonomous weapons systems (LAWS) make life-or-death decisions without human oversight? And what happens when terrorist groups exploit AI for cyberattacks? The answers may ultimately be shaped by those most willing to act first, regardless of ethical or legal norms.
Despite growing worry over the weaponisation of AI, the UN Security Council has not passed a resolution on the issue. During its two debates, in July 2023 and December 2024, and an informal Arria-Formula meeting, most member states acknowledged the urgency of establishing a regulatory framework grounded in international law, but they disagreed on its terms. As the UN body charged with maintaining international peace and security, the Security Council is the natural forum for military AI governance. Yet its continued gridlock illustrates how geopolitical divisions paralyse international governance on critical issues.
Among the five permanent members, preexisting divides have produced stalemate. China publicly supports cooperation on responsible AI use and opposes military hegemonies. In practice, however, its position is more nuanced. China is actively modernising its military with AI-enabled weaponry, and its voting record reflects this ambivalence: it supported a General Assembly resolution on documenting the broader military implications of AI but abstained on a separate LAWS resolution. Similarly, China participated in the 2023 Summit on Responsible AI in the Military Domain (REAIM) but declined to endorse the 2024 blueprint for action. This pattern signals a reluctance to accept formal constraints on AI development as its technological arms race with the United States escalates.
Russia opposes an international framework initiated by the Security Council, warning of Western bias and arguing that the Council is an inappropriate forum for military AI governance. Its vote against the UNGA resolution mentioned above and its absence from the REAIM Summit underscore this dissent. Nevertheless, Russia is a leading military AI power and advocates geopolitically diverse dialogue among all states and AI institutions, as shown by its launch of the BRICS AI Alliance.
France, the United Kingdom, and the United States champion international cooperation grounded in international law. They supported REAIM's call to action, voted for the aforementioned UNGA resolution, and sponsored another resolution aimed at building consensus on using AI to enhance global security. Like Russia and China, each is investing heavily in military AI systems to maintain geopolitical power. Unsurprisingly, given its arms race with China and its far larger military budget, the United States leads this investment, partnering with Silicon Valley companies.
The ten elected members generally support establishing global norms for military AI governance, provided they are equitable and consistent with international law. This support has manifested through various multilateral initiatives: Ecuador and Japan have advocated dialogue involving academia and the private sector; Switzerland initiated an Arria-Formula meeting on leveraging AI's potential for peace; South Korea hosted the REAIM Summit; and Malta called for close monitoring by the Security Council. Some members have nonetheless expressed reservations, with the United Arab Emirates warning of overregulation and Ghana voicing concern that AI could reinforce existing hierarchies.
The broader UN system has taken initial steps toward AI governance. The New Agenda for Peace and UNGA Resolution A/RES/79/239 reflect growing awareness, calling for transparency and inclusion. Yet these are non-binding and leave militaries operating in an ungoverned digital frontier. As a result, the gap between rapid technological deployment and enforceable global norms widens, leaving battlefield strategy increasingly shaped by corporations.
While the private sector initially restricted military applications of AI and advocated for joint oversight, it has since shifted to capitalise on an unregulated market. Leading tech companies once listed defense-related AI as a harmful capability and prohibited its use, but have quietly reversed these policies. Citing the need for democratic leadership in AI development, tech giants have pivoted to secure lucrative defense partnerships. The U.S. Department of Defense has signed contracts with companies including OpenAI, Anthropic, xAI, Amazon Web Services, and Palantir. Google technology is now used in both Ukrainian and Israeli military operations. Meanwhile, Meta shares intelligence with the Five Eyes alliance and released its open-source Llama model, which China’s Ministry of National Defense uses.
As the UN Security Council deliberates amid gridlock, companies are deploying AI systems in battle. This sets a dangerous precedent in which battlefield ethics are determined by profit margins rather than international law. The question now is whether international institutions can catch up to companies before AI norms become irreversibly entrenched in warfare. If they cannot, those institutions may be supplanted as the legitimate governors of security.