The artificial intelligence (AI) arms race is afoot and must be won, according to many national security elites. In January, Secretary of War Pete Hegseth announced a strategy to “accelerate America’s military AI dominance” and make the U.S. military “an ‘AI-first’ warfighting force across all components.” Yet there are good reasons to question whether plunging into an unfettered AI arms race will ultimately produce greater safety and security. A winner-takes-all approach will incur massive financial costs and could, over time, increase the risks of AI-related conflicts or calamities. Pursuing more cooperative, diplomatic approaches now may instead help place safeguards on emerging technologies, mitigate escalatory risks, and orient AI toward peace and stability.
Current conflicts foreshadow a future of increasingly AI-powered warfighting. The Israeli military has relied on AI to identify and locate targets for airstrikes in Gaza. Ukrainian and Russian forces have reportedly used AI to pilot swarms of drones over long distances, evade air defenses, and conduct strikes. With Iranian support, Russia is rapidly increasing its production and use of attack drones capable of striking targets at ever greater distances inside Ukraine.
The United States, China, and other countries are now actively racing to develop advanced AI-powered assets for military purposes – contributing to record levels of military spending. This includes expanding fleets of unmanned vehicles (air, land, and underwater) as well as developing more advanced robotic systems. At a military exercise last year, China showcased quadrupedal “robot wolves” capable of firing weapons. A U.S. startup aims to build 50,000 “Phantom” humanoid robots by 2027 to support a range of battlefield tasks.
At the strategic level, the world’s largest militaries are pursuing greater roles for AI in command-and-control and decision-making – possibly even integrating AI into aspects of nuclear weapons management systems. Proponents argue that this is necessary for threat detection, deterrence, decision-making, and execution in an increasingly fast-paced, multi-domain environment. But we cannot be confident that AI will facilitate judicious decision-making in crises. AI systems inherit biases from how their algorithms are designed, and a recent study of military war games with AI-supported decision-making found a bias toward aggressively escalatory options.
Expanding military reliance on AI systems raises ethical, legal, and practical concerns. These systems could fuel escalatory patterns of violence while reducing human control and accountability. A recent Human Rights Watch report outlines how autonomous weapons systems – which would select and engage targets without human intervention – threaten fundamental tenets of international human rights law: the rights to life, peaceful assembly, privacy, and remedy, as well as the principles of human dignity and non-discrimination.
In the face of these concerns, support for regulation is growing. More than 120 countries, the UN Secretary-General, the International Committee of the Red Cross, scores of technology leaders, and over 250 civil society organizations (part of the “Stop Killer Robots” campaign) have endorsed the goal of an international treaty to prohibit and regulate autonomous weapons systems and preserve “meaningful human control” over the use of force. Regrettably, a small group of powerful countries – most of them heavily invested in these systems, including the United States – opposes discussion of such a treaty as premature or unnecessary.
While powerful countries may chafe at new limits, they can surely recognize a shared interest in avoiding unintended escalations or disasters. If countries are not yet willing to commit to a treaty, they can still commit to initial, voluntary steps on a national or bilateral basis to increase safeguards. This could include deterrence and non-proliferation measures proposed by some technology leaders to preserve rational decision-making, augment information security, and prevent rogue actors from accessing AI capabilities.
At the same time, more attention must be paid to how AI can be deliberately used for peace – whether by preventing wars or helping to end them. Civil society organizations have begun to explore ways AI can be designed and structured to reduce social harms and support peacebuilding. Fellow authors in this Peace Policy issue identify some promising tools and approaches; these efforts require significantly more financial support to be scaled for impact.
Ultimately, we must ask what we are designing algorithms and automated systems for: preparing for a future of war, or creating the conditions for a future of peace? AI may well help humanity limit war-related casualties, deter future great power conflicts, and identify new pathways for cooperative problem-solving. The opposite is equally possible. Which future we forge will depend on how powerful countries orient these technologies, the controls they put in place, and the human structures built around them. That will require far more cooperation and dialogue than is currently envisioned by a narrow focus on “winning the AI arms race.”
Peter J. Quaranto is a visiting professor of the practice and distinguished global policy fellow for 2025-2026 at the University of Notre Dame’s Keough School of Global Affairs. Quaranto concurrently serves as a senior fellow for the future of peace and security with the Alliance for Peacebuilding.