University of Notre Dame
Kroc Institute for International Peace Studies

The power of artificial intelligence (AI) is palpable from Bangladesh to Brussels, and from Syria to Syracuse. In this issue of Peace Policy, authors offer glimpses of how AI is already reshaping the conditions for peace and conflict. AI is altering how people relate to each other, often amplifying the worst aspects of humanity, like inequality and discrimination, while hampering the best. At the same time, AI is being used in promising – albeit nascent – ways to strengthen early warning, civic engagement, and crisis response. Peacebuilders must become savvy about the good, bad, and ugly of AI, and become more engaged in ongoing deliberations about how AI is governed. The AI policies being adopted now will have a profound influence on the conditions and constraints for future peace.

To understand AI’s impact on peace and conflict, it is important to understand how AI differs from a search engine. A search engine helps find information, producing a list of results of possible relevance to a query. An AI chatbot answers a question by digesting vast amounts of text (its training data) and identifying patterns in how words and ideas relate to each other. It then predicts the most relevant answer to a question by ranking information using math. AI’s algorithms shape what people see first, what gets amplified, and what fades from view. This ranking matters because it acts like a curtain, revealing some information while hiding the rest. In a world that runs on information, this has vast impacts.
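The ranking described above can be illustrated with a deliberately simplified sketch. This is a toy, not a real language model: the candidate words and their relevance scores below are invented for demonstration. It shows the basic math of converting scores into probabilities and ranking candidates, so that one answer is surfaced while the others fade from view.

```python
import math

def softmax(scores):
    """Turn raw relevance scores into probabilities that sum to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

# Invented scores for words that might follow the phrase "lasting ...":
candidate_scores = {"peace": 2.1, "conflict": 0.7, "impression": 0.3}

# Rank candidates from most to least probable.
ranked = sorted(softmax(candidate_scores).items(),
                key=lambda kv: kv[1], reverse=True)

# The top-ranked word is what the system presents first.
print(ranked[0][0])
```

A real model scores hundreds of thousands of candidates using billions of learned parameters, but the principle is the same: whatever the math ranks highest is what the user sees.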

Eskandar Ataallah observes that algorithmic recommendation systems shape social media in Syria’s post-war transition, reinforcing conflict narratives and divisions online. Mohammad Rifat describes how AI-mediated platforms in Bangladesh fail to protect the voices of marginalized groups, silencing people too afraid to share their views online. In both instances, AI algorithms alter whose voices and what ideas get heard. This skewing of the information environment poses significant challenges for efforts to promote sustainable peace in divided societies. 

Peacebuilders must also confront AI’s growing role in militarization. Peter Quaranto describes how the global AI arms race is pushing states to adopt autonomous drones that use algorithmic targeting. These AI systems rank information about potential threats. They may literally decide who lives and who dies, potentially without a human in the decision-making loop. The widening use of these systems could fundamentally change the nature of warfare.

Sitting in Brussels, where European governments are meeting to discuss AI policy, Lena Slachmuijlder broadens this issue beyond combat zones to how humans handle a wider range of conflicts. She observes that AI assistants designed to “help” offer people comfort and affirmation in ways that are replacing human interaction. In real relationships, people disagree and have their own interests and needs that require negotiation and dialogue. AI assistants designed for human friendship might undermine the human skills needed to talk with people who are different from us.

AI could instead be designed to teach humans empathy and communication skills. Drawing on the extensive digital peacebuilding work of Build Up, Julie Hawke offers examples of how AI could serve peace goals. She describes how AI monitoring of ceasefires can identify patterns humans might not detect on their own. AI can also synthesize large amounts of public input that human teams would not be able to process. Hawke believes that instead of talking about AI systems with “humans in the loop,” we should be pursuing peacebuilding systems with “AI in the loop.”

My own research on AI and peacebuilding explores prosocial tech design and governance. My colleagues at the Council on Technology and Social Cohesion and I propose a set of policy recommendations: tiers of prosocial technologies, research access to study platforms’ impacts on society, and market incentives for designing prosocial technologies.

I also served on the Scientific Panel on AI and Peacebuilding, which developed policy recommendations for leveraging AI to support peacebuilding. We urged policymakers to:

  • Center fundamental human rights and “do no harm” principles in every stage of AI design and deployment.
  • Fund evidence-based initiatives and make data on both successes and failures widely accessible.
  • Support smaller, locally relevant language models that reflect diverse contexts.
  • Promote transparency and accountability for dual-use AI tools.
  • Build partnerships across peacebuilding, development, and technology sectors. 

If we could choose a world without AI, that would probably be a better world. AI’s powers of mass persuasion and propaganda are already undermining fragile democracies around the planet. But we do not live in that world. We cannot bury our heads in the sand while global powers embrace AI. Peacebuilders must become more fluent and engaged in AI’s uses and the policies that govern them. This knowledge and engagement will ultimately be critical to our ability to meet local and global challenges ahead: from reducing polarization to responding to climate change to preventing nuclear war. 


Lisa Schirch is the Richard G. Starmann, Sr. Professor of the Practice of Peace Studies at the University of Notre Dame’s Keough School of Global Affairs and its Kroc Institute for International Peace Studies.