Not every peacebuilding challenge is a data challenge, but all data choices are peacebuilding choices. In conflict environments, what information gets collected and from whom, what categories or labels are chosen, and who benefits from the insights are technical questions with power-laden answers. Artificial intelligence (AI) tools are increasingly available to help peacebuilding practitioners address data challenges, such as analyzing media coverage across languages, conducting public consultations, or understanding how divisive narratives spread. Some of these tools can help increase inclusion in mediation and peace processes. Others simply make more operational tasks easier and faster. The utility of many others is still being tested, leaving open fields for experimentation and learning. Whether AI tools advance peacebuilding depends less on the sophistication of the data or its handling than on the mindful choices practitioners make about its use.
AI tools are most useful when they are embedded within and supporting people-centered peacebuilding processes, not the other way around. Responsible AI discourse rightly emphasizes keeping “humans in the loop,” meaning that human input and judgement are integrated at key stages of a model’s workflow. An inversion of this principle, putting AI in the loop, is an invitation for peacebuilding practitioners to intentionally use AI (or not) as a constrained support tool within the processes, practices, and values known to be effective and ethical.
De-hype and Dig In
The effective use of AI in peacebuilding starts with clarity about needs over novelty, and specificity about how AI addresses them. For example:
- We need to understand how online discourse is entrenching and escalating the conflict. We are using supervised text classification to analyze content inciting violence on social media (a minimal sketch of this approach follows the list).
- We need to monitor compliance with ceasefire commitments. We are using computer vision to flag potential violations in satellite imagery for human review.
- We need to make sense of a wide-ranging public input process. We are using pattern-detection tools to organize open-ended consultation responses so facilitators can engage with dominant concerns while still identifying outlier and minority views.
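To make the first pairing concrete, here is a minimal, illustrative sketch of a supervised text classifier that flags posts for human review. The data, labels, and threshold are all hypothetical placeholders; any real deployment would rest on locally labeled data, multilingual models, and partner-defined categories.

```python
# A minimal, hypothetical sketch (Python + scikit-learn) of the first pairing
# above: a supervised text classifier that flags posts for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples standing in for partner-labeled data
# (1 = incites violence, 0 = does not).
texts = [
    "They are vermin and must be driven out",
    "Community meeting tonight to discuss the water dispute",
    "Take up arms before it is too late",
    "Elders from both villages agreed to a joint market day",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple,
# inspectable baseline rather than an opaque model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new posts; anything over the threshold is queued for human review,
# never acted on automatically.
for post in ["March on their neighborhood tonight", "Rainfall was low this season"]:
    score = model.predict_proba([post])[0, 1]
    if score > 0.5:  # threshold set by practitioners, not by the model
        print(f"FLAG for review ({score:.2f}): {post}")
```

The value of so simple a baseline is legibility: the features, the labels, and the threshold all remain visible to, and contestable by, the practitioners who must answer for the tool’s judgments.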
When we move beyond broad discourse to de-hype AI with precision and dig into the real-world applications involved, we make our assumptions and limits visible, so that the people we work with can meaningfully assess value and risk. At the same time, many assumptions and limits only become visible through practice and experimentation. Digging in means tolerating initial uncertainty, often starting small and building with, instead of for, people.
With Participation
Peacebuilding practice centers on participation, partnership, and ‘right relations.’ This includes engaging with affected communities in decisions about the collection, interpretation, and use of data. When AI tools promise efficiency, scale, or synthetic aggregation, those promises can carry high costs for the participatory process. Many uses of AI can turn participation into a proxy rather than a practice by replacing engagement with summaries, predictions, and synthetic representations.
It is technically possible, for example, to build a hate speech classification system trained on synthetically generated data, governed by labeling rules created without community input, and validated primarily by large language models rather than affected groups. Look at how many ‘people problems’ can be solved by removing people!
Consider an alternative example from Build Up, where Sudanese practitioners engaged in deep dialogue as they worked together to train a custom hate speech detection classifier on social media content evolving in real time through the current conflict. In a similar process in Kenya, focused instead on what counted as ‘polarizing’ content, partners moved from skepticism about the contextual limits of AI to ownership over what they called “localized” tools. One summarized: “We can build a tool that works for us.”
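As a hedged illustration of what such iterative, locally owned training can look like in code, the sketch below updates a classifier as partners label new batches of content. All data and names are invented; this is a schematic stand-in, not Build Up’s actual pipeline.

```python
# A schematic sketch (Python + scikit-learn) of an iterative, locally owned
# training loop: partners label new batches of posts as the conflict evolves,
# and the classifier is updated incrementally. All data here is invented.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # stateless, so no refitting
classifier = SGDClassifier(loss="log_loss")  # incremental; named "log" in old versions

def update_with_local_labels(posts, labels):
    """Fold a new batch of partner-labeled posts into the model."""
    X = vectorizer.transform(posts)
    classifier.partial_fit(X, labels, classes=[0, 1])

# Week 1: an initial batch labeled in dialogue with local practitioners.
update_with_local_labels(
    ["Drive them from the market", "Prices rose again this week"], [1, 0]
)

# Week 2: the language of the conflict shifts; partners label fresh examples
# and the model adapts without discarding what it has already learned.
update_with_local_labels(
    ["They do not belong in this city", "Volunteers cleaned the riverbank"], [1, 0]
)

print(classifier.predict(vectorizer.transform(["They do not belong here"])))
```

The design choice that matters here is not the particular model but the loop itself: every update passes through human labeling, so the tool evolves only as fast as the dialogue around it.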
To Shape AI Use
Technocracy promises that complex social problems yield to better data, smarter models, and more efficient systems. Peace policy and practice tell a different story. Dialogue processes work when participants feel heard, not merely summarized accurately in a generated report. Early warning systems fail not for lack of data, but because the political will to act on warnings is absent. These are not problems AI solves, and AI should not become the latest bolt in a technocratic gatekeeping process that deprioritizes the people doing the slow, relational work that transforms conflict.
However, resisting technocracy should not mean forgoing technology. Global peace practitioners should not shy away from engaging and experimenting with AI tools. In my experience, and from participation in a community of digital peacebuilding practice, practitioners know what they need to reach more people, to expend fewer resources, or to move a peace process forward. The key is to put AI in the loop of peacebuilding processes, where practitioners are neither mere end users of technology developed elsewhere nor beholden to AI-powered analysis. Instead, practitioners should be trained, trusted, and resourced to lead in actively defining and shaping the AI tools that serve current and emerging peacebuilding needs.
Julie Hawke is pursuing a joint Ph.D. in peace studies and sociology at the University of Notre Dame through the Keough School’s Kroc Institute for International Peace Studies. She is also the peacebuilding and research lead at Build Up.