In plural societies, peacemaking is the everyday work of fostering coexistence across ethical, religious, and political differences. Artificial intelligence (AI) is disrupting this work by reshaping how people decide what to trust, negotiate boundaries, interpret misunderstandings, and perceive relationships. This disruption is most visible on social media platforms, which are becoming part of modern battlefields. These platforms increasingly rely on AI to rank posts, shape feeds, and moderate content at scale. In doing so, they actively categorize people and claims, decide which meanings are treated as credible, and ultimately shape what forms of peace mediation are feasible in practice.
Understandably, there is considerable focus on improving AI systems to reduce obvious threats such as fake content and misinformation. But peacemakers must also consider the broader governance architecture surrounding these systems and how to improve modalities for identifying harms, restoring relationships, and fostering safe participation. Building on my research over the last decade on AI ethics, online safety, intergroup coexistence, and minority communities’ experiences on social media, I offer the following three insights on the challenges and imperatives for peacemaking in the age of AI.
The first insight is that policy frameworks for online safety often start with the wrong unit of analysis. They begin with problematic content, and then ask how AI can detect and remove it. But in many contexts, the binding constraint is participation safety. In my work with religious minority communities in Bangladesh, fear of retaliation shapes when people engage, what they say, and whether they avoid participation altogether. A peace-oriented approach to online platforms should focus on who gets subjected to coordinated harassment, whose reports trigger backlash, and whether users have safe, reliable pathways to disengage from a conflict without being cut off from community life.
Framing the problem this way implies a different set of obligations for platforms. Transparency reports that emphasize removals and model “accuracy” miss the more important question: Are those most at risk withdrawing and becoming less visible because participation has become dangerous for them? Regulators should demand evidence about safe participation, including whether affected groups can report harms without escalating exposure, whether systems can slow the virality of targeted harassment, and whether platform design provides non-public pathways for seeking support and remedy.
The second insight is that peace depends on recognition, and recognition is increasingly shaped upstream, within data practices and annotation pipelines. Many mistakes in how platforms handle harmful content are symptoms of a deeper governance gap: Online platforms have not built the cultural and political interpretive capacity needed to recognize contested harms. For example, my work shows how annotation practices can erase faith, religion, and spirituality as legitimate interpretive lenses, even in datasets dealing with faith-sensitive violence.
The result is not merely a technical error. It is a systematic under-reading of certain communities’ harms, which then undermines trust in institutions that rely on these systems. For plural societies, auditing must include cultural intelligibility and communal wisdom. In practice, platforms should be required to document how labels were defined, where annotators disagreed, whose knowledge resolved disputes, and how minority and faith-sensitive interpretations were incorporated rather than dismissed as bias. This turns “context” from an abstract afterthought into an auditable capacity.
The third insight is that short-term enforcement can stop harmful online behavior, but it rarely addresses the underlying damage. Most major social media platforms treat moderation as rule application: remove content, suspend accounts, and publish a transparency report. Yet, coexistence is relational. It depends on restoring dignity, rebuilding trust, and re-establishing boundaries after harm. That is why my work on moderating Islamophobic content draws on restorative justice traditions, including Sulha (an Islamic restorative justice practice), to develop design principles that prioritize acknowledgment, reconciliation, and post-conflict care alongside enforcement.
This is also where the “more exposure to difference” policy instinct needs guardrails: Exposure to opposing views on social media can increase polarization for some users, especially in plural settings shaped by fear and unequal power. Policy should encourage, and in some conflict-sensitive settings require, layered response pathways, including private reporting and support, culturally literate review capacity, plural justifications for decisions, and options for restorative intervention when appropriate.
In summary, AI will not support peace efforts simply by getting better at detection and precision. We must focus on how AI-driven systems reshape the conditions of coexistence, including dynamics around fear, recognition, and legitimacy. If regulation focuses only on content categories and performance benchmarks, it will leave deeper conflict drivers untouched. A stronger agenda should treat platforms as governance institutions, treat AI as part of the infrastructure that makes harms visible or invisible, and measure success by whether plural communities can participate safely, be recognized on their own terms, and access remedies that restore trust and relationships.
Mohammad Rashidujjaman Rifat is assistant professor of tech ethics and global affairs at the University of Notre Dame’s Keough School of Global Affairs.




