Centre on Conflict, Development and Peacebuilding
22 October 2025

Everyday Peace Indicators: Ethics and meaning

How can we represent local voices without losing their meaning? 

In this post, Apolline reflects on early conversations with the Everyday Peace Indicators (EPI) team about the challenges of visualizing such rich, deeply contextual data. Beyond technical questions of categorization and design, these discussions revealed the ethical stakes of the project, the emotional connections researchers have built over time, and their ongoing efforts to engage communities not merely as subjects, but as co-creators of knowledge.

Everyday Peace Indicators: A project at the CCDP

The Swiss National Science Foundation awarded the CCDP a new grant to share the 17,000 indicators collected by the Everyday Peace Indicators (EPI) team. In this short blog series, I want to share a behind-the-scenes look at this project.


Collaboration with the EPI team and Interactive Things

What struck me immediately were the strong bonds among the EPI researchers. They have become close friends over the years. Our conversations were informal, often beginning with quick check-ins on how everyone was doing – personally and professionally – and how the current international climate was affecting them. I realized that this funding and this project were also an opportunity to bring the whole team back together. It was fascinating to meet the people behind this methodology and this collection of indicators, to understand their motivations, and to see the issues they care about most.

The Challenge

This project faces a major challenge: how to represent an overwhelming number of indicators. Communicating them clearly and making them accessible is not easy. On top of that, these indicators are deeply rooted in local contexts – they are neither transferable nor comparable. For the same reason, testing them for robustness in the traditional sense is nearly impossible. What emerges is a proliferation of indicators identified subjectively, rather than rigorously tested for causality, reproducibility, or reliability.

Rooted in the local

From the start, we made it a priority to clarify what these indicators are not. They are not national-level metrics or generalized perceptions; their meaning is rooted in specific local contexts. Presenting them as representative of an entire country would be misleading, and ethically problematic. Each community is unique, and the data must reflect that specificity. The researchers emphasized this point repeatedly, underscoring how much they value data integrity. This concern also raises practical questions, such as how locations are labeled. A simple pop-up that says “Afghanistan,” for example, risks erasing the distinct local context it is meant to represent. But how specific – or how general – should we be when naming locations? This remains an open and important ethical question.

Ethics lie at the heart of the researchers’ work. They are committed to finding creative, meaningful ways to give the data back to communities by actively engaging them in the process. They have experimented with various approaches, discovering that games and art are often the most effective. These forms of engagement allow community members not merely to receive results, but to enter into dialogue with them – to reflect, respond, and see how the data might serve their own needs. Engaging communities in this way, supporting people-powered change, requires time and funding, but it can yield powerful results, especially when grassroots civil resistance is needed to foster sustainable political change in conflict settings. Giving back also means acknowledging participants’ time and emotional labor, and sharing information that is often deeply personal.

Questions about categorization also emerged. Could predefined categories help in our research, and if so, how? Initially, we thought recoding everything would be useful – for instance, to see which indicators relate to security or governance. But we soon realized that this didn’t always provide meaningful insights. Categories only make sense when they help answer a specific question. In the end, we decided to explore using large language models to help navigate the indicators – for example, to retrieve all those relating to sports.
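
As an illustration of what that could look like in practice, here is a minimal sketch of embedding-based retrieval, assuming the indicators are available as plain-text strings and using an off-the-shelf model from the open-source sentence-transformers library. The sample indicators and the query below are invented placeholders, not EPI data or the project’s actual pipeline.

```python
# A minimal sketch of semantic retrieval over indicator texts.
# Assumes the open-source sentence-transformers library; the
# indicators below are invented placeholders, not EPI data.
from sentence_transformers import SentenceTransformer, util

indicators = [
    "Children play football in the field after school",
    "Shops stay open after dark",
    "Neighbours greet each other at the checkpoint",
]

model = SentenceTransformer("all-MiniLM-L6-v2")

# Embed the query and the indicators into the same vector space.
query = model.encode("indicators relating to sports", convert_to_tensor=True)
corpus = model.encode(indicators, convert_to_tensor=True)

# Rank indicators by cosine similarity to the query.
scores = util.cos_sim(query, corpus)[0]
for score, text in sorted(zip(scores.tolist(), indicators), reverse=True):
    print(f"{score:.2f}  {text}")
```

Because the matching is semantic rather than keyword-based, an indicator like “children play football after school” can surface for a “sports” query even though the word never appears in it – which is what makes this kind of approach appealing for open-ended, locally phrased indicators.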


What’s next?

The next post will explore dilemmas and decisions ahead: How do we visualize these indicators? How can we preserve their uniqueness and humanity while conveying analytical rigor? Should we treat them as data points, or use metaphors instead? Perhaps indicators are like books that open up to reveal more about a situation – or like seeds, traveling and taking root in different contexts.


Learn More

About the series 

This blog series itself is a collaborative effort, shaped by CCDP Head of Research Eliza Urwin and Postdoctoral Researcher Apolline Foedit, and envisioned through the communications lens of CCDP specialist Jennifer Thornquest.