Humanizing PeaceTech?
I do not come from the center of PeaceTech.
I have not built large platforms or led major digital systems. I did not enter through engineering, data science, or product design. My work has been quieter, less visible: meaning, political harm, participation, and the daily practice of peacebuilding here in Myanmar—for more than a decade.
So this is not an attempt to define PeaceTech. It is an attempt to describe what becomes visible when you approach it from the edges.
Let me start with a simple observation.
Technologies do not enter empty space. They arrive in real places: villages, conflict zones, displaced communities, fragile institutions, unequal relationships. They enter conversations already shaped by fear, history, power, and survival.
But many PeaceTech conversations start elsewhere. They start with tools. Mapping platforms. Reporting systems. Dashboards. AI summaries. Digital consultations.
These tools are often presented as solutions to problems of scale, speed, and coordination. And they do solve some of those problems.
Yet when I look at these tools from where I work, I do not first see efficiency. I see translation.
Every tool translates something.
A reporting app translates an experience into a category. A survey translates an opinion into a selectable option. A dashboard translates conflict into indicators. An AI system translates narratives into summaries.
In each step, something is gained. But something is also lost. And what is lost is rarely technical. It is meaning.
A few years ago, I reviewed a conflict mapping exercise in a township in southeast Myanmar. The data looked relatively clean. Incidents were categorized. Trends were visible. The map showed where tensions were rising.
But when I spoke to local actors, I realized that what the system labeled as a "security incident" meant very different things to different people.
For one group, it meant the presence of armed actors. For another, the absence of protection. For some, fear of movement. For others, economic disruption.
The system had one category. Reality had many.
The map was not wrong. But it was incomplete in a specific way. It had stabilized meaning too quickly.
This is where my concern begins.
PeaceTech often assumes that once we collect enough data, we understand the situation better. But understanding does not come from data alone. It comes from how meaning is negotiated, contested, and lived. And that part does not scale easily. Lived human experience matters.
I have also seen this in participation.
Digital tools promise wider reach. More voices. Faster input. Lower cost.
In one consultation process I encountered, hundreds of responses were collected through an online survey. The report highlighted diversity of input. It looked inclusive.
But when I asked who shaped the questions, who defined the categories, and who decided how responses would be interpreted, the answer became clear: those decisions were made before the participants ever entered the system.
Participation had expanded. Power had not shifted. People without internet access, or without the digital skills to respond, never answered at all. And those the system left out remained invisible in its results.
From the edge, PeaceTech does not look like a technical field. It looks like a meeting point of several tensions.
Between visibility and safety.
Between data and meaning.
Between participation and power.
Between efficiency and understanding.
And these tensions do not disappear when technology improves. They intensify.
I am not arguing against technology. Technology is a tool. Digital tools can reveal patterns, support coordination, expand communication, and document realities that might otherwise be ignored. But tools do not decide what matters. People do. And people bring assumptions, incentives, limitations, and power into everything they build. We construct these tools, and they carry our choices.
So the question is not only what technology can do. It is also what it makes visible, what it hides, who benefits, and who carries risk.
Working from the edge means accepting a limitation. I cannot define PeaceTech as a field. But I can ask what happens when peace work is translated into digital systems.
And I can offer this as a starting point.
PeaceTech is not only about improving how we collect and process information. It is also about how we interpret, distribute, and act on that information—in contexts where meaning is contested and consequences are real.
If this series has a direction, it is this: not to challenge the existence of PeaceTech, but to ask, "What happens to peace when it becomes digital?"
And what must we pay attention to, so that in making peace more efficient, we do not make it less human?