How did you choose your research topic?
Two main factors shaped my decision. First, after working as a legal adviser in the field of IHL, I wanted to use my shift to academic research to challenge and refine my understanding of IHL. Second, I saw a long-term PhD research project as an opportunity to connect IHL with other areas I have always had a strong interest in, such as science and technology.
As I started to delve into the growing debate about AI and IHL, I realised that the heart of the matter was not so much AI itself, as one might initially assume, but its effect on the very normative concept of the human.
What is important to highlight here is that by human I mean not the individual military decision-maker, but the Human as a normative ideal (which I denote with a capital H). It was this realisation that changed my focus from examining AI systems to exploring how IHL defines and frames the human role, and whether that concept needs to adapt to today’s reality of technologically mediated military decisions.
Consider, for instance, the concept of “meaningful human control” over AI systems in military decisions on the use of force, on which discussions often centre. While this concept is widely accepted as essential, its practical meaning remains unclear. Many view this as a gap or ambiguity in IHL regarding the role of humans who deploy or activate technology in their decisions on the use of force.
However, I sensed a more profound issue: a widening gap between legal concepts rooted in the normative conception of the Human and the intricate relationships between humans and AI systems in military decision-making. This gap appears to be the root cause of the ongoing impasse in legal discussions on the use of AI systems in military decisions.
Rather than a cause for concern, however, I saw this impasse as an opportunity to challenge the dominant normative conception of the Human within IHL. In essence, instead of examining how the use of AI systems in military decision-making aligns with traditional IHL structures, I decided to look at how it can help us reimagine who or what the Human under IHL is and ought to be.
How did you proceed, from a methodological standpoint?
My project developed along two intertwined paths. On the one hand, I pursued a critical approach, questioning the assumptions that shape how the Human is conceptualised in IHL. On the other, I took a creative approach, exploring how the Human could be reimagined in ways that better reflect the complexities of modern military decision-making, especially in the age of AI.
To do this, I needed a framework capable of holding critique and imagination together. Critical posthumanism provided that foundation. It challenges the Enlightenment’s liberal humanist model — supposedly neutral and universal, but in reality rooted in a masculine, rational ideal — and invites us to rethink the Human without erasing it altogether. This approach rejects rigid binaries like human/nonhuman or rational/emotional, and treats the Human as something fluid, shaped by social interaction, power structures, and the narratives we tell.
To bring this into legal analysis, I turned to the narrative method. This allowed me to treat IHL as a living body of meaning — constantly constructed and contested — rather than a fixed set of rules. I examined the stories, voices, and interpretive choices within the community of IHL experts that shape the Human’s role in IHL, especially in relation to AI in military decision-making.
What did you uncover about the current conception of the Human in IHL?
My analysis uncovered that dominant IHL narratives remain deeply anthropocentric and humanistic. First, they reflect an Enlightenment ideal of the human decision-maker — one that is characterised by rationality, autonomy, and a form of “gentlemanly masculinity”, typically envisioned as a man: one strong and courageous enough to confront the perils of combat, yet exhibiting the ethically noble virtues of honour, restraint, discipline, and self-control. Second, they embrace the reality that the Human uses technology to advance the purpose IHL has set, namely to limit the impact of armed conflicts on protected persons. Yet, the manner in which they conceive that relationship is profoundly anthropocentric: technology is presented as a mere tool in the service of the Human.
This vision of the Human and its relationship to technology is increasingly out of step with the realities of technologically mediated military decision-making. It not only struggles to address the complexities of AI-assisted military decisions but also risks reinforcing forms of marginalisation and exclusion in determining what counts as human under IHL.
What do you propose to rethink this conception?
My research proposes a re-narration of the Human that can help bridge current tensions in debates on the military use of AI systems. I outline practical ways in which this new narrative can foster a more constructive dialogue in that field.
For instance, the current debate often frames the roles of emotion and reason in binary opposition. Competing IHL narratives cast human emotions as either “angelic” or “beastly”, while AI systems are presumed to operate through pure reason, devoid of emotion. The proposed re-narration of the Human moves beyond such rigid binaries, rendering irrelevant the question of whether human emotions are “good” or “bad” compared to supposedly emotionless AI systems. Instead, it invites us to examine how humans and AI systems mutually influence each other within military decision-making processes.
I also discovered that this process of re-narration offered me a way to speak about IHL that builds on faith in human moral capacities. Instead of seeking external, technological fixes for what some perceive as human moral failings in the “fog of war”, it helped me shape my legal narratives by trusting that ethical conduct originates within humans themselves.
What political implications might arise for IHL from this faith in human moral capacities?
This ideal, even if aspirational, is, I believe, vital for rebuilding trust in decision-makers and reaffirming IHL’s role in guiding conduct amid complex realities. My concern is that if we relinquish this faith, we open the door to technological developments that erode rather than enhance human decision-making skills.
I also found that the potential of the proposed re-narration of the Human extends beyond the use of AI systems in military decision-making: I see in this framework the potential to reconcile many contemporary IHL challenges that find their roots in the delicate balance between humanitarian imperatives and military necessity. By offering a more inclusive and adaptable conception of the Human, this approach can help us navigate challenges related to urban warfare, the protection of the environment, or the growing involvement of civilians in armed conflicts through cyber activities.
Do you plan to continue this research in your professional endeavours?
What began as a theoretical inquiry ultimately became profoundly practical, shaping my work not only in discussions on AI systems but also in my teaching, my activities as a practitioner, and even my everyday engagements.
Currently, I am particularly interested in exploring concrete ways to apply this framework to real-world AI applications, specifically by armed forces and law enforcement. Fundamentally, I want to identify how technology can be developed and used to support militaries in addressing their operational challenges while safeguarding the humanitarian ideals of IHL.
* * *

Anna Rosalie Greipl defended her PhD thesis in International Law, titled “Demystifying the Human in International Humanitarian Law: Artificial Intelligence and the Evolving Role of Humans in Military Decision-Making”, on 26 June 2025. In a post on her LinkedIn page, she expresses “a heartfelt thank you” to her supervisor, Professor Andrea Bianchi, “whose guidance over the past six years has been invaluable”, and to Professor Andrew Clapham and Professor Nils Melzer, Member of the Council of the International Institute of Humanitarian Law, Sanremo, Italy, “for their challenging questions and thoughtful insights during the defence”.
Citation of the PhD thesis:
Greipl, Anna Rosalie. “Demystifying the Human in International Humanitarian Law: Artificial Intelligence and the Evolving Role of Humans in Military Decision-Making.” PhD thesis, Graduate Institute of International and Development Studies, Geneva, 2025.
Access:
An abstract of the PhD thesis is available on this page of the Geneva Graduate Institute’s repository. As the thesis itself is embargoed until July 2028, please contact Dr Greipl for access.
Banner image: TSViPhoto/Shutterstock.
Interview by Nathalie Tanner, Research Office.