Student Works
03 December 2019

Artificial Intelligence and the Rules of Warfare: Are Humans Necessary?

The world we live in is increasingly technological. In particular, automated technology and artificial intelligence (AI) affect nearly every aspect of our lives, from the spam filters that decide which emails end up in our inboxes to the systems that determine our creditworthiness when we apply for a bank loan.

The vast majority of people welcome these new technologies, which are presented as tools with no strings attached; however, not all of them are designed for innocent tasks. Indeed, following this global trend, warfare is a realm where technology and AI are increasingly present. A wide range of weapons and computer aiding systems with functional autonomous capabilities is already in operation on the battlefield. Yet when we analyse warfare, can we claim that automated technology and AI are helping humans better comply with the rules of war?

Although existing weapons and aiding systems require human operators to make the final targeting decisions, technological autonomy still has an impact on those operators. For instance, human operators tend to defer to a machine's suggested outcome, suppressing doubt or healthy scepticism. Who has not reluctantly followed a Google Maps route, despite thinking that there must be a better way to reach the desired destination?

Moreover, technology’s intrinsic characteristics may jeopardise respect for international humanitarian law’s targeting principles. For example, although machines are better suited to repetitive tasks, they inherently leave contextual assessment out of the outcomes they produce. Such assessment is critical for compliance with the principle of distinction, an underpinning rule of warfare that prohibits direct attacks against civilians. Under this fundamental principle, only military objectives and military personnel may be the object of a lawful attack.

A close study of how technology impacts human operators, and of the limitations of current computer aiding systems, leads to the conclusion that technological autonomy and AI pose novel challenges to human operators’ compliance with international humanitarian law’s targeting principles.

Human judgement is essential to most of the crucial aspects of the targeting principles, and current technology is not fit to replace it. Therefore, as the principle of distinction illustrates, human operators remain necessary in the targeting process in order to conduct a lawful assessment of military objectives.

Instead of viewing technology as a tool that will pave the way for better compliance with the rules of warfare, legislators, military commanders and politicians should take an extremely cautious approach when considering whether to further automate the targeting process.

This article is part of “Student Works”, a news series highlighting the best student papers from the Graduate Institute. 

Keywords: international law     
 

Andrea Farrés graduated from the Graduate Institute with a Master in International Law. She wrote her thesis on "Unravelling Technological Autonomy and Why Its Development Will Not Ensure Compliance with the International Humanitarian Law Targeting Principles".

During her time at the Institute, she won the Sanremo New Voices in International Humanitarian Law essay competition. Currently, she is working at the Norwegian Refugee Council’s Geneva office as a humanitarian policy intern.