12 January 2021

Artificial Intelligence, a Technology in Need of Answers

In recent years, the increased reliance on artificial intelligence (AI) has shaped our societies in many ways. Robots, machines and AI now perform surgical operations in hospitals and carry out essential functions in sectors such as finance, the military and even education. Even self-driving cars are on the horizon. The involvement of AI in our daily activities is only expected to increase in the years to come, writes PhD candidate Victoria Priori.

Since machines and robots can be more reliable and precise than humans, one might think that these technological advancements should be welcomed, but some very serious ethical concerns remain, particularly in relation to the attribution of responsibility. If a self-driving car is involved in an accident, who is to blame: the person in the car or the software developer? This type of question is far from being answered, but its relevance is expected to grow as AI intrudes further into our lives. It is crucial for policymakers, scholars and developers to find answers and to understand how current frameworks for the attribution of responsibility can be adapted to the new scenarios our societies will face.

To gather some provisional answers to these questions, Professor Paola Gaeta, along with a team of researchers and PhD students, launched a survey in December 2019 to see how Graduate Institute students felt about AI-based weapons and their deployment in armed conflicts. Interestingly, more than 59% of the students surveyed believed that AI should not be used in weapon systems. A large majority of respondents (more than 90%) were strongly opposed to the use of AI-based weapons against human targets without human supervision. Furthermore, the survey showed that students felt negatively towards autonomous weapons irrespective of whether they are used against human targets.

AI-based weapons may outperform humans on the ground: their assessment of complex situations is not impaired by emotions and they are better equipped to respond quickly. If such weapons make fewer mistakes and perform better, why should we oppose their deployment? Here again, the heart of the problem is the attribution of responsibility in the case of mistakes. Notably, a majority of students believed that if a machine mistakenly kills a large number of civilians, responsibility for the underlying war crime should be attributed to the person deploying the weapon. This held both when the soldier deploying the weapon had some control over it and when no human supervision was present.

In this regard, a parallel scenario can be constructed: a self-driving car kills a person by mistake. With whom does the responsibility lie? In this case, 61% of students said the software developer should be held responsible.

Why is the software developer considered responsible when a self-driving car's AI system makes a mistake, but not when the same happens with an AI-based weapon? Weapons are inherently programmed to kill, and so neither the software nor its developer is to blame. By contrast, self-driving cars are not designed to kill people on the street, meaning that if this happens there must be a software bug, thus making the developer responsible for the system's flaw.

The survey also showed that trust in autonomy is far from complete. When asked whether a self-driving car should autonomously decide to remain on a path and kill five people or move to another path and kill one, only 25% of the students believed that the decision should be taken on the spot by the car's programming alone.

Is there something frightening about the ability of AI and machines to decide autonomously, based only on software and code? Fear and distrust surround AI in general and are only amplified when it comes to AI-based weapons. This makes perfect sense given that AI-based weapons would be entrusted with life-or-death decisions. Thus, even though societies and governments are moving towards increased autonomy in this regard, it seems that we are not yet ready to trust technology blindly.

More answers are needed if we are to coexist with AI and, more importantly, trust it with critical decisions.