The advent of new technology opens up previously unthinkable possibilities. Today robots perform an array of complex functions, from precision surgery to driving vehicles without a human at the wheel. Yet technological advances also pose grave dangers, including to international stability and global security. For example, the development of lethal autonomous weapons systems capable of selecting and engaging targets without human control could trigger an arms race with catastrophic consequences for civilians. Given the massive investments that some countries, including the United States, the United Kingdom, Russia, China and Israel, have made in artificial intelligence for military applications, the emergence of these “killer robots” is no distant prospect.

The power of autonomous weapons systems

Delegating to autonomous weapons the power to make life-and-death decisions raises a series of moral, operational and legal challenges. From an ethical point of view, machines devoid of feeling and compassion are incompatible with the principle of humanity enshrined in international humanitarian law, which requires respect for human dignity and human life even in situations of war. Would a weapons system, programmed to kill, feel empathy for an enemy surrendering to it and begging for their life?

In operational terms, killer robots would function on the basis of data and algorithms that are susceptible to error and bias. The risks are real, not only in conflict zones but also in policing and border control operations.

For example, in Latin American countries, the use of facial recognition technology has disproportionately threatened black people, many of whom have been arrested for crimes they did not commit on the basis of inaccurate images misidentifying them as perpetrators. Lethal autonomous weapons systems using this technology to identify targets could likewise kill innocent people. In addition, killer robots, like all technology, are vulnerable to hacking and could be turned against their developers in the event of a cyberattack.

An ethical question

Another significant issue is who would be held accountable for the mistakes of killer robots. Machines cannot stand trial, and it is unlikely that any individual would be held legally responsible for unlawful killings and human rights violations committed by lethal autonomous weapons systems. That would make it impossible for victims and their families to receive justice or compensation for the harm they suffered. This is particularly concerning in countries where security forces already act with near impunity when using disproportionate lethal force. Deploying lethal autonomous weapons in such a permissive environment would amount to a “license to kill”. And if criminal organizations, militias and armed groups gained access to this type of technology, the potential harm to civilian populations would be incalculable.

Treaties are paramount

In light of the multiple threats posed by the growing automation of weapons systems, states parties to the United Nations Convention on Certain Conventional Weapons (CCW) have held regular meetings on killer robots since 2014. To date, nearly 70 countries openly support a new legal framework on autonomy in weapons systems. In parallel, more than 180 civil society organizations from 65 countries have joined the global “Campaign to Stop Killer Robots”, which advocates for an international treaty prohibiting lethal autonomous weapons systems and guaranteeing human control over the use of force. Such a treaty would ensure that a human being assesses the situation and makes the final judgment before any life-ending decision is taken.

There are precedents for the type of agreement proposed. Several international treaties already prohibit weapons with high destructive power, namely chemical and biological weapons, landmines and cluster munitions. Most recently, in January 2021, the Treaty on the Prohibition of Nuclear Weapons came into force, prompting dozens of banks and pension funds to stop investing in companies that produce nuclear weapons.

Yet there is a critical difference between these treaties and the goal of the global civil society movement to ban killer robots. The agreements mentioned above prohibited weapons, such as landmines and nuclear weapons, that had already killed hundreds of thousands of people. A ban on killer robots, by contrast, aims to take effect before a single person is killed.

A short time window

An opinion poll conducted in December 2020 among nearly 19,000 people in 28 countries found that 62% of respondents opposed the use of lethal autonomous weapons systems. Several world leaders and high-level technology executives have also warned of the risks posed by killer robots, among them Pope Francis and Tesla CEO Elon Musk.

Civil society recently launched a global petition urging government leaders worldwide to open negotiations on new international law governing autonomy in weapons systems. The goal is to ensure human control over the use of force and to prohibit machines that target people, reducing human beings to objects, stereotypes and data points. In the words of UN Secretary-General António Guterres, machines with the power to take human lives are “politically unacceptable, morally repugnant and should be prohibited by international law”. We still have the chance to ban these weapons, but the window is narrowing.