Killing from afar: The Ethics of Drone Warfare

Professor Asa Kasher, Professor Amir Shapira and INSS researcher Pnina Sharvit-Baruch discuss the moral dilemmas presented by the employment of unmanned aerial vehicles for targeted killings

UAV operators at a US base in Iraq (Photo: AP)

Back in 2012, New York Times reporter Elizabeth Bumiller recounted the story of Col. D. Scott Brenton, a Reaper RPA (Remotely Piloted Aircraft) pilot, whom she described as performing his missions from an upholstered armchair in a suburb of Syracuse, New York.

Military intelligence provided Col. Brenton, a former F-16 pilot, with hundreds of video images of targets in Afghanistan, and when the decision was made to act, he was ordered to launch a Hellfire missile from the RPA he was flying, 11,000 kilometers from where he sat. "I see mothers, fathers and kids playing soccer," Col. Brenton said. "Sometimes I am ordered to launch when the kids are around. Sometimes I watch the same target for weeks. I have no emotional attachment to the enemy. I have a mission to perform, and I execute it."

At the end of his working day, Col. Brenton gets into his car and drives home through crowded streets to play with his children. A colleague of his, another RPA pilot, gave a somewhat different account: "It is true that there is a good reason for killing those people, but I cannot forget. The memory cannot be erased."

The story of Col. Brenton and his fellow RPA pilot, and the launching of ordnance from long-endurance drones such as the Predator (a MALE – Medium Altitude, Long Endurance – platform), lies at the core of a dilemma that has already occupied military organizations, intelligence agencies, human rights organizations, and experts in ethics and law. The source of the dilemma is the manner in which the technology is used. Attacks no longer consist of clashes of swords and bayonets, face to face on the battlefield. Instead, ordnance is launched from ranges of dozens to tens of thousands of kilometers – from a manned fighter aircraft or attack helicopter, and equally from an RPA.

Is the launching of ordnance from RPAs permitted or forbidden by the laws of war, morality and ethics? Therein lies the dilemma.

Those who maintain that it is permissible will say: what is an RPA? Just another aerial platform. Whatever is permissible for a fighter aircraft is also permissible for an unmanned vehicle. Those who maintain that it is forbidden will say: a remotely controlled launch can lead to collateral damage, and such damage often means harming uninvolved civilians, women and children. Reports have been published about casualties among uninvolved civilians in various theaters of operation around the globe, but such reports often reflect the worldview of the reporting party or of the party that commissioned the report.

The UN human rights body published a report on extraterritorial killings, and the armed forces of the US and the UK reviewed the practice. The conclusion: there is nothing wrong with launching ordnance from an RPA. An RPA can remain airborne longer than a fighter aircraft, enabling its pilot to examine the target and its vicinity very thoroughly. This upholds the precautionary principle prescribed by international law, which requires that every feasible measure be taken to avoid harming civilians.

An article published by the Community of Thinkers stated that "thousands of individuals have been killed in US drone attacks. A drone is a killing machine operated remotely, from a distance of thousands of kilometers." An article by Scott Shane published in the New York Times in 2012 reported that "reports were received from remote areas of Pakistan of families wiped out, children killed and collateral damage sustained."

A research officer of the Central Intelligence Agency who had carried out remotely controlled drone attacks in the past wrote: "Think of the massive fires in Dresden, Germany, in World War II, following the air raids by the Allies. Can that be compared with what is happening today on the battlefield, as far as minimizing harm to uninvolved civilians is concerned?"

Pakistan has, in fact, been the world's largest testing ground for attacking targets with RPAs. Studies have shown that civilian casualty rates there were far lower than in attacks carried out by other means. Conversely, when the Pakistani Army raided terrorist strongholds, 46% (!) of those killed turned out to be civilians.

To deliberate the dilemma of remotely controlled killing using RPAs, the Israel Defense editorial board sought the opinions of three academics specializing in the moral, ethical and legal aspects of defense and national security: Professor Asa Kasher, a philosophy lecturer specializing in ethics (notably the ethics of the IDF); Professor Amir Shapira, head of the robotics laboratory at the Faculty of Mechanical Engineering, Ben-Gurion University of the Negev; and Pnina Sharvit-Baruch, a senior research associate and head of the law and national security program at the Institute for National Security Studies (INSS).

Software with Human Considerations

Pnina Sharvit-Baruch had this to say about the legal aspect of the dilemma: "Jurists maintain that there is no legal problem with the employment of RPAs, even for targeted killings, compared to manned aircraft. An RPA is just another type of platform. Today even a pilot views the target from a long distance – sometimes he does not see it visually at all – and yet he is allowed to attack. The criterion in both cases is the same: the attack must comply with the laws of war. This criterion rests on two principles: (1) distinction and (2) proportionality. The principle of distinction holds that an attack may be directed only against a military target, or a civilian object used for fighting. The principle of proportionality holds that even when aiming at a legitimate target, you must not go ahead with the attack if the expected collateral damage to civilians is excessive relative to the military benefit anticipated from the attack."
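To make the two-part test concrete, here is a minimal illustrative sketch in Python. It is not any military system's actual logic; the class, the field names, and the single numeric comparison standing in for "excessive" are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_military_objective: bool    # distinction: a military target, or a civilian object used for fighting
    expected_civilian_harm: float  # estimated collateral damage, on a hypothetical scale
    anticipated_benefit: float     # anticipated military advantage, same scale

def attack_permitted(t: Target) -> bool:
    """Two-part test quoted above: distinction first, then proportionality."""
    if not t.is_military_objective:
        return False  # distinction: a purely civilian object may never be attacked
    # Proportionality: expected civilian harm must not be excessive relative to
    # the anticipated military benefit. Reducing "excessive" to one comparison
    # is a crude stand-in for what is, in practice, a human judgment call.
    return t.expected_civilian_harm <= t.anticipated_benefit
```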

"Who Decides? Always the Commander on the Ground. That is his Responsibility."

Professor Amir Shapira: "It is true that in certain cases bugs were discovered in remotely controlled weapon systems and uninvolved civilians were killed. There is no denying it. But don't we all encounter bugs in our own personal computers? During an RPA demonstration in Australia, one airborne robot went out of control, turned in mid-air, launched ordnance and killed people. That was the result of a computer bug."

Professor Asa Kasher sees substantial benefits in the employment of unmanned vehicles: "The danger to the pilot is eliminated, and the RPA pilot can see the target area better. The observation process is slower and more thorough, and the operator can exercise greater sensitivity to the risk of collateral damage. The fact that there is no risk of casualties among our own forces was the reason President Obama authorized RPA attacks in the first place, as it is the duty of the supreme commander to protect his troops. Obama's rules were made public and improved the transparency of RPA employment by the military, as opposed to their earlier employment by the CIA, which had been shrouded in secrecy.

"Admittedly, errors are made with regard to the selection of targets and collateral damage is inflicted, but inflicting such damage is not completely banned according to the rules of morality. The aspect of proportionality is highly important. A person loading an RPA software with target data he/she had received from the relevant intelligence elements is convinced that the targets are real and that it is legitimate to attack them. At this point the proportionality factor comes into the picture: he/she should consider whether the attack should go ahead. It would be better, therefore, to develop a software capable of identifying children and uninvolved civilians entering the target area, and consequently having the mission aborted. So, the software will be identical to the considerations a person would have taken into account as to whether to go ahead or stop the attack. For example, if the target is a rocket storage facility and suddenly a yellow school bus draws to a stop right alongside that facility. Obviously, in such a situation the attack should be aborted. The storage facility will not go anywhere, and an attack whose damage could exceed its benefit would be avoided."

Professor Kasher believes that, ideally, RPAs could be given the ability to consider aborting a mission owing to the presence of a school bus. During Operation Protective Edge, one IAF squadron aborted 20% of the missions assigned to it because the pilots spotted uninvolved persons in the target area in numbers that made it impossible to hit the targets without killing or injuring civilians.

Pnina Sharvit-Baruch fully understands the counterarguments of those opposed to attacks using unmanned vehicles: "The opponents believe that the actual killing becomes too easy for the operator, for whom the mission becomes akin to playing a video game. The operator loses his or her sense of reality owing to the physical distance from the target. In my opinion this argument cannot be accepted: a pilot can divert a missile off course to prevent it from hitting children, and the pilot of an advanced RPA can do the same."

According to Sharvit-Baruch, the real problem lies in the employment of completely autonomous vehicles that make decisions independently: "There is a question of the legitimacy of employing systems that make decisions on their own. In my opinion, as long as the principles of the laws of war we mentioned – distinction and proportionality – are upheld, the employment of autonomous systems will not be illegal, on one condition: the use of software and algorithms requires the presence of a man-in-the-loop element."
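Read as a software requirement, that condition amounts to a gate in which the machine may recommend but never fire on its own. A minimal sketch, with hypothetical names throughout:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HumanDecision:
    operator_id: str   # the accountable person in the loop
    approved: bool

def release_authorized(machine_recommends: bool,
                       human: Optional[HumanDecision]) -> bool:
    """Man-in-the-loop gate: without explicit human approval, never fire."""
    if human is None:
        return False   # no human in the loop at all: never release
    return machine_recommends and human.approved
```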

Professor Shapira is also concerned about autonomous robots: "The concern is about machines making decisions independently. The danger lies in an RPA that decides on its own whom to kill, and where and when." Nevertheless, the Ben-Gurion University robotics expert noted that in all existing remotely operated weapon systems a human makes the decisions, and the launching party can abort the mission. "There is no vehicle currently in existence that can make killing decisions completely independently."

In an article he published, Professor Asa Kasher addressed the argument that human operators, not computers, must be kept in the loop when human lives are at stake: "You must prove that a person would err less than a computer, and that a human is superior to a computer in the ability to identify and correct errors. We assume that advanced human-operator and robot-computer systems alike include built-in resources for identifying and correcting errors, so there is no reason to assume that either one, human or computer, is superior to the other. When a landmine explodes under a person's foot, or when a bullet hits that person's head – is the landmine or the bullet responsible for his death or injury? Certainly not. The responsible party is the one who laid the landmine or fired the bullet. The case of the RPA is no different. There is always a person responsible."

Professor Shapira: "Admittedly, we are currently developing machines capable of killing and we strive to provide RPAs with more and more autonomy, but I am convinced that no system developer will leave the moral aspect to the machine."

Pnina Sharvit-Baruch told us that the opponents of remotely controlled weapons include some extremists who wish to ban autonomous vehicles outright. "But such a ban will be difficult to enforce. Democratic countries may accept it, but what about countries of other types? International treaties ban the use of 'blinding lasers': no such weapon has been developed and none exists, yet it is already banned. So it is logical that autonomous weapons will be treated the same way as chemical and nuclear weapons – restrictions should be imposed on their operation and employment."

The final conclusion of the INSS research associate rests on a recently published report analyzing targeted killings carried out by the Americans using RPAs: "If it is permissible to kill the object of the targeted killing, what difference does the killing platform make? The question is whether the killing is legal or illegal, and in answering it, the nature of the platform that executed the killing is of no relevance whatsoever."

It appears, then, that the employment of RPAs for attack purposes, and even for targeted killings, will not be perceived by most circles as immoral or legally flawed, provided the system includes a human element capable of making decisions while upholding the laws of war and the proportionality of employment. Evidently, the time has not yet come to rely on a completely autonomous vehicle, even one possessing artificial intelligence, to make decisions concerning human lives.

 
