The possibility that life-or-death decisions may someday be made by machines not under the direct control of humans needs to be taken seriously. Over the last few years we have seen rapid development in the field of drone technology, with an ever-increasing degree of autonomy. While no approved autonomous drone systems are operational, as far as we know, the technology is being tested and developed. Some see the new opportunities and potential benefits of using autonomous drones; others consider the development and use of such technology inherently immoral. Influential people like Stephen Hawking, Elon Musk and Steve Wozniak have already urged a ban on warfare using autonomous weapons or artificial intelligence. So, where do we stand, and what are the main legal and ethical issues?

Towards autonomous drones

As yet, there is no agreed or legal definition of the term "autonomous drones". Industry uses the “autonomy” label extensively, as it gives an impression of very modern and advanced technology. However, several nations apply a more stringent definition of what should be called an autonomous drone; the United Kingdom, for example, describes such systems as “…capable of understanding higher level intent and direction” (UK MoD, The UK Approach to Unmanned Aircraft Systems, 2011). Most military and aviation authorities call unmanned aerial vehicles "Remotely Piloted Aircraft" (RPAs) to stress that they fly under the direct control of human operators.

The ‘BAT’ by Northrop Grumman (formerly ‘KillerBee’ by Raytheon) is a medium-altitude drone able to operate at extended ranges with a variety of sensors and payloads, and probably at least electronic warfare capabilities. (Courtesy of Northrop Grumman)

Most people would probably understand the concept of “autonomous drones” as something sophisticated, for instance, drones that can act based on their own choice of options (what is commonly defined as "system initiative" and "full autonomy" in military terminology). Such drones are programmed with a large number of alternative responses to the different challenges they may meet in performing their mission. This is not science fiction – the technology is largely developed, though, to our knowledge, no approved autonomous drone systems are yet operational. The limiting factor is not the technology but rather the political will to develop or admit to having such politically sensitive technology, which would allow lethal machines to operate without being under the direct control of humans.
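
To make the idea of human-defined courses of action more concrete, the sketch below shows, in highly simplified form, how such pre-programmed responses might be structured. It is purely illustrative: the situational factors, rule names and thresholds are hypothetical and not drawn from any real drone system.

```python
# Illustrative sketch only: "human-defined courses of action" means every possible
# response is written in advance by humans; the drone merely selects among them.
# All factors, names and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class Situation:
    threat_detected: bool
    link_to_operator: bool
    fuel_remaining_pct: float

def select_course_of_action(s: Situation) -> str:
    """Pick one of a fixed, pre-programmed set of responses."""
    if s.fuel_remaining_pct < 15:
        return "return_to_base"
    if s.threat_detected and s.link_to_operator:
        return "hold_and_request_operator_decision"
    if s.threat_detected:
        # "System initiative": no human is in the loop at this moment,
        # so the drone falls back to a pre-defined response.
        return "evade_and_attempt_to_restore_link"
    return "continue_mission"

print(select_course_of_action(Situation(True, False, 60.0)))
# -> evade_and_attempt_to_restore_link
```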

One of the greatest challenges for the development and approval of aircraft with such technology is that it is extremely difficult to develop satisfactory validation systems which would ensure that the technology is safe and acts as a human would. In practice, such sophisticated drones would involve programming for an enormous number of combinations of alternative courses of action, making it impossible to verify and test them to the level we are used to for manned aircraft. There are also those who think of autonomy as meaning “artificial intelligence” – systems that learn and even self-develop possible courses of action in response to new challenges. To our knowledge, we are not close to a breakthrough in such technology, but many fear that we actually might be.
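
A rough, back-of-the-envelope calculation illustrates why exhaustive verification quickly becomes impossible: even a modest number of independent situational factors produces an astronomical number of combinations to test. The figures below are invented purely for illustration.

```python
# Hypothetical illustration (numbers invented): 20 independent situational factors,
# each with 10 possible values, already give 10**20 distinct situations that an
# exhaustive validation campaign would have to cover.
factors = 20
values_per_factor = 10
combinations = values_per_factor ** factors
print(f"{combinations:.1e} combinations")            # 1.0e+20

seconds_per_test = 1e-3                               # even at 1,000 tests per second...
years_of_testing = combinations * seconds_per_test / (3600 * 24 * 365)
print(f"~{years_of_testing:.1e} years of testing")    # ~3.2e+09 years
```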

Autonomous drones – meaning advanced drones programmed with algorithms for countless human-defined courses of action to meet emerging challenges – are already being tested by a number of civilian universities and military research institutions. We see testing of “swarms of drones” (drones which follow and take tasks from other drones) that, of course, are entirely dependent on autonomous processing. We also see testing of autonomous drones that operate with manned aircraft, ranging from what the US Air Force calls (unmanned) "Loyal Wingman" aircraft to the already well-tested Broad Area Maritime Surveillance (BAMS) system of P-8 Poseidon maritime patrol aircraft and unmanned TRITON aircraft. We also see the further development of unmanned systems to be dispatched from manned aircraft, to work independently or in extension of the “mother aircraft”, for instance, the recently tested PERDIX nano drones, 100 of which were dropped from an F-18 “mother aircraft”. Such drones would necessarily operate with a high degree of autonomy. These many developments and aspirations are well described in, for example, the US planning document USAF RPA Vector - Vision and Enabling Concepts 2013-2038, published in 2014; other documentation and even videos of such research are widely available. The prospects of autonomous technology, be it flying drones, underwater vehicles or other lethal weapon systems, clearly bring new opportunities for military forces.
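
As a simple illustration of why swarming depends on autonomous processing, the sketch below shows one hypothetical way follower drones could take tasks from a lead drone rather than from a human operator. It is not based on PERDIX or any other real swarm protocol; all names are invented.

```python
# Hypothetical sketch of the "swarm" idea: followers take their tasks from a lead
# drone, not from a human assigning each task. Not based on any real system.
from collections import deque

def lead_drone_assigns(tasks, followers):
    """The lead drone hands out tasks; any surplus stays queued on the leader."""
    queue = deque(tasks)
    assignment = {}
    for follower in followers:
        if not queue:
            break
        assignment[follower] = queue.popleft()
    return assignment, list(queue)

assignment, backlog = lead_drone_assigns(
    ["scan_sector_A", "scan_sector_B", "jam_emitter_C"],
    ["drone_01", "drone_02"],
)
print(assignment)  # {'drone_01': 'scan_sector_A', 'drone_02': 'scan_sector_B'}
print(backlog)     # ['jam_emitter_C']
```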

The ‘PERDIX’ is a micro-drone swarm system developed for the US DoD/Naval Air Systems Command together with MIT Lincoln Laboratory. (WarLeaks)
Check out this video of a test launch of a 100-drone swarm from a US F-18 ‘mother aircraft’.

In the case of flying aircraft, we have learned that there are long lead times in educating pilots and operators. One of the greatest changes that will come from the development of autonomous drones is that military forces in the (near) future could develop great fighting power in much shorter timeframes than previously. It is important to note – as many have – that creating the infrastructure and educating ground crew for operating drones is no cheaper or easier than educating aircrew. However, once in place, drone crews and operation centres would be able to operate large numbers of drones. Similarly, legacy manned aircraft would be at the centre of a local combat or intelligence system extended with drones serving, for example, in supportive roles for jamming, as weapons-delivery platforms or as a system of multi-sensor platforms. Moving beyond the past limitation of one pilot flying one aircraft or one crew flying one drone to a situation where one crew could control large numbers of drones would quite simply be groundbreaking.

These perspectives for new types of high-tech weapon systems – and the fears they raise – form the background for the research we conducted on autonomous drones and weapon systems. It is almost impossible to assess when these technologies will become widespread – this will depend on the situation and the needs of states. However, the technologies are becoming available and maturing, and we would argue that the difficult discussions on legal and ethical challenges should be dealt with sooner rather than later.

The legal perspectives

  • General rules apply but it is not that simple


Autonomous drones, if and when they are used during armed conflict, would be subject to the general principles and rules of the Law of Armed Conflict. In this respect autonomous drones are not to be distinguished from any other weapons, weapon systems or weapon platforms. As with any “means of warfare”, autonomous drones must only be directed at lawful targets (military objectives and combatants) and attacks must not be expected to cause excessive collateral damage.

The X-47B Unmanned Combat Air Vehicle (UCAS), developed by Northrop Grumman in cooperation with DARPA, is popularly referred to as ‘semiautonomous’. (Courtesy of Northrop Grumman)

Some particular features of autonomous drones may, however, challenge the application of the Law of Armed Conflict. Autonomous drones, regardless of how one ultimately chooses to define them, would be able to operate on their own to a certain degree in time and space. This (potential) absence of human interference with the weapon or weapon system during attacks raises the question of when and where the law requires human presence in the decision cycle. Before providing some tentative answers to this question, we need to highlight some aspects of the legal requirements incumbent upon commanders during attack decisions.

  • The law requires a reasonable commander acting in good faith


Several of the legal obligations applicable during armed conflict are made to fit the “fog of war”. Some of these legal rules contain flexible expressions leaving military commanders with some leeway for discretion when interpreting and deciding upon, for example, what amounts to a “military advantage” and how important this advantage is for the attack as a whole. Furthermore, they have to weigh up the relative importance of this advantage compared to the collateral damage anticipated (the principle of proportionality).

This leeway for discretion is matched with an expectation that the military commander is acting in good faith and assessing the military advantage (as well as the collateral damage) based on the information reasonably available to him or her at the time. During attack decisions, military commanders engaged in the planning or execution of the attack must take all “feasible precautions” to “verify” that the attack is not directed at a protected person or protected object and that the attack is not expected to violate the principle of proportionality. How do these discretionary notions apply to the use of autonomous drones?

  • How much human touch is required?


Autonomous drones are not capable of reasoning in the human sense. They do not possess human consciousness. So far, autonomous drones (or any autonomous system) cannot replace the human being within the law. The requirements set out above appear to presume a “human in the loop” of the decision cycle. At some point during attack decisions, a human being must decide what to attack and how important the target is. The key question is how wide that decision cycle may be.

This article is based on research which resulted in a book published in Norwegian in 2016: “Når dronene våkner: Autonome våpensystemer og robotisering av krig” (Oslo: Cappelen Damm, 2016)

Obviously, human operators can be assisted by autonomous machines (as well as “autonomous” animals) limited in time and space – but where are limitations required? As with any legal question concerning warfare, the answer is bound to be circumstantial. If the environment is densely populated (such as urban areas), the limitations must necessarily be tighter than in less populated areas (such as on the high seas or under water). Here, as elsewhere, the devil is in the details: in some circumstances an autonomous weapons system may (lawfully) be “left alone” to operate for hours or days, while in other circumstances all autonomy ought to be shut off to rely on human judgment – or error.

From law to ethics

We must also recognise the relevance of ethics in debates on autonomous drones. Compliance with the law is central to any military and political policymaking, including the development and use of autonomous drones. Although law and ethics often overlap, there may be important ethical issues at stake that are not properly addressed by current law, particularly in the case of emerging military technologies. Ethical reflection may, in other words, complement the law by providing normative guidance in these “grey areas”. It may also be important in emphasising when ethical obligations should exceed legal duties in the interest of good political governance.

  • Ethical perspectives on autonomous drones


The delegation of life-and-death decisions to nonhuman agents is a recurring concern of those who oppose autonomous weapons systems. A primary concern is that allowing a machine to “decide” to kill a human being undermines the value of human life. From this perspective, human life is of such significant value that it is inappropriate for a machine ever to decide to end a life – in other words, there is something inherently immoral about developing and using autonomous drones.

It may be difficult to argue that autonomous drones could possibly satisfy the jus in bello criterion of discrimination in the “just war tradition”. To make moral judgments about who may legitimately be targeted in the “fog of war” is difficult even for human soldiers. The fear is that allowing autonomous drones to make such distinctions would most likely result in civilian casualties and unacceptable collateral damage. Even if such weapon systems were able to discriminate between combatants and non-combatants, it is still a question whether an autonomous drone would be able to assess whether an attack is proportionate or not – that is, whether the expected collateral damage would be excessive in relation to the anticipated military advantage. However, beyond the uncertainty of what technological capabilities autonomous drones will possess in the future to make such distinctions, one can also argue that if these weapon systems are unable to operate within the requirements of jus in bello, it is unlikely that they will be deployed, at least in operational environments where the risk of causing excessive harm to civilians is high.

On the other hand, it could also be argued that using autonomous drones is not just acceptable from a moral perspective but even morally preferable to using human soldiers. Autonomous drones would be able to process more incoming sensory information than human soldiers and could therefore make better-informed decisions. And since the judgments of machines would not be clouded by emotions such as fear and rage, their use could reduce the risk of war crimes that might otherwise be committed by human soldiers.

Using autonomous drones may also improve certain aspects of humanitarian missions, benefiting the civilians who are being assisted and reducing risks to soldiers. Using autonomous systems to search dangerous areas or perform high-risk tasks, such as bomb disposal or clearing a house, would eliminate the risk of human soldiers being injured or killed.

The ‘TRITON’ – under development by Northrop Grumman for the US Navy as part of the Broad Area Maritime Surveillance (BAMS) programme – is an advanced system for intelligence, surveillance and reconnaissance missions, which may operate under control or autonomously. (Courtesy of Northrop Grumman)

Then again, such developments may have implications for the jus ad bellum criteria of the “just war tradition”. Limiting the risk to soldiers by removing them from the battlefield altogether could make war too “easy”, reducing it to a low-cost technological enterprise that no longer requires any public or moral commitment.

Where do we stand – and where should we go?

It is difficult to predict the future, but the technological potential of autonomous drones is already being tested and developed. To what extent they will become important military technologies will depend on the needs of nations, which in turn will be determined by the future security situation. It would be better to develop a legal and ethical framework before we find ourselves in such a situation.

Clearly, autonomous drones raise important legal and ethical issues about responsibility for unintentional harm. The technologies create potential moral accountability gaps: when autonomous military systems are deployed, it becomes less clear how to apportion responsibility. Such responsibility gaps must be addressed properly through technical solutions and legal regulations, and NATO and Allies should therefore engage in international discussions on these topics. At the same time, technological evolution will continue, and an autonomous drone – no matter how technologically sophisticated its design – remains a product, a tool in the hands of humans. Our fundamental responsibility for war and how wars are fought can never be morally “outsourced”, least of all to machines.