Lethal autonomous weapon

A BAE Raven during flight testing

Lethal autonomous weapons (LAWs) are a type of military robot designed to select and attack military targets (people, installations) without intervention by a human operator. LAWs are also called lethal autonomous weapon systems (LAWS), lethal autonomous robots (LAR), robotic weapons, or killer robots. LAWs may operate in the air, on land, on water, under water, or in space. As of 2016, the autonomy of current systems is restricted in the sense that a human gives the final command to attack, though there are exceptions with certain "defensive" systems.

Difference with existing drones

LAWs should not be confused with unmanned combat aerial vehicles (UCAVs) or "combat drones", which are currently remote-controlled by a pilot: only some LAWs are combat drones. Even those combat drones that can fly autonomously do not currently fire autonomously; they have "a human in the loop".

Automatic defensive systems

The oldest automatically triggered lethal weapons are land mines, used since at least the 1600s, and naval mines, used since at least the 1700s. Anti-personnel mines are banned in many countries by the 1997 Ottawa Treaty, though not in the United States, Russia, and much of Asia and the Middle East.

Some current examples of LAWs are automated "hardkill" active protection systems, such as radar-guided guns used to defend ships, in service since the 1970s (e.g. the US Phalanx CIWS). Such systems can autonomously identify and attack oncoming missiles, rockets, artillery fire, aircraft and surface vessels according to criteria set by the human operator. Similar systems exist for tanks, such as the Russian Arena, the Israeli Trophy, and the German AMAP-ADS. Several types of stationary sentry guns, which can fire at humans and vehicles, are used in South Korea and Israel. Many missile defense systems, such as Iron Dome, also have autonomous targeting capabilities.

The main reason for not having a "human in the loop" in these systems is the need for rapid response. They have generally been used to protect personnel and installations against incoming projectiles.

Autonomous offensive systems

Systems with a higher degree of autonomy would include drones or unmanned combat aerial vehicles, e.g.: "The unarmed BAE Systems Taranis jet-propelled combat drone prototype may lead to a Future Offensive Air System that can autonomously search, identify and locate enemies but can only engage with a target when authorized by mission command. It can also defend itself against enemy aircraft" (Heyns 2013, §45). The Northrop Grumman X-47B drone can take off and land on aircraft carriers (demonstrated in 2014); it is set to be developed into an Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS) system.

Ethical and legal issues

The possibility of LAWs has generated significant debate, especially about the risk of "killer robots" roaming the earth in the near or far future. The group Campaign to Stop Killer Robots formed in 2013. In July 2015, over 1,000 experts in artificial intelligence signed a letter warning of the threat of an arms race in military artificial intelligence and calling for a ban on autonomous weapons.[1]

Current US policy states: "Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."[2] Deputy Defense Secretary Robert Work said in 2016 that the Defense Department would "not delegate lethal authority to a machine to make a decision", but was considering the possibility that "authoritarian regimes" might do so.[3]

There is concern (e.g. Sharkey 2012) that LAWs would violate International Humanitarian Law, especially the principle of distinction, which requires the ability to discriminate combatants from non-combatants, and the principle of proportionality, which requires that damage to civilians be proportional to the military aim. This concern is often invoked as a reason to ban "killer robots" altogether, but it is doubtful that it can serve as an argument against LAWs that do not violate International Humanitarian Law.[4]

Another risk is that, as with remote-controlled drone strikes, LAWs will make military action easier for some parties and thus lead to more killing.

According to PAX, fully automated weapons (FAWs) will lower the threshold for going to war, as soldiers are removed from the battlefield and the public is distanced from experiencing war, giving politicians and other decision-makers more space in deciding when and how to go to war.[5] PAX warns that once deployed, FAWs will make democratic control of war more difficult, a concern also raised by Daniel Suarez, IT specialist and author of the novel Kill Decision: according to him, they might re-centralize power into very few hands by requiring very few people to go to war.[5]

Finally, LAWs are said to blur the question of who is responsible for a particular killing, but Thomas Simpson and Vincent Müller argue that they may make it easier to record who gave which command.[6]

References

  1. Gibbs, Samuel (27 July 2015). "Musk, Wozniak and Hawking urge ban on warfare AI and autonomous weapons". The Guardian. Retrieved 28 July 2015.
  2. US Department of Defense (2012). "Directive 3000.09, Autonomy in weapon systems" (PDF). p. 2.
  3. "Pentagon examining the 'killer robot' threat". Boston Globe. 30 March 2016.
  4. Müller, Vincent C. (2016). "Autonomous killer robots are probably good news". Ashgate.
  5. "Deadly Decisions - 8 objections to killer robots" (PDF). p. 10. Retrieved 2 December 2016.
  6. Simpson, Thomas W.; Müller, Vincent C. (2016). "Just war and robots' killings". Philosophical Quarterly.
