killer robots in the uncanny valley

Recently, 1,000 leading artificial intelligence experts and researchers signed an open letter calling for a ban on the development of “offensive autonomous weapons beyond meaningful human control.” The letter was released at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. Initial signatories included Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and Professor Stephen Hawking. Since then, the number of signatories has approached 20,000.

The letter focuses on autonomous weapons – that is, those over which humans have no “meaningful control”.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The crucial dimension setting autonomous weapons apart from other advanced weapons systems such as drones and cruise missiles is the automated selection and engagement of targets. In 2012, Human Rights Watch, in its report Losing Humanity, offered a somewhat expanded account of the difference between autonomous weapons and other weapons:

Unmanned technology possesses at least some level of autonomy, which refers to the ability of a machine to operate without human supervision. At lower levels, autonomy can consist simply of the ability to return to base in case of a malfunction. If a weapon were fully autonomous, it would “identify targets and … trigger itself.” Today’s robotic weapons still have a human being in the decision-making loop, requiring human intervention before the weapons take any lethal action. The aerial drones currently in operation, for instance, depend on a person to make the final decision whether to fire on a target.
