killer robots in the uncanny valley

Recently, 1,000 leading artificial intelligence experts and researchers signed an open letter calling for a ban on the development of “offensive autonomous weapons beyond meaningful human control.” The letter was released at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. Initial signatories included Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and Professor Stephen Hawking. Since then, the number of signatories has approached 20,000.

The letter focuses on autonomous weapons – that is, those over which humans have no “meaningful control”.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The crucial dimension setting AWs apart from other highly technological/cybernetic weapons such as drones and cruise missiles is the automated selection and engagement of targets. In 2012, Human Rights Watch, in its report Losing Humanity, provided a somewhat expanded account of the difference between autonomous weapons and others:

Unmanned technology possesses at least some level of autonomy, which refers to the ability of a machine to operate without human supervision. At lower levels, autonomy can consist simply of the ability to return to base in case of a malfunction. If a weapon were fully autonomous, it would “identify targets and … trigger itself.” Today’s robotic weapons still have a human being in the decision-making loop, requiring human intervention before the weapons take any lethal action. The aerial drones currently in operation, for instance, depend on a person to make the final decision whether to fire on a target.

Both the IJCAI letter and the LH report agree that while cybernetic weapons currently exist, they are not autonomous in the sense in which those documents use the word. All current weapons depend on humans to at least make the final firing decision.

LH distinguishes three possible levels of human involvement in such weapons:

  • Human-in-the-Loop
  • Human-on-the-Loop
  • Human-out-of-the-Loop

LH sees little distinction between the last two categories. Having people monitor the weapons while they do their work probably doesn’t mean much; human monitors of on-the-loop weapons would likely defer to the weapon’s automated decision-making. Both the letter and the report see the active involvement of people in the targeting and firing of these weapons as crucial.
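
To make the three categories concrete, here is a minimal sketch in Python of a generic select-and-engage loop, showing where the human sits in each mode. The names are hypothetical and stand in for no real system; this is an illustration of the distinction, not an implementation of any weapon.

```python
# Illustrative sketch only: hypothetical names, no real sensor or weapon APIs.
# It shows where the human sits in each of LH's three categories, nothing more.
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "in"        # a person must approve before any engagement
    HUMAN_ON_THE_LOOP = "on"        # the system acts on its own; a person may veto in time
    HUMAN_OUT_OF_THE_LOOP = "out"   # the system selects and engages with no human step

def engage_cycle(mode, select_target, human_approves, human_vetoes, fire):
    """One pass of a generic select-and-engage loop under a given control mode."""
    target = select_target()
    if target is None:
        return False
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        if not human_approves(target):   # lethal action blocked until a person says yes
            return False
    elif mode is ControlMode.HUMAN_ON_THE_LOOP:
        if human_vetoes(target):         # a person can only interrupt a decision already made
            return False
    # HUMAN_OUT_OF_THE_LOOP: no human step at all
    fire(target)
    return True
```

On this sketch, on-the-loop differs from out-of-the-loop only by a veto that a busy or deferential monitor may never exercise – which is why LH treats the two as nearly equivalent.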

But why?

The letter summarizes its central argument:

 The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. … There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

This is essentially a utilitarian argument.  More harm than good will result from the deployment of such weapons.

However, I’m skeptical of its claim that developing weaponized AI is distinguishable from developing AI in general. That is, once AI is developed, even with the best of intentions, the barriers to weaponizing it are trivial. Even the letter notes the low cost of entry for these weapons. It glosses over the high likelihood that this low cost of entry would apply to all the components of such weapons, including generic AI.

The report A World of Proliferated Drones: A Technology Primer notes that eventually, even hobbyist drones will be capable of autonomous flight and “could be used for stand-alone strikes or, in large numbers, saturation swarming attacks against government, military, and civilian targets.”(7)

This report describes how current drone technology can already accomplish the results the drafters of the letter attribute to future weaponized AI. Undoubtedly, adding a higher level of autonomy to current drones will expand the flexibility of how they can be used, but that seems an incremental change rather than a fundamental one.

And of course, we should expand our understanding of “drone” from just aerial objects to land and sea as well. Recently, the Indian Express reported that the Islamic State has used weaponized remote-controlled toy cars to attack Kurdish forces. The important innovation, already a fact, is the expansion of ways to deliver remotely controlled weapons. The capabilities of the delivery system matter more than the degree to which target selection and firing functions are controlled by cyborgs (Human-in-the-Loop) versus AI (Human-out-of-the-Loop).

Proliferated Drones in fact describes the relative penetration of different categories of drones around the world.

Using two characteristics – how accessible a given drone technology is to any given actor, and how sophisticated a technology base and infrastructure are needed to produce and operate it – the report identifies four categories of drone systems: hobbyist drones, midsize military and commercial drones, large military-specific drones, and stealth combat drones.
This is the terrain on which the letter’s fears of an arms race are being realized – not in a theoretical future that contains AI, but in the present, which does not.

While the letter discusses “autonomous weapons” throughout, only in the final sentence, and without discussion, does it make an implicit distinction between offensive and defensive weapons: “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Military adversaries throughout history have regarded robust defensive systems of any type developed by “the other side” as tantamount to offensive weapons. The recent back and forth between the US/NATO and Russia over the US placing anti-ballistic missiles in Poland illustrates this dynamic.

Both offensive and defensive AWs would likely share targeting and triggering AI technology. AI researchers who work on developing defensive AWs will in fact be working on offensive AWs. A nation with defensive AWs will be able to move quickly and secretly to developing offensive AWs.

One of the arguments the letter does not make against autonomous weapons is a legal one. LH develops this argument in some detail, identifying four elements that would render AWs illegal under international law:

  • Distinction – the ability to distinguish combatants from non-combatants
  • Proportionality – a military attack cannot harm civilians if that harm outweighs its anticipated military advantage
  • Military Necessity – weighs the practical considerations of winning against concerns about an action’s “humanity”
  • Martens Clause – the means of warfare must be evaluated in terms of the “principles of humanity” and the “dictates of public conscience”

It is beyond the scope of this article to address these issues in detail. However, the obvious question is whether autonomy in weapons per se renders them illegal. Responding to the LH report in detail, Michael N. Schmitt of the US Naval War College writes:

While it is true that some autonomous weapon systems might violate international humanitarian law norms, it is categorically not the case that all such systems will do so. Instead, and as with most other weapon systems, their lawfulness as such, as well as the lawfulness of their use, must be judged on a case-by-case basis.

This seems a reasonable position to take, and one that neither LH nor the letter addresses.

The letter does make one additional argument that LH does not. It expresses concern that AI researchers developing AWs will tarnish the field of AI, creating a public backlash strong enough to curtail the future benefits of AI to society.

In the past few years, combat drones have been the subject of much popular and scholarly attention, much of it negative. Until recently, when hobbyist drones became available, the word “drone” almost always referred to combat drones. The ethics and legality of their deployment, and their effectiveness, have all been debated.

I suspect the letter’s drafters had something like this in mind when they included this concern.

Certainly AI itself has a mixed image in popular culture. It’s a subject I feel completely inadequate to do justice to here. But AI run amok is a significant subject of movies and fiction. Add Big Data and government surveillance, and you have plenty of tech angst floating around.

I have to wonder if this was the real point of the letter. After all, its drafters are AI researchers. They are unlikely themselves to regard AI as magic, or as an alien species. In the letter they state their hope that AI will benefit humanity in many ways.

But I have to think that they hurt their position on the benefits of AI by making straw cyborg arguments against weaponized AI simply because, in a sense, all AI is weaponized AI. All operational AI will involve target selection and the triggering of something, with humans out of the loop.

Of course, there’s a profound difference between these two scenarios: selecting the last car to go through an intersection and triggering a traffic light, versus selecting which car to blow up based on the behavioral signature of its occupants and triggering a missile.
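
To make the structural point concrete, here is a minimal sketch (hypothetical function names, standing in for no real system) of the select-and-trigger loop that, on this view, both scenarios share:

```python
# Illustrative sketch only: hypothetical functions, not any real traffic or weapon system.
# The point is that both scenarios are the same select-then-trigger loop; only the
# classifier and the actuator plugged into it differ.

def select_and_trigger(observe, select, trigger):
    """Generic autonomous loop: observe the world, pick a target, act on it."""
    world = observe()
    target = select(world)
    if target is not None:
        trigger(target)

# The traffic controller and the weapon differ only in what gets plugged in:
#   select_and_trigger(read_intersection_sensors, last_car_in_queue, hold_green_light)
#   select_and_trigger(read_drone_sensors, match_behavioral_signature, launch_missile)
```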

But the AI won’t know that in any meaningful way. The capabilities of AI may transform our world. Its values will, in the limited framework I have been using here, be the values of those with authority over its programming.

And here is the cause for the pop culture angst.  The capabilities of cybernetic technology continue to expand in reach and intensity.

Previously I have written that drones seem to inhabit the uncanny valley despite their non-human appearance.  From the viewpoint of the hunted, the drone appears to “have intelligence, agency and to be capable of highly consequential action.”

The AI researchers who wrote the letter are unlikely to experience weaponized AI in this way – they are very sophisticated about these matters, and they are not among the hunted.

Perhaps then they have a sense of Gregoire Chamayou’s idea of pragmatic co-presence.  Derek Gregory concisely describes it as “the inclusion of one within the ‘range’ or ‘reach’ of another.”

This differs from distance when technology mediates the relation.  As Gregory writes:  “What is distinctive about teletechnologies is not their capacity to act ‘at a distance’ but their indifference to and their interdigit(is)ation of ‘near’ and ‘far’.”

Drone operators have only a visual proximity to the hunted – “‘Proximity’ is contracted to the optical”. With AWs, the hunter’s proximity to the hunted is contracted entirely to Code Space.

In the Trolley Problem, closer proximity to the one to be sacrificed to save the many increases most people’s resistance to personally taking the action that sacrifices the one. AWs seem to eliminate proximity altogether, yet many – certainly the letter’s signers – perceive them as morally problematic. Does the complete lack of proximity create a kind of hyper-proximity?

Even though humans/cyborgs code AWs, the resulting invisibility of the decision-making in the moment creates enough discomfort in enough people that these leading AI researchers perceive a problem. Their solution of seeking to conceptually separate weaponized AI from household and civic AI seems to me unlikely to succeed. I wonder if they have run the simulations. I wonder if they believe it is possible.


2 thoughts on “killer robots in the uncanny valley”

  1. Nice – tweeted out a link to the piece quoting your line: “in a sense, all AI is weaponized AI.” Yet I was also having a low-tech, first-coffee of the day thought throughout: How, finally, is a land mine different from a drone? The land mine makes its “decision” to blow up based on pre-conceived objective criteria, the target’s “signature” in a crude form. The attack drone is in this sense a sophisticated mine – not that mines are completely non-controversial weapons, of course.

  2. How indeed – that pretty much reformulates the post I think.
    On a legal level, the land mine fails on all four points I specify here, but especially on its inability to discriminate combatants from non-combatants.
    On an information level, the LM’s sensory system recognizes only pressure and its absence, while AWs might recognize a huge number of factors and be capable of distinguishing C from NC as well as or better than humans. So yes, the level of sophistication is the difference. And then the question is: does that difference make a difference?
    And of course both “Artificial” and “Intelligence” are contested terms.
    Plenty of fodder for another post!
