killer robots in the uncanny valley

Recently, 1,000 leading artificial intelligence experts and researchers signed an open letter calling for a ban on the development of “offensive autonomous weapons beyond meaningful human control.” The letter was released at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. Initial signatories included Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and Professor Stephen Hawking. Since then, the number of signatories has approached 20,000.

The letter focuses on autonomous weapons – that is, those over which humans have no “meaningful control”.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The crucial dimension setting autonomous weapons apart from other highly technological/cybernetic weapons such as drones and cruise missiles is the automated selection and engagement of targets. In its 2012 report Losing Humanity, Human Rights Watch provided a somewhat expanded version of the difference between autonomous weapons and others:

Unmanned technology possesses at least some level of autonomy, which refers to the ability of a machine to operate without human supervision. At lower levels, autonomy can consist simply of the ability to return to base in case of a malfunction. If a weapon were fully autonomous, it would “identify targets and … trigger itself.” Today’s robotic weapons still have a human being in the decision-making loop, requiring human intervention before the weapons take any lethal action. The aerial drones currently in operation, for instance, depend on a person to make the final decision whether to fire on a target.


Drone Strikes in the Uncanny Valley – Part 3

Part 2 asserts that from the Uncanny Valley’s forest floor, the drone seems both an uncanny robot and a living nonhuman species. Of course, neither is true.

The drone is a remote appendage of a cyborg. The parts of this entity include a human at a control panel and all the technological infrastructure the drone needs to complete its mission. Distributed across the world, it is a functional human/machine hybrid, just as a human immersed in an electronic device, or in union with a pacemaker, is.

Looking down at the Valley’s forest floor for a moment, perhaps distracted by a sound, or just overwhelmed by the vigilance of looking at the sky, I see this:

atomic angel

Destroying Angels (a group of closely related Amanita species found around the world) are among the most deadly mushrooms there are. Together with the closely related Death Cap, the various species of Destroying Angel account for up to 95% of deaths from mushroom poisoning.

These visible mushrooms, though, are only a projectile of the underground organism, the mycelium. This part of a fungus can be huge. Depending on the criteria one uses, a honey fungus in Oregon’s Malheur National Forest is the largest living organism on earth.

Additionally, the fungus lives in symbiosis with the surrounding trees, penetrating into the cells of tree roots, becoming a functional entity, becoming one thing, becoming a non-human/non-machine cyborg.

Standing on the forest floor of the Uncanny Valley, the potential of death hovers above me and stands as witness at my feet.

Drone Strikes in the Uncanny Valley – Part 2

In Part 1,  I wrote:

The visceral revulsion of many seems to indicate a sense that these drones have, or will assume, a life of their own; that despite their clearly mechanical appearance, they inhabit the uncanny valley.

But how can this be? A robot’s too-close-yet-not-close-enough human likeness is the core of the effect. There are in fact quite a number of drones, with various appearances, but I can’t recall one with any appreciable visual human likeness at all.

Mori’s graph shows the industrial robot as the least uncanny. But the industrial robot’s environment is highly constrained and controlled. Even the huge mining or tunneling machines exist in specific environments when doing their work.

The drone roams the greater world, our world, seemingly unconstrained and uncontrolled. Imagine observing from the ground a drone hovering for days. Then suddenly it launches a missile that strikes close by. Even if one is uninjured, it must be a breathtakingly frightening experience.

From that vantage point, the drone appears to have intelligence and agency, and to be capable of highly consequential action. I think, for many of us, this empathetic understanding is at least as strong as a more rational and factual one.

Combined with drones not looking human, this leads us to metaphorically regard them as a different species.

Eliezer Yudkowsky of the Machine Intelligence Research Institute says one of the “families of unreliable metaphors for imagining the capability of smarter-than-human Artificial Intelligence” is

Species metaphors: Inspired by differences of brain architecture between species. AIs have magic.

Drones then become a magic species, capable of raining death down on us.

Their different brain architectures, though, leave them emotionless. Human Rights Watch released its report Losing Humanity a few months ago, arguing against the development of “fully autonomous weapons”.

Even if the development of fully autonomous weapons with human-like cognition became feasible, they would lack certain human qualities, such as emotion, compassion, and the ability to understand humans. As a result, the widespread adoption of such weapons would still raise troubling legal concerns and pose other threats to civilians. (p. 6)

The report received limited coverage. Among the most substantive was Spencer Ackerman’s article Pentagon: A Human Will Always Decide When a Robot Kills You. The wry, ironic tone of the title was typical of the few articles that did appear.

The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

Looking up from the forest floor of the Uncanny Valley, through the canopy, I’m not so sure.

Drone Strikes in the Uncanny Valley – Part 1

The debate about drone warfare is complex and beyond my capabilities or intentions here. For a far-ranging discussion I recommend The Quarterly DAG-3QD Peace and Justice Symposium: Drones.

The symposium participants discuss one of the core issues of the debate, “the threshold problem.” In the final essay of the Symposium, Reply to Critics: No Easy Answers, Bradley Jay Strawser writes:

Of course, the very notion that a threat can be justifiably blocked by killing, while sound in principle and sometimes in practice, is ripe for abuse and misuse. So the pressing moral issue for the drone campaign is how the notion of “imminent threat” is being evaluated, measured, and properly understood.

…I find it insightful of Levine to point out how the distinction between intelligence and military action in the US has all but collapsed. I agree with him that this is a serious problem. The CIA should be in the business of intelligence, not direct lethal action.[15] One wonders then, whether drones are merely a symptom of this state of affairs or a partial cause of it?

Additionally, CK MacLeod, in Further on Pathos v the Drones: Conventionalizing the Unconventionalizable, expands the focus and summarizes a discomfort with drone warfare I share.

We can explain this ancient-present predicament as follows: Those great confrontations, with their all but unimaginably great death and destruction, produced the “conventions” of war within and against which 4th Generation warriors define themselves. By design and necessity, “un-conventional” warfare cannot be handled entirely by “conventional” warmaking, “conventional” thinking about warfare, or the legal and political “conventions” that have not caught up with it and that it means to defy. We feel as though we are in a void between the former, obsolete conventions and that which has not been conventionalized and perhaps cannot be conventionalized…

Our relation to any void is not knowing. Here, even the horrors humanity has managed to invent in the past pause, not knowing. Is this in fact an incremental technological innovation in war, or, as it seems to feel to many, a change in direction, in type?

The visceral revulsion of many seems to indicate a sense that these drones have, or will assume, a life of their own; that despite their clearly mechanical appearance, they inhabit the uncanny valley.

Certainly science fiction has from its beginning responded to, formed, and fed fantasies of our creations living for themselves and threatening us. Many of the monsters of myth, from Gilgamesh on, seem on the surface to be some sort of unnatural union, when they really are human creations coming alive.