In Part 1, I wrote:
The visceral revulsion of many seems to indicate a sense that these drones have, or will assume a life of their own, that despite their clearly mechanical appearance, they inhabit the uncanny valley.
But how can this be? A robot’s almost-but-not-quite human likeness is the core of the effect. There are in fact quite a number of drones, with various appearances, but I can’t recall one with any appreciable visual human likeness at all.
Mori’s graph shows the industrial robot as the least uncanny. But the industrial robot’s environment is highly constrained and controlled. Even the huge mining or tunneling machines exist in specific environments when doing their work.
The drone roams the greater world, our world, seemingly unconstrained and uncontrolled. Imagine observing from the ground a drone hovering for days. Then suddenly it launches a missile that strikes close by. Even for someone uninjured, it must be a breathtakingly frightening experience.
From that vantage point, the drone appears to have intelligence and agency, and to be capable of highly consequential action. I think, for many of us, this empathetic understanding is at least as strong as a more rational and factual one.
Combined with drones not looking human, this leads us to metaphorically regard them as a different species.
Eliezer Yudkowsky of the Machine Intelligence Research Institute says one of the “families of unreliable metaphors for imagining the capability of smarter-than-human Artificial Intelligence” is
Species metaphors: Inspired by differences of brain architecture between species. AIs have magic.
Drones then become a magic species, capable of raining death down on us.
Their different brain architectures, though, leave them emotionless. Human Rights Watch released its report Losing Humanity a few months ago, arguing against the development of “fully autonomous weapons”.
Even if the development of fully autonomous weapons with human-like cognition became feasible, they would lack certain human qualities, such as emotion, compassion, and the ability to understand humans. As a result, the widespread adoption of such weapons would still raise troubling legal concerns and pose other threats to civilians. (p. 6)
The report received limited coverage. Among the most substantive was Spencer Ackerman’s article Pentagon: A Human Will Always Decide When a Robot Kills You. The wry, ironic tone of the title was typical of the few articles that did appear.
The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.
Looking up from the forest floor of the Uncanny Valley, through the canopy, I’m not so sure.