killer robots in the uncanny valley

Recently, 1,000 leading artificial intelligence experts and researchers signed an open letter calling for a ban on the development of “offensive autonomous weapons beyond meaningful human control.” The letter was released at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. Initial signatories included Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis, and Professor Stephen Hawking. Since then, the number of signatories has approached 20,000.

The letter focuses on autonomous weapons, that is, those over which humans have no “meaningful control.”

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The crucial dimension setting autonomous weapons apart from other highly technological/cybernetic weapons such as drones and cruise missiles is the automated selection and engagement of targets. In 2013, Human Rights Watch, in its report Losing Humanity, provided a somewhat expanded account of the difference between autonomous weapons and others:

Unmanned technology possesses at least some level of autonomy, which refers to the ability of a machine to operate without human supervision. At lower levels, autonomy can consist simply of the ability to return to base in case of a malfunction. If a weapon were fully autonomous, it would “identify targets and … trigger itself.” Today’s robotic weapons still have a human being in the decision-making loop, requiring human intervention before the weapons take any lethal action. The aerial drones currently in operation, for instance, depend on a person to make the final decision whether to fire on a target.


The Legal Cyborg

The recent unanimous US Supreme Court decision, Riley v. California, ruled that police need a warrant to search the cell phones of those they arrest. At issue was whether a search of an arrested person’s cell phone qualifies as a search incident to arrest. Such searches are allowable because they can find objects harmful to the safety of the arresting officer and prevent the destruction of evidence. The Court found that neither concern applied to information accessible by cell phones and that police should obtain warrants to authorize such searches.

This is of course an important finding, but my purpose here is to look at some of the ideas about communication technology embedded in the decision. The most obvious and widely quoted example is the following.

These cases require us to decide how the search incident to arrest doctrine applies to modern cell phones, which are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.

While undoubtedly an attempt at humor, the “visitor from Mars” mistaking a cell phone for a body part also introduces the concept of the cyborg, the hybrid of human and machine.  Is it too much to speculate that our Supreme Court Justices are Anxious Cyborgs too?

Cell phones differ in both a quantitative and a qualitative sense from other objects that might be kept on an arrestee’s person. The term “cell phone” is itself misleading shorthand; … One of the most notable distinguishing features of modern cell phones is their immense storage capacity. Before cell phones, a search of a person was limited by physical realities and tended as a general matter to constitute only a narrow intrusion on privacy.

I read “physical realities” here as shorthand for “non-digitally coded” realities. The decision goes on to discuss the file cabinets and other containers one would have to cart around to have at immediate disposal the information accessible with a cell phone.

Finally, there is an element of pervasiveness that characterizes cell phones but not physical records. Prior to the digital age, people did not typically carry a cache of sensitive personal information with them as they went about their day. Now it is the person who is not carrying a cellphone, with all that it contains, who is the exception. According to one poll, nearly three-quarters of smart phone users report being within five feet of their phones most of the time, with 12% admitting that they even use their phones in the shower.

I find the Court’s identification of the pervasiveness and intimacy of cell phone use especially significant. It may begin to recognize “cyborg” as part of the legal meaning of “person”.

Alexis Dyschkant writes about the legal importance of establishing the boundary of a person when determining if one has been wrongfully contacted.

Historically, “one’s person” has been limited to “one’s natural body” and some, but not all, artificial attachments to one’s natural body. The cyborg, a creature composed of artificial and natural parts, challenges this conception of a “person” because it tests the distinction between the natural body and an artificial part. Artificial objects, such as prosthetics, are so closely attached to bodies as to be considered a part of one’s person. However, claiming that personhood extends to things attached to our natural bodies oversimplifies the complicated interrelation between natural objects and artificial objects in the cyborg. If our person is no longer limited to our natural body, then we must understand personhood in a way that includes the cyborg. I argue that the composition of a body does not determine the composition of a person. One’s person consists to the extent of one’s agency. Cyborgs: Natural Bodies, Unnatural Parts, and the Legal Person

I doubt the Justices intended Riley to redefine the boundaries of a person as the boundaries of one’s agency.  However, their arguments based on pervasiveness and intimacy do, I argue, move in that direction.

In a Buddhist context, I have argued in the past that many people experience their communication devices as part of the illusion of an inherently existing self. There I suggested extending traditional meditations on establishing the boundaries of this illusion to include, for example, cell phones.

For the cyborg, this meditation could be expanded to include the artifacts of technology that she has aggregated into her experience of self. For instance, many people might experience the theft or malicious destruction of their cell phone as an assault. Some may relate to the field of information their communication technology produces as part of their inherently existing self. The Negated Cyborg

Dyschkant echoes and extends this meditation, creating a vision of personhood that eventually dispenses with the body altogether and consists entirely of agency.

What the cyborg shows us is that the body can be composed of any kind of part but the person is necessarily the agent which controls, benefits from, and depends upon these parts.  Human tissue, animal tissue, or mechanical “tissue” all allow a person to exercise their agency and interact with the world.  The type of body which a person controls need not be relevant.  Hence, determining when one has made contact with “the person of another” does not necessarily depend on the naturalness or composition of one’s body, but on the relationship between the object contacted and the person’s agency.  We can imagine a technologically advanced future in which people retain control over parts detached entirely from their body or in which one’s person is dispersed across great spaces.

Perhaps at some point the concept of a legal person begins to break down.  Perhaps then the Buddhist idea of non-self, of the negation of an inherently existing self, becomes codified into law.

The Forgotten Cyborg

I read with interest the May 14 decision by the European Court of Justice applying a Spanish “right to be forgotten” law to Google. A number of European countries have such laws.

The test case privacy ruling by the European Union‘s court of justice against Google Spain was brought by a Spanish man, Mario Costeja González, after he failed to secure the deletion of an auction notice of his repossessed home dating from 1998 on the website of a mass circulation newspaper in Catalonia.

Costeja González argued that the matter, in which his house had been auctioned to recover his social security debts, had been resolved and should no longer be linked to him whenever his name was searched on Google. “EU court backs ‘right to be forgotten’: Google must amend results on request,” The Guardian, May 13, 2014

The ruling creates a process for individuals to request that search engines delete links. The search engine would then consider the request, weighing the individual’s concerns against the public’s right to know. An individual unhappy with the search engine’s decision could appeal to the ECJ.

In the last installment of my review of Code/Space, I discussed Kitchin and Dodge’s ethics of forgetting as a way to address the everyware nature of code. Their concern includes the internet, but also all coded objects, processes, and structures. As I quoted in my review, they state:

One path…is to construct an ethics of forgetting in relation to pervasive computing….[T]echnologies that “store and manage a lifetime’s worth of everything” should always be complemented by forgetting…So rather than focus on the prescriptive [ethics], we envision necessary processes of forgetting…that should be built into code, ensuring a sufficient degree of imperfection, loss and error. Code/Space (Kitchin and Dodge), p. 253
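Kitchin and Dodge leave the mechanics open, but one can sketch what code with forgetting “built in” might look like. The toy class below is a hypothetical illustration, not anything they propose: a key-value store whose `sweep()` deliberately drops each record with some probability, so that loss and imperfection are features of the archive rather than failures.

```python
import random


class ForgetfulStore:
    """Toy key-value store that deliberately forgets.

    A sketch of an "ethics of forgetting": each sweep gives every
    record a chance to decay, so the archive accumulates loss by design.
    """

    def __init__(self, decay=0.5, seed=None):
        self.decay = decay              # per-sweep probability a record is lost
        self.records = {}
        self.rng = random.Random(seed)  # seedable for reproducible demos

    def remember(self, key, value):
        self.records[key] = value

    def sweep(self):
        """One pass of deliberate forgetting: drop each record
        with probability `decay`."""
        for key in list(self.records):
            if self.rng.random() < self.decay:
                del self.records[key]

    def recall(self, key):
        return self.records.get(key)    # None once forgotten


# With decay=1.0 a single sweep erases everything; the 1998 auction
# notice in the Costeja González case would simply expire.
store = ForgetfulStore(decay=1.0)
store.remember("auction-1998", "repossession notice")
store.sweep()
print(store.recall("auction-1998"))  # None
```

The design choice worth noticing is that forgetting here is unconditional and mechanical; no one weighs the public’s right to know against the individual’s privacy, which is precisely what distinguishes this approach from the case-by-case process the ECJ ruling creates.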

The ECJ decision highlights the issue they present, and shows the prescriptive approach they identify to be inadequate to the task. Various commentators have identified the challenges and dangers this ruling presents.

It’s possible, of course, that although the European regulation defines the right to be forgotten very broadly, it will be applied more narrowly. Europeans have a long tradition of declaring abstract privacy rights in theory that they fail to enforce in practice. And the regulation may be further refined over the next year or so, as the European Parliament and the Council of Ministers hammer out the details. But in announcing the regulation, Reding said she wanted it to be ambiguous so that it could accommodate new technologies in the future. “This regulation needs to stand for 30 years—it needs to be very clear but imprecise enough that changes in the markets or public opinion can be maneuvered in the regulation,” she declared ominously.[16] Once the regulation is promulgated, moreover, it will instantly become law throughout the European Union, and if the E.U. withdraws from the safe harbor agreement that is currently in place, the European framework could be imposed on U.S. companies doing business in Europe as well.[17] It’s hard to imagine that the Internet that results will be as free and open as it is now.  The Right to Be Forgotten  Jeffrey Rosen (Stanford Law Review)

Kitchin and Dodge’s approach is hard to imagine in operation. Dueling discourses such as security/privacy, creativity/control, and efficiency/accommodation illustrate the stakes. The problem with remembering has always been letting go. The problem with forgetting is never knowing what is forgotten. We assume there must be a way to manage this kind of thing; all we need, we tell ourselves, is a system. I will follow the progress of this ruling’s effects with interest.