Cyborgs on Edge

Since 1998 the digital magazine Edge has posed an annual question, designed to contribute to discussions about issues facing humanity, to a variety of accomplished people. The overall project of Edge is to promote a “third culture” which “consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.”

Edge generally poses these questions in a way that invites a very wide range of interpretations and provocations. This year’s question is:

2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

The editors frame this overall question with the following:

Just over a month ago, in early December, Stephen Hawking warned of the potentially apocalyptic consequences of artificial intelligence, which in his opinion could eventually lead to “the end of the human species”. But really, should we fear the danger of a future army of out-of-control humanoids? Or should we instead celebrate the extraordinary opportunities that the development of thinking machines, and even sentient beings, could give us? Do such beings, along with ourselves, pose new ethical dilemmas? Would they be part of our “society”? Should we grant them civil rights? Would we feel empathy for them?

While the intellectual latitude is wide, the Edge editors instruct contributors to put aside the things of a child, such as fiction and movies, and to “grow up” with some rigorous thinking about Artificial Intelligence. The post contains responses from 182 contributors. I’ve sampled a fair number of them, but I’m sure I’ve missed more than I’ve absorbed.

So given the editorial stance, no crazy talk about cyborgs or Buddhism here. Even Andy Clark (of “we have always been cyborgs” fame) strikes a mostly reasonable tone – although he does worry at the end of his essay that, while unlikely, machine intelligence may end up eating us.

Many of the contributors focus on the key words in the question, “think” and “machine”, or on elements of related concepts such as “artificial” and “intelligence”. Not only are such approaches useful, they are necessary. My impression is that this is the plane on which many of the essays are most successful.

I expect to make my way through more of the essays, and to do so with more focus and attention than the skimming I have done so far. I’ll discuss my thoughts in future posts. Meanwhile, check it out for yourself.

observations on “post #74”

For many occupants, experiencing contrast between digital and analog space can heighten the vivid heuristic sense of first person now-ness that can become dulled with immersion in one or the other.

An occupant’s perception of either as unexpectedly stale can damage, possibly destroy, the transmuted fourth wall of the space, encouraging, but not completing, a sense of ruin-ness.

Spaces become ruined by decay (dead links, crumbling walls), the encroachment of the out-of-place (trees growing through roofs, obvious spam in the Comments), and progressive temporal decontextualization. This is an a-sequentiality rather than an a-temporality. The de-purposing of such spaces depends, as everything depends, on a/the defining point of view. A completely de-purposed space is the only completely ruined space.

Time and space do not easily cohabit. The mere passage of un-updated time opens discoherent voids between mediated space and the occupant. Is it loss or inability, amnesia or aphasia, ghost or monster?

This can reproduce the politics of trauma, which is all politics, in the encounter with mediated space, which is all space.

The Anxious Cyborg – 2

As the human/machine relationship continues to develop, information processing increasingly defines how work is done. In turn, it enters into human considerations of what it means to exist, to intend and to act.

The flow of information mediated by computed coding enables a built environment that creates ever-increasing opportunities for more information and, in turn, more machine work and functioning. As I previously wrote:

From a machinic perspective, the development of M2M technology introduces a reverse instrumentality.  Technology continues to serve cyborg ends, but cyborgs also become data factories for machines.   Technology has begun to have as its end its own growth and evolution as much as whatever human function it may nominally have….  The world becomes the operational environment of technology.  The Anxious Cyborg

This, of course, is simply my particular iteration of a complex of ideas that others have discussed for a long time now. Yet, even accounting for the self-reinforcing character of much of my blog reading, I feel I have been encountering an unusual number of variations and improvements on this theme.

A recent post by R. Scott Bakker gives a flavor of the broadly integrative approach that characterizes his blog. He has a particular interest in exploring the vivid deceptiveness of human self-awareness.

Meanwhile, it seems almost certain that the future is only going to become progressively more post-intentional, more difficult to adequately cognize via our murky, apriori intuitions regarding normativity. Even as we speak, society is beginning a second great wave of rationalization, an extraction of organizational efficiencies via the pattern recognition power of Big Data: the New Social Physics. The irrelevance of content—the game of giving and asking for reasons—stands at the root of this movement, whose successes have been dramatic enough to trigger a kind of Moneyball revolution within the corporate world. Where all our previous organizational endeavours have arisen as products of consultation and experimentation, we’re now being organized by our ever-increasing transparency to ever-complicating algorithms. As Alex Pentland (whose MIT lab stands at the forefront of this movement) points out, “most of our beliefs and habits are learned by observing the attitudes, actions, and outcomes of peers, rather than by logic or argument” (Social Physics, 61). The efficiency of our interrelations primarily turns on our unconscious ability to ape our peers, on automatic social learning, not reasoning. Thus first person estimations of character, intelligence, and intent are abandoned in favour of statistical models of institutional behaviour.  Arguing No One: Wolfendale and the Penury of ‘Pragmatic Functionalism’ R. Scott Bakker

Taking a more political turn, Robin James describes the mutual arising of behavior and data in the context of capitalism.

Big data capital wants to get in synch with you just as much as post-identity MRWaSP wants you to synch up with it. [2] Cheney-Lippold calls this process of mutual adaptation “modulation” (168).   A type of “perpetual training” (169) of both us and the algorithms that monitor us and send us information, modulation compels us to temper ourselves by the scales set out by algorithmic capitalism, but it also re-tunes these algorithms to fall more efficiently in phase with the segments of the population it needs to control.

The algorithms you synch up with determine the kinds of opportunities and resources that will be directed your way, and the number of extra hoops you will need to jump through (or not) to be able to access them. Modulation “predicts our lives as users by tethering the potential for alternative futures to our previous actions as users” (Cheney-Lippold 169). Your past patterns of behavior determine the opportunities offered you, and the resources you’re given to realize those opportunities.  Robin James, Visible Social Identities vs Algorithmic Identities

Shifting the focus from a systemic and political view, Alistair Croll discusses the individual ethical dimensions of these issues.

Big data is about reducing the cost of analyzing our world. The resulting abundance is triggering entirely new ways of using that data. Visualizations, interfaces, and ubiquitous data collection are increasingly important, because they feed the machine — and the machine is hungry….

Perhaps the biggest threat that a data-driven world presents is an ethical one. Our social safety net is woven on uncertainty. We have welfare, insurance, and other institutions precisely because we can’t tell what’s going to happen — so we amortize that risk across shared resources. The better we are at predicting the future, the less we’ll be willing to share our fates with others. And the more those predictions look like facts, the more justice looks like thoughtcrime.  Alistair Croll, New ethics for a new world

Of course, many cyborgs look forward to all of this with optimism and a sense of opportunity.

When you can use AI as a conduit, as an orchestrating mechanism to the world of information and services, you find yourself in a place where services don’t need to be discovered by an app store or search engine. It’s a new space where users will no longer be required to navigate each individual application or service to find and do what they want. Rather they move effortlessly from one need to the next with thousands of services competing and cooperating to accomplish their desires and tasks simply by expressing their desires. Just by asking….

At this contextual “just arranged a date” moment lies an opportunity to intelligently prompt if the user would like to see what is going on on Friday night in the area, get tickets, book dinner reservations, send an Uber to pick them up or send flowers to the table. Incremental revenue nirvana.  Dag Kittlaus, A Cambrian Explosion in AI Is Coming

But then it’s always been swell to have money.