The danger of 'emotional' machines

[Image: A French supercomputer used to predict the weather. Copyright: Getty Images]
Image caption: Supercomputers powerful enough to forecast the weather may soon be able to predict - and even influence - human behaviour

Last week a computer program reportedly passed the Turing test by convincing the humans it communicated with that it was a real person at least 30% of the time.

Although the results have been challenged by some scientists, it seems to be only a matter of time before computer artificial intelligence is sufficiently capable of pulling off the ruse.

The prospect has Die Welt's Torsten Krauel worried.

"Computers with emotional intelligence can now trick us," he writes (translated by WorldCrunch). "If in the near future we get a text message that is, for example, ironic or kindhearted, it could be from a memory chip that - while it doesn't know what feelings are - can compute them."

The implications for this are far-reaching. Not only could computers be able to imitate humans, they could also draw on their vast stores of information to craft a precise message much more effectively than even the most skilled carbon-based life form.

In politics and stock markets, for instance, predicting - and even shaping - human actions and opinions can be the difference between victory and defeat, riches and ruin.

And all this is happening in shorter and shorter segments of time, Krauel writes:

Because in a nanosecond world where everything is interrelated, secret services and politicians and activists want to be able to identify relevant developments in real time. So, for example, they seek ways to recognize as quickly as possible any unusual news spreading on Twitter or other communication pages.

Time and space have always meant that we could be forewarned, have some time to figure out a reaction even if it was just to protect ourselves. That's over. Now people can no longer be sure if the internet is obeying humans or instead computers that have simply come to know what emotional stimuli are. The situation is claustrophobic.

How are humans responding to this threat? In a very illogical way, Krauel says. While they don't want machines prying into their private affairs, they like the security and information that constant electronic monitoring provides.

All this means the internet, with its boundless information available almost instantly, is becoming a threat, he says - a weapon that could be dangerous when wielded by intelligent machines.

He concludes:

Living together in a real-time world order is a little like being in a packed elevator - an elevator that, like the spaceship in Stanley Kubrick's 2001: A Space Odyssey, is controlled by a rebel computer. Players in the economy, military and politics, but more and more also regular folks, are beginning somewhat uneasily to register the internet-induced loss of the protection provided by space, time and the private sphere.

Krauel's worries are hardly unique. Last month astrophysicist Stephen Hawking joined several other scientists in arguing that "self-aware" machines could be the "worst mistake in human history". Two weeks ago, a professor from Emory University in Atlanta, Georgia, wondered whether sentient machines should be considered "people" under the law.

"The establishment of personhood is an assessment made to grant an entity rights and obligations, regardless of how it looks and whether it could pass for human," Mark Goldfeder writes.

At some point, the growing chorus of concerns can no longer be dismissed as the product of runaway imaginations and science-fiction-inspired speculation, can it?