The latest instalment in the Terminator franchise sees Arnie’s ageing android rewrite history. Artificial Intelligence specialist Gary Marcus separates the science from the fiction.

Unless you love special effects involving helicopters and molten metal, there is no real reason to see Terminator Genisys. On the night I saw it, only five people showed up, and two left early.  If you have already seen T2 (which you should have), there’s really nothing new. 

And yet the spectre of Skynet, first introduced in 1984 thanks to James Cameron, still haunts us all. The premise of all five movies in the Terminator series is the idea that one day, in the not-too-distant future, machines, tired of being our slaves, will take over and try to annihilate us.

Why they would do that rather than relegate us to zoos, I don’t know, but I do know that when the first Terminator came out in the 1980s, the premise seemed like a joke. Artificial intelligence (AI) didn’t then seem poised to take over the world; it seemed poised to disappear. After the initial enthusiasm of the 1950s and 1960s, the field had hit seriously hard times, a period often known as ‘the AI winter’. In Britain, the Lighthill report of 1973 had all but crushed AI research; the field had almost no significant commercial value, and most people had scarcely heard of it.

Today, some of the most influential people of our time are scared witless. Elon Musk is worried that AI could “summon a demon”, and Stephen Hawking worries that AI will be our last invention, and not in a good way. On 1 July, the Future of Life Institute announced $7m (£4.5m) of funding – much of it from Musk – for research “aimed at keeping AI beneficial to humanity”. The Institute’s president warned that the new film could risk blurring fact and fiction. “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI,” said Max Tegmark.

As it happens, I spent last week talking about AI with some of the leading thinkers in the field, and some heavy-duty corporate titans as well. ‘Chatham House rules’ forbid me from mentioning the particulars, but two things became clear: first, some awfully bright people are genuinely worried about AI, and second, nobody knows for sure what’s going to happen. In a private after-discussion, doubters and worriers duelled to a draw, with uncertainty carrying the day.

Rage against the machines

The key issue is one that the film says almost nothing about: will machines ever care enough about us to hurt us or try to seize our resources? The honest answer is that nobody knows for sure.

Here’s my own view: Skynet, with all its associated robots firing heavy shrapnel (and shape-shifting in the most molten ways), is ridiculous. Certainly at no time soon will robots have remotely enough dexterity to take on Arnold Schwarzenegger; this ought to be obvious to anyone who watched last month’s DARPA Robotics Challenge, and the hilarious and widely-circulated YouTube video of robots falling flat on their backs. The current state of AI, and especially the field of physically-embodied artificial intelligence, otherwise known as robotics, is fairly unimpressive. We might have a globally-networked operating system in 2017 (as the latest Terminator film suggests), but we sure as heck won’t soon have robots that fluidly manoeuvre in a human-engineered world. (Modesty forbids me from estimating when metallic shape-shifters will come.)

Yet we may well have something serious to worry about, whether or not Skynet comes. To begin with, artificial intelligence, if left entirely unchecked, doesn’t need to be embodied in physical robots to do harm; for the moment, anyone is free to write more or less any software, and software doesn’t need to take corporeal form to cause damage. Already software programs have caused ‘flash crashes’ on the stock market; the more access they have to the world, the more potential harm they could cause. (If there’s one thing the film does get right, it is the risk of networking absolutely everything in the world together.)

Second, robots or other artificial intelligences don’t need to be fully autonomous to do us harm; already we live in a world where criminals use AI to steal credit cards and inundate us with spam. The smarter machines get, the more they can be a weapon for so-called ‘bad actors’; this is of course true of any technology, but the stakes may be higher as systems become more and more networked. (What happens, for instance, if some devious mind 20 years hence tries to hack the world’s array of networked, autonomous vehicles?)

Meanwhile, all the currently trendy talk about ‘superintelligence’ might be misplaced; intelligence is not a single-dimensional trait (like height or weight, something that can be measured with one number) but a complex amalgam of many different cognitive traits. AI already vastly exceeds us in its capacity to memorise and calculate, but currently lags far behind us in its ability to plan, reason, and comprehend. Incremental advances in each of these domains will occur over time; there will be no magical date of superintelligence per se, but rather a constant growth in artificial intelligence spread out over decades. Machines don’t need to be smarter than all of us in all ways to be potentially dangerous; they merely need to be powerful.

Luckily, as yet, nobody knows why synthetic agents would ever care about us; for now we are probably pretty safe, protected by their indifference. But the rub is that nobody knows how to guarantee that our current state of bliss – in which machines are indifferent to humanity, and uninterested in our resources – will last.

AI’s potential benefits are far too great to forgo: by bringing us to a deeper understanding of science, and by facilitating advances in technologies ranging from biotech to materials science, artificial intelligence could easily end poverty, lead to a cure for cancer or Alzheimer’s, and spark advances in space travel that ultimately (perhaps centuries or millennia hence) save our species in the event of an asteroid strike or other unforeseen calamity. In the biggest intellectual challenges of our time, smarter and smarter computers will inevitably play a starring role.

But that doesn’t mean that we shouldn’t consider AI’s risks as well as its rewards. What I love about the Terminator films is that they make people reflect, at least for an instant, on those risks; what I hate about them is that they are so ludicrous it is hard to take them seriously. AI risk winds up seeming a lot like science fiction. But nuclear weapons were science fiction once, too. Just in case there is a real risk, we ought to make like the Boy Scouts and be prepared.

Gary Marcus is CEO and Founder of Geometric Intelligence, Inc, and Professor of Psychology and Neuroscience at New York University. His most recent book is The Future of the Brain.