The tube pulls in to a busy station on the London Underground’s Central Line. It is early evening on a Thursday. A gaggle of commuters assembles inside and outside the train, waiting for the doors to open. A moment of impatience grips the man nearest the doors. He pushes the square, green-rimmed button marked “open”. A second later, the doors satisfyingly part. The crowds mingle, jostling on and off the train, and their journeys continue. Yet whether or not the traveller knew it, his finger had no effect on the mechanism.
Some would call this a “placebo button” – a button which, objectively speaking, provides no control over a system, but which is at least psychologically fulfilling for the user to push. It turns out that there are plentiful examples of buttons which do nothing, and indeed of other technologies purposefully designed to deceive us. But here’s the really surprising thing. Many psychologists now argue that we actually benefit from the illusion that we are in control of something – even when, from an observer’s point of view, we are not.
In 2013, BBC News Magazine writer Tom de Castella discovered that pedestrian crossings up and down the UK were hotbeds of placebo buttons. A crossing in central London had programmed intervals for red and green lights, for example; pushing the button affected the length of those intervals only between midnight and 7am. In several other cities, crossings were programmed during busy periods to alternate their signals at a fixed rate. The buttons did nothing, but a “wait” light would still come on when they were pressed – and, yes, people still pressed them, presumably believing that their actions were having an effect.
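The logic described above can be sketched in a few lines of code. This is a hypothetical illustration, not any real traffic controller: the button always lights the “wait” indicator, but only changes the signal timing during an assumed off-peak window of midnight to 7am.

```python
# Illustrative sketch of a placebo crossing button. The class name,
# hours and behaviour are assumptions for illustration only.
from dataclasses import dataclass

OFF_PEAK_START = 0   # midnight
OFF_PEAK_END = 7     # 7am

@dataclass
class CrossingController:
    hour: int                   # current hour of the day, 0-23
    wait_light_on: bool = False

    def is_off_peak(self) -> bool:
        return OFF_PEAK_START <= self.hour < OFF_PEAK_END

    def press_button(self) -> bool:
        """Register a press. Returns True if the press can change the timing."""
        self.wait_light_on = True    # feedback is always given...
        return self.is_off_peak()    # ...but it only matters off-peak

# During the evening rush, the press is a placebo:
rush_hour = CrossingController(hour=18)
assert rush_hour.press_button() is False
assert rush_hour.wait_light_on is True   # the light still comes on

# In the small hours, the same press genuinely affects the signals:
small_hours = CrossingController(hour=3)
assert small_hours.press_button() is True
```

The point the sketch makes is that the feedback channel (the light) and the control channel (the timing) are entirely separate – the first can be always on while the second is switched off for most of the day.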
Certain psychologists would argue that the buttons were indeed having an effect – just not on the traffic lights themselves. Instead, the effect was in the commuters’ minds.
To understand this you have to go back to the early 1970s. At that time, psychologist Ellen Langer, now a professor at Harvard, was a graduate student at Yale. During a game of five-card draw poker, she dealt one set of cards in a haphazard order.
“Everybody,” she says, “got crazy. The cards somehow belonged to the other person even though you couldn’t see any of them.” Langer decided to find out more about the way people behaved when playing such games. She went to a casino where, at the slot machines, she found gamblers with elaborate rituals for pulling the lever. On another occasion, a “highly rational” fellow student tried to explain to her why a pair of dice could be tossed in a certain way to affect the numbers that came up. “People believed that all of these behaviours were going to increase the probability of their winning,” she comments.
Naturally they were wrong, and for many people a simple objective proof of the matter would have been enough. But not for Langer. The strength of the gamblers’ convictions was, to her, not trivial.
In 1975 she published the paper that made her famous. In it she described the significance of these beliefs and coined a term for the effect they had on people: the “illusion of control”. Langer demonstrated the phenomenon experimentally by asking subjects to play a lottery. Some participants were able to choose their own tickets, and some tickets carried symbols that were more or less familiar to them. The type of ticket had no effect whatsoever on the chance of winning, but participants appeared to believe otherwise: those who had chosen tickets with recognisable symbols were much less willing to part with them in an exchange than those who hadn’t.
But instead of framing this as an irrational delusion, Langer described the effect as a positive thing. “Feeling you have control over your world is a desirable state,” she explains. When it comes to those deceptive traffic light buttons, Langer says there could be a whole host of reasons why the placebo effect might be counted as a good thing. “Doing something is better than doing nothing, so people believe,” she says. “And when you go to press the button your attention is on the activity at hand. If I’m just standing at the corner I may not even see the light change, or I might only catch the last part of the change, in which case I could put myself in danger.”
Also, if pedestrians wait together at the crossing and a few press the button impatiently, that creates a sense of togetherness with strangers which might otherwise be absent. All of these things can be counted as positive effects on our mental state – and even as socially reinforcing.
Whether or not the designers of pedestrian crossings took an interest in our psyches is unclear, as no-one seems able to confirm when the buttons were first made intermittently ineffective, at least in the UK. Office thermostats, however, have been widely reported to sport placebo buttons, and some evidence has trickled out of the industry suggesting that the designers of these devices were well aware of the need to manage frustrated office workers’ emotions. From a company’s point of view, the temperature is best kept stable; suggesting to employees that they have had an impact on turning the heat up or down keeps them happy. It’s possible that the placebo effect may even induce a genuine sensation of cooling or warming in the body once the button is pressed, regardless of the temperature in the room.
The truth is that technology has long been deceiving us. Sometimes this is ethically questionable, but in other cases the user benefits from a sense of control and reassurance that the system is working as it should. Computer scientist Eytan Adar at the University of Michigan has described a series of fascinating “benevolent deceptions” in a paper co-written with two Microsoft researchers. Take the 1960s 1ESS telephone system, for instance. After dialling, a caller’s connection would sometimes fail to go through properly. Instead of a dead tone or an error noise, the system would simply route the call to a completely different person. “The caller, thinking that she had simply misdialled, would hang up and try again: disruption decreased and the illusion of an infallible phone system preserved,” notes the paper.
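The 1ESS trick can be captured in a toy sketch. Everything here – the function name, the fault rate, the line numbers – is invented for illustration; only the routing idea comes from the paper.

```python
# Toy model of the 1ESS behaviour: when switching fails, route the
# caller to a plausible wrong number instead of playing an error tone.
import random

FAULT_RATE = 0.05   # assumed failure probability, purely illustrative

def connect(dialled: str, lines: list[str], rng: random.Random) -> str:
    """Return the line the caller actually reaches."""
    if rng.random() > FAULT_RATE:
        return dialled   # the usual case: the call goes through
    # Benevolent deception: a wrong number looks like a misdial,
    # so the caller blames herself and simply redials.
    return rng.choice([line for line in lines if line != dialled])

# Whatever happens, the caller always reaches *someone*:
rng = random.Random(42)
lines = ["555-0100", "555-0101", "555-0102"]
for _ in range(100):
    assert connect("555-0100", lines, rng) in lines
```

The design choice is to convert a visible system fault into an apparent user error – disruption falls because redialling is a familiar, low-cost recovery action.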
Adar and his co-authors also describe how Skype calls today sometimes contain “fake static noise”, because when users experience a completely noise-free line they are prone to thinking that the call has in fact dropped. A quick Google search reveals plenty of other examples of fake noises: from pre-recorded car door slams to the artificial shutter sounds made by digital cameras, the world is full of noises designed to delight users and reassure them that a device is working as intended.
Like the 1ESS system in the 1960s, video streaming service Netflix has evolved a way of “failing gracefully”, as Adar calls it, when the personalised recommendation system goes down. Rather than make this error obvious, the Netflix home page simply shows popular films and TV shows instead. “That kind of idea is pretty common in computer science design,” explains Adar. “We try to build in some graceful degradation of the system. Lots of people would do that. It’s something we would even teach our students to do.”
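The graceful-degradation pattern Adar describes is simple to show in code. This is a minimal sketch in the spirit of the Netflix example, with invented function names and placeholder data, not Netflix’s actual implementation: if the personalised recommender fails, the page quietly falls back to a generic popularity list.

```python
# Minimal sketch of "failing gracefully": fall back to popular titles
# when the personalised service raises. All names are illustrative.

POPULAR_TITLES = ["Popular film A", "Popular show B", "Popular film C"]

def personalised_recommendations(user_id: str) -> list[str]:
    # Stand-in for a real recommendation service that can fail.
    raise ConnectionError("recommendation backend unavailable")

def home_page_titles(user_id: str) -> list[str]:
    try:
        return personalised_recommendations(user_id)
    except Exception:
        # Graceful degradation: the user sees a sensible default,
        # not an error page, and the failure stays invisible.
        return POPULAR_TITLES

assert home_page_titles("alice") == POPULAR_TITLES
```

The user-facing contract – “this function always returns something watchable” – is kept even when the component behind it is down, which is exactly the degradation Adar says computer scientists teach their students to build in.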
Adar thinks that some people are occasionally getting wise to such tricks. But, interestingly, he believes there is still plenty of opportunity to deceive – benevolently or otherwise.
A benign example he gives is of computer players in video games that are programmed to be realistically stupid. Players, it seems, find learning a game more enjoyable when their artificial opponent is doomed to fail – especially if they don’t realise that they, the human, have been given the upper hand.
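One common way to build such an opponent – a sketch under assumptions, since Adar names no specific technique – is to have the AI rank its moves honestly but occasionally discard the best one. The game, the moves and the blunder rate below are all invented for illustration.

```python
# Illustrative "realistically stupid" opponent: with some probability
# it deliberately plays a weaker move than the best one it found.
import random

def choose_move(scored_moves, blunder_rate=0.3, rng=None):
    """Pick the best-scoring move, except when we deliberately blunder."""
    rng = rng or random.Random()
    ranked = sorted(scored_moves, key=scored_moves.get, reverse=True)
    if len(ranked) > 1 and rng.random() < blunder_rate:
        return rng.choice(ranked[1:])   # pick any move but the best
    return ranked[0]

moves = {"attack": 0.9, "defend": 0.5, "wait": 0.1}
# With blunder_rate=0 the opponent always plays optimally:
assert choose_move(moves, blunder_rate=0.0) == "attack"
# With blunder_rate=1 it always picks something weaker:
assert choose_move(moves, blunder_rate=1.0, rng=random.Random(0)) in {"defend", "wait"}
```

Because the blunders arrive at random, the opponent’s play looks plausibly human rather than obviously handicapped – the player never sees the deliberate error, only an enemy that sometimes slips up.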
All of these deceptions are arguably desirable. Whether or not you feel cheated by them, their effect is largely for the greater good. People feel happier with the world around them, more in control of events and comforted by the apparent efficacy of their actions. But what if the illusion of control had negative effects? What if it made people do things that weren’t just detrimental to themselves, but the whole of society?
These are the questions raised by Mark Fenton-O’Creevy and his fellow researchers. Back in 2003, they published a study on the illusion of control in financial traders. During a quick game, traders were told that pressing buttons on a computer keyboard “may” have some effect on the value of a financial index they were watching rise and fall. In reality the buttons had no effect: the index’s movements were pre-determined. But some traders felt their button-bashing had more impact than others did, and were therefore rated as more prone to the illusion of control. The really interesting result was that these were the traders who, in their real-life day jobs, earned less and were given lower performance ratings by their managers.
“A skilled trader should be able to reflect critically on their performance,” says Fenton-O’Creevy. “They should be able to tell if they made the right decision and got lucky, or made the right decision and it went badly.”
The research also suggested that the illusion of control was not induced equally in all of the participants. Rather, certain personality traits, such as self-determination and coping ability, appeared to make some traders more susceptible to the illusion than others. Traders who felt more threatened by competition with their peers, for example, might be more likely to depend on an illusion of control.
Fenton-O’Creevy’s work is quietly popular today with financial professionals. Since the first decade of the millennium, he’s found it easier to do research at traditionally secretive investment banks because, he says, hedge fund managers are familiar with his studies. Perhaps this is partly because the implications of his findings are in reality quite stark.
In 2008, economies around the world were rocked by the failure of several financial institutions, culminating in a catastrophic financial crisis. Did an illusion of control prevail at the banks where subprime mortgages were dealt so haphazardly? Fenton-O’Creevy thinks it’s likely.
“This wasn’t rocket science. Most banks have a very strong understanding of the relationship between risk and return. So if you find yourself making a very high return in a market, the obvious question is, ‘if we’re making unusually high levels of return what are the risks we’re taking?’ It’s very obvious people weren’t asking those sorts of questions,” he says.
Far from being a comforting salve for the conscience, in this particular case illusions of control may have helped to upend huge businesses and economies, harming the lives of millions. The stakes could hardly be greater.
It’s something to think about next time you cross the street.