Is The Lord of the Rings the greatest work of literature of the 20th Century? Is The Shawshank Redemption the best movie ever made? Both have been awarded these titles by public votes. You don’t have to be a literary or film snob to wonder about the wisdom of so-called ‘wisdom of the crowd’.
In an age routinely denounced as selfishly individualistic, it’s curious that a great deal of faith still seems to lie with the judgement of the crowd, especially when it can apparently be far off the mark. Yet there is some truth underpinning the idea that the masses can make more accurate collective judgements than expert individuals. So why is a crowd sometimes right and sometimes disastrously wrong?
The notion that a group’s judgement can be surprisingly good was most compellingly justified in James Surowiecki’s 2005 book The Wisdom of Crowds, and is generally traced back to an observation by Charles Darwin’s cousin Francis Galton in 1907. Galton pointed out that the average of all the entries in a ‘guess the weight of the ox’ competition at a country fair was amazingly accurate – beating not only most of the individual guesses but also those of alleged cattle experts. This is the essence of the wisdom of crowds: their average judgement converges on the right solution.
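Galton's observation is easy to reproduce numerically. The sketch below is a minimal toy model, not his actual data: it assumes each fair-goer makes an independent guess scattered around the true weight (taken as 1,198 lb, the figure usually quoted from Galton's account, with an assumed spread of 100 lb), then checks how the crowd's average compares with the individuals.

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # the ox's weight in pounds, per Galton's account
NOISE_SD = 100      # assumed spread of individual guesses (illustrative)

# Each fair-goer makes an independent, noisy guess around the true value.
guesses = [random.gauss(TRUE_WEIGHT, NOISE_SD) for _ in range(800)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

# How many individual guesses beat the crowd average?
better_individuals = sum(abs(g - TRUE_WEIGHT) < crowd_error for g in guesses)

print(f"crowd error: {crowd_error:.1f} lb")
print(f"individuals closer than the crowd: {better_individuals} of {len(guesses)}")
```

Because the individual errors are independent, they largely cancel when averaged: the crowd's error shrinks roughly with the square root of the group size, so only a small minority of individuals outdo the average.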
Still, Surowiecki also pointed out that the crowd is far from infallible. He explained that one requirement for a good crowd judgement is that people’s decisions are independent of one another. If everyone let themselves be influenced by each other’s guesses, there’s more chance that the guesses will drift towards a misplaced bias. This undermining effect of social influence was demonstrated in 2011 by a team at the Swiss Federal Institute of Technology (ETH) in Zurich. They asked groups of participants to estimate certain quantities in geography or crime, about which none of them could be expected to have perfect knowledge but all could hazard a guess – the length of the Swiss-Italian border, for example, or the annual number of murders in Switzerland. The participants were offered modest financial rewards for good group guesses, to make sure they took the challenge seriously.
The researchers found that, as the amount of information participants were given about each other's guesses increased, the range of their guesses got narrower, and the centre of this range could drift further from the true value. In other words, the groups were tending towards a consensus, to the detriment of accuracy.
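The convergence half of that finding can be illustrated with a toy model (this is not the ETH protocol, just an assumed caricature): everyone repeatedly shifts part-way towards the current group mean. The spread of opinion collapses, yet the collective estimate gets no closer to the truth — consensus buys confidence, not accuracy. The true value here is taken as roughly 734 km for the Swiss-Italian border, and the initial guesses are assumed to run somewhat high.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 734  # approximate length of the Swiss-Italian border, in km

# Independent initial guesses, assumed biased somewhat high for illustration.
guesses = [random.gauss(TRUE_VALUE * 1.3, 250) for _ in range(100)]

initial_spread = statistics.stdev(guesses)
initial_error = abs(statistics.mean(guesses) - TRUE_VALUE)

# Toy social influence: each round, everyone moves 30% of the way
# towards the current group mean.
for _ in range(10):
    m = statistics.mean(guesses)
    guesses = [g + 0.3 * (m - g) for g in guesses]

final_spread = statistics.stdev(guesses)
final_error = abs(statistics.mean(guesses) - TRUE_VALUE)

print(f"spread of guesses: {initial_spread:.0f} -> {final_spread:.0f}")
print(f"collective error:  {initial_error:.0f} -> {final_error:.0f}")
```

Pulling every guess towards the mean leaves the mean itself untouched, so the shared error survives intact while the diversity that might have flagged it disappears — the narrowing range the ETH team observed, without any gain in accuracy.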
This finding challenges a common view in management and politics that it is best to seek consensus in group decision making. What you can end up with instead is herding towards a relatively arbitrary position. Just how arbitrary depends on what kind of pool of opinions you start off with, according to subsequent work by one of the ETH team, Frank Schweitzer, and his colleagues. They say that if the group generally has good initial judgement, social influence can refine rather than degrade their collective decision.
No one should need warning about the dangers of herding among poorly informed decision-makers: copycat behaviour has been widely regarded as one of the major contributing factors to the financial crisis, and indeed to all financial crises of the past. The Swiss team commented that this detrimental herding effect is likely to be even greater for deciding problems for which no objectively correct answer exists, which perhaps explains how democratic countries occasionally elect such astonishingly inept leaders.
There’s another key factor that makes the crowd accurate, or not. It has long been argued that the wisest crowds are the most diverse. That’s a conclusion supported in a 2004 study by Scott Page of the University of Michigan and Lu Hong of Loyola University in Chicago. They showed that, in a theoretical model of group decision-making, a diverse group of problem-solvers made a better collective guess than that produced by the group of best-performing solvers. In other words, diverse minds do better, when their decisions are averaged, than expert minds.
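Hong and Page's result rests on a formal model of problem-solving heuristics; the sketch below is a much simpler statistical caricature of the same intuition, with all numbers assumed for illustration. Experts are individually precise but, having been trained alike, share a systematic bias; a diverse group is individually noisier, but its members' biases point in different directions and cancel when averaged.

```python
import random
import statistics

random.seed(3)
TRUE_VALUE = 100.0
N = 50

# Experts: individually precise (noise sd 2) but trained alike,
# so they share a systematic bias of +8 (assumed for illustration).
experts = [TRUE_VALUE + 8 + random.gauss(0, 2) for _ in range(N)]

# Diverse group: individually noisier (sd 15), but errors are
# independent and zero-mean, so they offset one another on average.
diverse = [TRUE_VALUE + random.gauss(0, 15) for _ in range(N)]

expert_error = abs(statistics.mean(experts) - TRUE_VALUE)
diverse_error = abs(statistics.mean(diverse) - TRUE_VALUE)

print(f"expert group error:  {expert_error:.1f}")
print(f"diverse group error: {diverse_error:.1f}")
```

Averaging cannot remove a bias the whole group shares, however skilled its members; it can only cancel errors that differ from person to person — which is exactly what diversity supplies.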
In fact, here’s a situation where a little knowledge can be a dangerous thing. A study in 2011 by a team led by Joseph Simmons of the Yale School of Management in New Haven, Connecticut found that group predictions of American football results were skewed away from the real outcomes by the fans’ over-confidence, which biased their forecasts towards the supposed ‘favourites’ in each game.
All of these findings suggest that knowing who is in the crowd, and how diverse they are, is vital before you attribute to them any real wisdom.
Could there also be ways to make an existing crowd wiser? Last month, Clintin Davis-Stober of the University of Missouri and his co-workers presented calculations at a conference on Collective Intelligence that provide a few answers.
They first refined the statistical definition of what it means for a crowd to be wise – when, exactly, some aggregate of crowd judgements can be considered better than those of selected individuals. This definition allowed the researchers to develop guidelines for improving the wisdom of a group. Previous work might imply that you should add random individuals whose decisions are unrelated to those of existing group members. That would be good, but it’s better still to add individuals whose views are ‘negatively correlated’ with those of the existing members – as different as possible. In other words, diversity trumps independence.
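Why negative correlation beats mere independence follows from basic statistics: the variance of an average of two equally noisy estimates is proportional to (1 + ρ)/2, where ρ is the correlation between their errors, so ρ < 0 shrinks it below the independent case. The sketch below is a hedged illustration of that formula, with an assumed true value, noise level, and correlation, not the researchers' actual calculations.

```python
import random
import statistics

random.seed(7)
TRUE_VALUE = 100.0
SIGMA = 10.0  # assumed noise level of each individual's judgement

def paired_guesses(rho, n):
    """Generate n pairs of guesses whose errors have correlation rho."""
    pairs = []
    for _ in range(n):
        z1 = random.gauss(0, 1)
        z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
        pairs.append((TRUE_VALUE + SIGMA * z1, TRUE_VALUE + SIGMA * z2))
    return pairs

def mean_sq_error(pairs):
    """Mean squared error of the pair-averaged judgement."""
    return statistics.mean(((a + b) / 2 - TRUE_VALUE) ** 2 for a, b in pairs)

independent = mean_sq_error(paired_guesses(0.0, 20000))    # theory: sigma^2 / 2
opposed = mean_sq_error(paired_guesses(-0.8, 20000))       # theory: sigma^2 * 0.1

print(f"MSE, independent pair:          {independent:.1f}")
print(f"MSE, negatively correlated pair: {opposed:.1f}")
```

When one member tends to err high exactly where the other errs low, averaging cancels most of the error outright – a stronger effect than independence, where the errors merely fail to reinforce each other.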
If you want accuracy, then, add those who might disagree strongly with your group. What do you reckon of the chances that managers and politicians will select such contrarian candidates to join them? All the same, armed with this information I intend to apply for a position in the Cabinet of the British government. They’d be wise not to refuse.