Andrew Sabisky: What is superforecasting?

Media caption: Dominic Cummings leaves for work after Andrew Sabisky row

The prime minister's chief adviser, Dominic Cummings, has told journalists to "read Philip Tetlock's Superforecasters, instead of political pundits who don't know what they're talking about".

His comments followed the resignation of Downing Street adviser Andrew Sabisky, criticised for comments on pregnancies, eugenics and race.

Mr Sabisky has described himself as a superforecaster.

What is superforecasting?

The idea behind superforecasting is that some people tend to be consistently better at making predictions than others - even better than experts in the relevant field.

These predictions could cover anything from whether a currency will strengthen, to whether one country will invade another, to whether there will be civil unrest in a city.

Image caption: Andrew Sabisky resigned, saying he did not want to prove a "distraction"

How does it work?

American psychologist Philip Tetlock came up with the Good Judgment Project as part of a US government competition to find better ways of predicting.

He looked at thousands of predictions made by experts and found they were no better than if the outcomes had been selected at random - a performance he compared to chimps throwing darts at a board.

Prof Tetlock then asked thousands of people to come up with figures for the chances of a range of things happening, such as a nuclear test by North Korea in the next three months.

A few months later, he selected the most successful of the forecasters - and found, in later exercises, they continued to make better predictions even than those in the intelligence services who had access to secret information.

What is the science behind it?

Superforecasters calculate the probability of something happening and then adjust that as circumstances change.

So, when one of them was looking at the chances of North Korea conducting a nuclear test, the starting point was that the country had, on average, conducted a test every 30 months - suggesting a 10% chance of a test in any given three-month window.

This figure was then doubled, to 20%, because North Korea had been threatening to conduct tests.
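The base-rate arithmetic described above can be sketched in a few lines of code. This is purely illustrative - the function name and the adjustment factor are taken from the North Korea example in this article, not from any real forecasting tool:

```python
# Illustrative sketch of the base-rate reasoning described above.
# The figures come from the article's North Korea example.

def base_rate(avg_months_between_events: float, window_months: float) -> float:
    """Simple base rate: chance of an event in the window, as a ratio
    of the window length to the average gap between past events."""
    return window_months / avg_months_between_events

# North Korea had tested, on average, every 30 months.
p = base_rate(30, 3)       # 3 / 30 = 0.10, i.e. a 10% starting estimate

# Adjusted upwards because North Korea had been threatening a test.
p_adjusted = p * 2         # 0.20, i.e. 20%

print(p, p_adjusted)
```

The key point is that the forecast starts from historical frequency, not intuition, and is then nudged by current evidence.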


Superforecasters are supposed to be particularly good at keeping their personal opinions out of the calculations.

The other important part of the method is that the probabilities estimated by a number of superforecasters are averaged to produce a final figure.
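The averaging step is simple to illustrate. The individual estimates below are made-up numbers, not real forecasts:

```python
# Illustrative sketch: combining several forecasters' probability
# estimates by simple averaging, as described above.
# The individual estimates are hypothetical.

estimates = [0.20, 0.15, 0.25, 0.18]   # each forecaster's probability

final = sum(estimates) / len(estimates)
print(round(final, 3))
```

Averaging tends to cancel out individual forecasters' idiosyncratic errors, which is one reason aggregated forecasts often beat any single forecaster.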

How successful is it?

Let's look at a couple of the big events over the past few years that were not widely predicted - Brexit and the election of US President Donald Trump.

Superforecasters did not accurately predict Brexit, putting the chances of a Leave vote at 23% in June 2016 - the month of the referendum - according to Bloomberg.

Their predicted figure had been higher a few months previously but they had adjusted the likelihood downwards.

However, superforecasters did apparently collectively predict Donald Trump's success in the primaries in 2016 - the first hurdle in the presidential race.

The success of superforecasting in other areas is harder to judge.

But the idea could be useful in areas ranging from finance to charities working out how best to distribute aid.

And CIA analysts have written a paper calling for the US intelligence services to look for the characteristics of superforecasters when recruiting, rather than prioritising applicants' grades.

Prof Tetlock told the BBC he did not think superforecasting should be linked to a particular political point of view. He said most people would want their leaders to be "informed by the most accurate possible estimates of the consequences of the options on the table".