One spectacular example: the United States Department of Homeland Security was created in the aftermath of the 9/11 terrorist attacks to improve coordination among the intelligence, defence and public safety units designed to prevent or minimise harm from a natural or terrorist disaster.
However, when Hurricane Katrina struck New Orleans four years later, in 2005, the centralised federal decision-making authority vested in the DHS slowed the entire government response, drawing criticism over how residents were left stranded, often without food or water, for days on end. Ultimately, 1,833 people died, New Orleans was largely under water and total property damage was estimated at $108 billion.
A closer look at Hurricane Katrina also reveals a potentially more detrimental, and more fundamental, choice all organisations and people make when deciding how to solve a problem. The slow governmental response to Katrina made things much worse for people in the New Orleans area. But it was the leadership of the DHS that judged this hurricane wouldn't be as lethal as it turned out to be.
This mistake is what statisticians call a “false negative”: deciding something bad won’t happen when in fact it does. The consequences of being wrong this way can be horrific. For example, the undiagnosed but lethal ignition switch defect behind the massive recall General Motors is dealing with in the US is also a false negative. Customers believed the cars were safe, but some drivers found they weren’t. GM has reported 16 deaths associated with the faulty switches.
This problem occurs in medicine all the time. In mammography, a false negative is when your results come back clean, but in fact you have cancerous cells that weren’t picked up by the x-ray. The opposite can be just as troubling. A false positive, for instance, is when you test positive for cancer but actually don’t have it. In the US, the incidence of this mistake is around 15%, the highest in the world, while in Northern Europe it is below 1%. The human and economic cost of treating a disease you don’t have can be devastating.
False positives may be less egregious when the cost of treatment is manageable or at least acceptable. Learning its lesson the hard way, the DHS adopted a “better safe than sorry” strategy when the next hurricane that summer approached the US coastline. Preparations for Hurricane Rita were extensive, including the evacuation of millions of people. When the storm proved less damaging than Katrina, there were some complaints about wasted resources, but in the wake of the disastrous response to Katrina, they were hardly consequential.
False positives happen all the time, and we manage them routinely. When you get checked for weapons at an airport, the detectors are set at minimal tolerance, which is why you have to remove your belt. And GM’s auto recalls create a massive false positive, since very few of the suspect parts could actually cause an accident.
The reason false positives are so common is the huge risk associated with being wrong the other way, via a false negative. You don’t want someone to get through airport screening with a weapon. You don’t want an ignition switch to malfunction in a car you have sold. And you don’t want to underestimate potential damage from a big hurricane, ever.
Managing the risks
False positives and false negatives are both common mistakes that can occur when people and organisations make decisions. Being aware that both are possible is crucial, but so is some discussion about which of the two is less damaging to you. Managing these risks means choosing, in advance, which type of mistake you really don’t want, and by implication, which type of mistake you are more willing to live with.
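The trade-off behind that choice can be sketched in a few lines of code: the stricter you set an alarm threshold, the fewer false negatives you suffer but the more false positives you generate. This is a toy illustration with made-up risk scores and labels, not data from any of the cases above:

```python
def classify(scores, labels, threshold):
    """Count (false positives, false negatives) at a given alarm threshold.

    scores: a risk score for each case (higher = more suspicious)
    labels: True if the case is genuinely dangerous, False otherwise
    Scores at or above the threshold trigger the alarm.
    """
    false_pos = sum(1 for s, bad in zip(scores, labels) if s >= threshold and not bad)
    false_neg = sum(1 for s, bad in zip(scores, labels) if s < threshold and bad)
    return false_pos, false_neg


# Hypothetical cases: risk scores with ground-truth danger labels.
scores = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]
labels = [False, False, True, False, True, True]

# A strict (low) threshold, like an airport metal detector: nearly
# everything triggers the alarm, so nothing dangerous slips through,
# at the cost of false alarms.
print(classify(scores, labels, 0.2))  # → (2, 0): two false alarms, no misses

# A lax (high) threshold: few alarms, but dangerous cases get missed.
print(classify(scores, labels, 0.8))  # → (0, 2): no false alarms, two misses
```

Moving the threshold never eliminates both kinds of error at once; it only shifts the balance, which is why the choice has to be made deliberately.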
If we can organise a decision to minimise false positives or negatives, is it also possible to manipulate them to increase the odds of achieving a goal?
My friend is a cyclist who recently competed in a “century” race (100 miles). He relied on a wristwatch that kept track of the miles left before the finish line to manage his energy in a bid to beat his previous time. When the watch indicated 12 miles left, he really turned it on to make that goal. But just a few miles down the road, as he was going all-out, he noticed the finish line up ahead. At that point the watch said eight miles to go, but it was wrong. The false positive, telling him he had farther to go than he actually did, turned out to be a great motivator. He had pushed himself harder than he intended to throughout the race and turned in his best time ever.
With some thought, I suspect most people can find all sorts of applications of this insight in their work and their lives. For example, it is standard practice to try to convince people they’re on a “burning platform” to motivate change. Great managers understand that if people believe the situation is really dire, they’re more likely to take on the hard work of doing something different.
Or how about adjusting your bathroom scale upward to motivate you to cut down on all those desserts? Of course, if you’re one of those people who has, on occasion, adjusted the scale to register a lower weight than you actually have (the false negative strategy) to trick yourself into believing there’s no need for that 5 kilometre run this weekend, you already know how the game is played. Dessert please!