On Friday, 11 March 2011, an earthquake struck off the east coast of Japan's Tohoku region. With a magnitude of 9.0, it was among the five most powerful earthquakes ever recorded, strong enough to shift the Earth's axis by several inches. It triggered a tsunami that killed thousands of people and wrecked the Fukushima Daiichi nuclear power plant.
A quake that large shouldn’t have happened at Tohoku, at least not to the best of Japanese scientists’ knowledge. The hazard maps they had drawn up predicted that big earthquakes would strike in one of three zones to the south of the country – Tokai, Tonankai, and Nankai. Yet no earthquake has hit those regions since 1975, while several have struck supposedly “low-probability” zones, such as Tohoku.
Japan isn’t alone. The incredibly destructive earthquakes that hit Wenchuan, China in 2008 and Christchurch, New Zealand in 2010 and 2011 all happened in areas deemed to be “relatively safe”. These events remind us that earthquake prediction teeters precariously between the overly vague and overly precise. At one extreme, we can calculate the odds that big earthquakes will strike broad geographic areas over years or decades – that’s called forecasting. At the other extreme, early warning systems can relay news of the first tremors to people some distance away, giving them seconds to brace themselves. But the ultimate goal – accurately specifying the time, location and magnitude of a future earthquake – is extremely difficult.
In 1977, Charles Richter – the man who gave his name to a now-defunct scale of earthquake strength – wrote, "Journalists and the general public rush to any suggestion of earthquake prediction like hogs toward a full trough… [Prediction] provides a happy hunting ground for amateurs, cranks, and outright publicity-seeking fakers." Susan Hough from the United States Geological Survey says the 1970s witnessed a heyday of earthquake prediction. “But the pendulum swung [because of too many false alarms],” says Hough, who wrote a book about the practice called Predicting the Unpredictable. “People became very pessimistic, and prediction got a really bad name.”
Indeed, some scientists, such as Robert Geller from the University of Tokyo, think that prediction is outright impossible. In a 1997 paper, starkly titled Earthquakes Cannot Be Predicted, he argues that the factors that influence the birth and growth of earthquakes are so numerous and complex that measuring and analysing them is a fool’s errand. Nothing in the last 15 years has changed his mind. In an email to me, he wrote: “All serious scientists know there are no prospects in the immediate future.”
Earthquakes start when two of the Earth’s tectonic plates – the huge, slowly moving slabs of rock that carry the continents and ocean floor – grind past each other. The plates squash, stretch and catch against one another, storing energy which is then suddenly released, breaking and shaking the rock around them.
Those are the basics; the details are much more complex. Ross Stein from the United States Geological Survey explains the problem by comparing tectonic plates to a brick sitting on a desk, attached to a fishing rod by a rubber band. You can reel it in to mimic the shifting plates, and because the rubber band is elastic, just like the Earth’s crust, the brick doesn’t slide smoothly. Instead, as you turn the reel, the band stretches until, suddenly, the brick zips forward. That’s an earthquake.
If you did this 10 times, says Stein, you would see a huge difference in the number of turns it took to move the brick, or in the distance the brick slid before stopping. “Even when we simplify the Earth down to this ridiculous extreme, we still don’t get regular earthquakes,” he says. The Earth, of course, isn’t simple. The mass, elasticity and friction of the sliding plates vary between different areas, or even different parts of the same fault. All these factors can influence where an earthquake starts (which, Stein says, can be an area as small as your living room), when it starts, how strong it is, and how long it lasts. “We have no business thinking we’ll see regular periodic earthquakes in the crust,” he says.
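Stein's brick-and-rubber-band analogy corresponds to the classic spring-block "stick-slip" model of faulting. A minimal sketch in Python makes his point about irregularity: even when the only complication is a friction threshold that wobbles slightly from one slip to the next, the events refuse to arrive on a regular schedule. All the numbers here (loading rate, friction values) are illustrative assumptions, not measured quantities.

```python
import random

def stick_slip_events(steps=10000, k=0.01, seed=1):
    """Minimal spring-block model of a fault.
    Each step, steady loading (turning the reel) adds force k to the spring.
    The block slips when the force exceeds a friction threshold, which is
    redrawn with a small random wobble after every slip.
    Returns the list of intervals (in steps) between successive slips."""
    rng = random.Random(seed)
    force = 0.0
    threshold = 1.0
    intervals, last_slip = [], 0
    for t in range(steps):
        force += k                      # steady loading, like turning the reel
        if force >= threshold:          # static friction overcome: the block slips
            intervals.append(t - last_slip)
            last_slip = t
            force = 0.0                 # stored stress released
            threshold = 1.0 + rng.uniform(-0.2, 0.2)  # friction varies slightly
    return intervals

intervals = stick_slip_events()
print(min(intervals), max(intervals))  # spacing between events is irregular
```

Even this toy version never settles into a periodic rhythm, which is Stein's point: if a one-parameter desk experiment is irregular, the real crust, with friction, mass and elasticity varying along every fault, has no business producing clockwork earthquakes.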
That hasn’t stopped people from trying to find “anomalies” that reliably precede an earthquake, including animals acting strangely, radon gas seeping from rocks, patterns of precursor earthquakes, and electromagnetic signals from pressurised rocks. None of these have been backed by strong evidence. Studying such “anomalies” may eventually tell us something useful about the physics of earthquakes, but their value as predictive tools is questionable.
“Groping in the dark to find something like weird animal behaviour or emitted gases – that effort is not worth pursuing,” says Stein. “We have 30 to 40 years of negative results to convince us that this isn’t a good investment of resources.” Hough adds that the field is replete with bad science. “People will go, ‘Here’s an earthquake and here’s a blip beforehand’, but it’s not statistically rigorous.” And those blips, even if they do exist, rarely precede a quake with any consistency. “You can always find those apparent patterns by looking back, but you run the game forward and the methods don’t work.”
This is not a problem that we can data-crunch our way out of, as a group of scientists tried to do in the 1980s. For more than a century, earthquakes had regularly rocked a small part of the San Andreas fault, near Parkfield, California. Anticipating another by 1993, a hundred-strong team of geologists seeded the area with seismometers, looking to discover signals that heralded the onset of the next inevitable earthquake. “People said: This is going to be easy,” recalls Stein. “We bet the farm on Parkfield and we put every bloody instrument we had as deep as we could.” When the earthquake finally happened in 2004, the instruments saw nothing. “How is the Parkfield Earthquake Experiment like technology stocks?” joked Stein in 2002. “They both seemed like a great bet at the time.”
The challenge that seismologists face is that there’s five miles of rock between them and what they want to study. “We’ll always be hamstrung by the fact that we’re stuck on the surface, and the surface only goes for the ride,” says Stein. “The action is at depth.” Consider California’s Hayward Fault, which runs parallel to the famous San Andreas. It is over 100 km (60 miles) long, and 10 kilometres (6 miles) deep. To get readings across its entire area would take “hundreds of thousands of drill holes as deep as anything ever drilled for oil and deeper than everything drilled for science,” says Stein. “Maybe then, we could have all the observations we need.”
But while predicting earthquakes may be beyond our grasp, predicting the subsequent aftershocks might be more feasible. Even though the word “aftershock” almost invites you to downplay them, these tremors cannot be ignored. “They’re not a detail,” says Stein. “They can be in some cases as or more dangerous than the main shock.”
Just look what happened in New Zealand. In September 2010, a magnitude-7.1 earthquake shook the ground 40 km (25 miles) away from Christchurch, the country’s second most populous city. The destruction was mild, and no lives were lost. Then, six months later, a magnitude-6.3 aftershock landed 10 km (6 miles) from Christchurch’s centre, killing 185 people and causing 20-30 billion New Zealand dollars of damage (US $17-25 billion).
Aftershocks are indistinguishable from main shocks, but they do have some predictable qualities, including their frequency. Ten days after a main earthquake, the frequency of aftershocks falls by a factor of ten; a hundred days after, it falls by a factor of a hundred. But their potential magnitude stays the same. “They can really turn around and whack you,” says Stein. “They can be very big, and very late.” He and other scientists are trying to understand where aftershocks are most likely to occur, by simulating what happens when faults get distorted by the initial shocks. “That’s a place where we’re making progress. Then, once we’ve had that main shock, we can begin to address what’s more vulnerable and what’s safer.”
Even then, we would have to tread carefully. Six Italian scientists and a former government official are currently on trial for manslaughter after allegedly downplaying the risk of an earthquake that struck L’Aquila, Italy in 2009, killing 309 people. Meanwhile, a lab technician called Giampaolo Giuliani claims to have predicted the quake based on radon emissions, even though he had raised two previous false alarms. The L’Aquila case illustrates how inaccurate predictions can lead to panic and unnecessary evacuations on one hand, or a false sense of security on the other.
And for what? Even if prediction became the exact science it clearly isn’t, we would still need evacuation plans and earthquake-proof buildings – and these measures do not depend on prediction. “If you want to promote resilience and safety, quantify the hazard and expected motions from earthquakes, and build buildings more appropriately,” says Hough. “That’s money better spent.”
Geller concurs. “All of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas,” he wrote in a recent comment piece in the journal Nature. “We should instead tell the public and the government to prepare for the unexpected.”