A very slow magnetic doom

So, a piece was recently published at Undark called "The Magnetic Field Is Shifting. The Poles May Flip. This Could Get Bad.". Unsurprisingly, I have thoughts. Somewhat complicated thoughts. Let’s start with the important stuff:

Yes – the dipolar component (the dominant, bar-magnet-like part) of the Earth’s magnetic field has been decreasing in intensity over the couple of hundred years that we have been able to directly measure it. Based on records of the Earth’s magnetic field intensity over previous reversals, preserved in igneous and sedimentary rocks formed at the time, reversals are associated with a substantial weakening of the dipole field.

Records of dipole intensity over the last five reversals, showing a steady decrease over several tens of thousands of years before the actual reversal, which lasts of the order of 10,000 years. Source: Valet et al. 2005.

But no – the recent weakening trend doesn’t mean a reversal is necessarily imminent. It turns out that baking clay to make pottery is a great way of taking accurate snapshots of the Earth’s magnetic field, so thanks to archeology, we have a fairly good idea of the strength of the dipole field over the past few thousand years. When plotted on the geological records of reversals above, we can see that the current dipole field is still 2-4 times stronger than it seems to be during an actual reversal, despite the recent decrease. A closer look at the record for the last 10,000 years shows we’re actually coming down from a fairly significant peak in field strength at around 0 AD.

A figure summarising several different models of dipole strength over the past few thousand years, based on different compilations of paleomagnetic measurements of geological and archeological samples. All show a peak in field strength about 2000 years ago. Source: Korte & Muscheler (2012)

Yes – regardless of present trends, the field will reverse eventually. The rate has varied over geological time, but the recent rate of reversals is somewhere between 2 and 5 every million years; at 780,000 years and counting, the current polarity chron is definitely pretty long by recent standards. It is certainly possible that the current decrease is part of the run-up to the next one.

But no – when it does happen, a reversal will not happen overnight. Look again at the record of past reversals above, which shows a 30,000-50,000 year period of decaying dipole strength (which could be what we are witnessing the early stages of right now) before the dipole reverses and then more quickly recovers its strength. We can estimate the length of the transition period, when the dipole is weak and the higher-order components of the field dominate, from records of cosmogenic isotope production, which will be boosted in a weaker field because the atmosphere is less shielded from incoming high energy solar and cosmic particles. These records suggest a transition period of around 10,000-20,000 years, at least for the last reversal 780,000 years ago. If the field had started reversing when the Pyramids or Stonehenge were being built, we would still be waiting for the polarity switch to be completed. On current trends, even if the dipole continues to steadily weaken, the Empire State Building and the Eiffel Tower would be well into their second millennia of existence before the dipole field truly started to fail. Our many-times-great-grandchildren could easily be wondering about an ‘impending’ reversal in much the same way as we are.

A spike in the levels of the cosmogenic isotope Beryllium-10 in sediment and ice-core records over a 20,000 year period 780,000 years ago seems to record the interval when the Earth’s magnetic field was more disorganised and weaker during the last reversal. Source: Valet & Fournier (2016)

Yes, there are potential problems arising from a weaker and more disorganised field during a reversal. Lower and more changeable ionospheric currents could disrupt electrical grids (as they can today during geomagnetic storms). Orbiting satellites will also face a more hostile radiation environment.

But no – a global cataclysm is not on the cards. There is zero evidence of any species extinctions associated with a magnetic reversal. Beyond the field not being dipolar, the actual behaviour of the field during the transition, particularly the rate of change, is not well constrained by the geological record*, but if we use recent behaviour as a guide, variation of the non-dipolar components of the field appears to mainly happen over centuries, not months and years. Migratory animals that have been shown to at least partially rely on sensing the magnetic field to navigate seem to have coped with past reversals just fine. And if you think about things from an evolutionary perspective, the fact that they have also retained this ability suggests that the changes during the thousands of years that the dipole is weak are not so rapid that successive generations are getting totally lost on their migrations.

What our civilisation would actually face during a magnetic field reversal would not be a quick catastrophe, but something more akin to the effects of sea-level rise: a long-term deterioration in the conditions that we are used to, and that our society and infrastructure are tuned for. As the decades pass, we might start seeing more frequent disruptions of the grid due to ionospheric interference, and not just during intense geomagnetic storms; we might see a drop in the average lifetime of orbiting satellites, and more random losses. Increased surface radiation levels in particular locations could also affect background cancer rates. These long-term trends present challenges, but not insurmountable ones. And again, we are talking about this playing out over centuries, or even thousands of years.

So, back to the article that prompted this. As I said, my feelings are complicated. Factually, there’s not too much wrong with it. The author, Alanna Mitchell, has written a new book on the Earth’s magnetic field and has clearly done her research**. The problem is more subtle. Here are some of the terms used in this piece to describe a reversal:

  • "planetary anarchy"
  • "under attack from within"
  • "a battle…raging at the edge of the core"
  • "a coup"
  • "a revolution"
  • "turbulent and ungovernable"

What do all these terms have in common? They all suggest an abrupt, rapid, violent event. The use of the present tense also strongly suggests an event that is on the verge of happening, if not happening right now. And that is true: from a certain point of view. If you ask a geologist whether magnetic reversals are rapid, they will say yes. If you ask whether a field reversal is imminent, I would say maybe, but you could definitely find scientists who would express more confidence. But these are Deep Time answers, where ‘rapid’ and ‘imminent’ have durations that are considerably more stretched out than their everyday meanings.

If asked to name an abrupt geological event, most people would name things like earthquakes, volcanoes, and catastrophic landslides: processes that cause quick, violent upheavals over the space of a few hours or days. Geologists, reading the history of the Earth from preserved sequences of rocks, don’t quite see things this way. Each layer – of limestone, of sandstone, of volcanic ash, of mudstone – is a page that describes the prevailing conditions on the Earth at the time it formed. By reading through the book, from bottom to top, we can chart how those conditions change over time. But geological narratives are rarely exhaustive: they are more like a fast-paced thriller that you buy at the airport to read on your holidays. A lot of narrative can be squashed into a relatively small thickness of rock: a cliff-face built from sedimentary rocks might recount the passage of millions of years, and thousands of years might be missed in the turn of a page – between the end of one unit being deposited and the start of another.

So when geologists start talking about ‘rapid’ events, what we really mean is that in the compressed and fragmented books of Earth history, we can observe a change with a clearly identifiable ‘before’ that differs significantly from ‘after’, but the details of the transition are squashed into a single thin horizon, or even lost in the transition between two different rock units. When a centimetre or two of rock can span thousands of years, you can start to see that for geologists, an event that lasts millennia can be rapid: an event that is going to happen in a few thousand years can be imminent. From a geological perspective, the last 35 years of eruptive activity at Kilauea is a tiny blip in Earth history. An earthquake is part of a longer cycle that involves centuries – or millennia – of strain accumulation across a fault, which means there is little difference in terms of the record left behind whether it occurs tomorrow or a century from now. And the resolution of most geological records is such that changes that take several thousand years – longer than the length of recorded human history – are often more like punctuation marks than complete sentences.

Proportionally, ten thousand years in the multibillion-year lifetime of the Earth is the equivalent of an hour or two in the average human life. So there is a certain narrative sense to this stretching out of the timespans implied by commonplace words; it helps us translate the vast tracts of geological time into a frame of reference we can more easily grasp. But the downside of this linguistic appropriation is that while geologists recognise and understand code switching between the Deep Time meanings and the everyday meanings of ‘abrupt’ and ‘rapid’, the rest of the world does not. Field reversals are abrupt? Then we’re in disaster movie territory when it happens, aren’t we***?

Or, as Alanna Mitchell puts it, in the section of her piece which I do think treads a bit too far into scaremongering territory:

"NO LIGHTS. No computers. No cellphones. Even flushing a toilet or filling a car’s gas tank would be impossible. And that’s just for starters."

This mismatch is why I periodically find myself trying to tamp down magnetic field collapse hysteria – and will surely end up doing so again in the future. And yet: there is one further layer of complication, which at the very least makes me sympathetic to what ‘The Poles May Flip’ was (I suspect) aiming to do. If ‘rapid’ changes can last thousands of years, then threats can also slowly develop over similar timeframes. As our response to the threat of climate change rather depressingly illustrates, we are very bad at prioritising problems that manifest over timescales of decades, let alone centuries. The earlier we act, the less we have to do to stave off danger. But the harder it is to convince ourselves to act, because we take on all the pain and our distant descendants reap all of the benefit.

Alanna Mitchell is right that the long-term deterioration of the field in the run-up to a reversal – should that be what ends up happening over the next century or five – is eventually going to be a problem. Coping with these changes may not technically present an insurmountable challenge. But spending the money required to build infrastructure that is not just fit for the conditions it faces today, but is also resilient to the threats that we know are coming in its working lifetime, turns out to be a tough ask. To our ape brains, urgent threats that accumulate over decades and centuries seem like a contradiction in terms; they are so far outside the way we think about time that we lack the language even to describe them properly.

So how do we present such problems in a way that people actually perceive them as requiring action? We have been discussing an example of the most obvious tactic: using words like ‘rapid’ and ‘abrupt’, which are technically correct in the Deep Time sense, and letting people’s more common understanding of these words add a sense of urgency. It surely works to grab attention, and sometimes to inspire concern. But I question whether it is effective at actually building a consensus for long-term action. Eventually, someone has to explain that we are not talking about the Day After Tomorrow, but the century or millennium after next. Once this happens, people will probably stop worrying again, and may also feel resentful that they were made to worry in the first place.

But what do we do instead? Sadly, I don’t have the answers. If our species is to face and survive the long-term threats presented by our geologically active planet, and by our alteration of it, we need to find that new language: one that expresses the fierce urgency of acting now to avoid trouble centuries hence.

But for today: your compass will continue to point north. Your children’s compasses will continue to point north. And their children’s too.


*if you want the gory, highly technical details of what we do – and don’t – know about reversals based on the paleomagnetic record, this excellent review is a good place to start.

**from a sneak peek, she even visited the site in France where the first reversed polarity paleomagnetic samples were collected, which makes me more than a little jealous.

***I love The Core. The fact that I know how gloriously wrong it is is probably what elevates it above your standard terrible disaster movie.

Categories: deep time, geology, palaeomagic, public science, society

Simulating radioactive decay

3.8 billion years! 4 billion years! 4.4 billion years! 4.57 billion years! When discussing the age of the Earth in introductory geology, I think it is important for students to know at least the basic principles of where these ages come from. That means explaining radiometric dating, which is consistently a challenging concept for students to get their heads around. This year, my attempts to come up with more useful ways to illustrate radioactive decay led me to code some simple visualisations in Python. I briefly describe them below, along with GIFs of the simulations themselves. Click on the images for larger versions that don’t loop. Please feel free to use them if you find them useful; suggested improvements are also encouraged!

1: Radioactive decay as a random, and yet predictable, process

The basic simulation is a grid of 1000 purple parent atoms. Over each half life step, each parent has a 50-50 chance of decaying into a grey daughter atom. With 1000 atoms, you can nicely see how the proportions of parent to daughter change dramatically as the rock ages, and also see how the process completes after about ten half lives. Repeating the simulation multiple times helps to emphasise that:

– It is a random process: which atoms decay first, and which ones survive the longest, changes between each simulation.
– The overall behaviour of this system is still predictable: the proportions of parent and daughter after each half-life step stay roughly the same in each simulation. 1000 atoms is a bit on the small side for perfect statistical behaviour, but by adding the traces of previous runs you can see that the average behaviour converges pretty well.
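I haven’t reproduced the original plotting code here, but the core of the first simulation can be sketched in a few lines of Python (the function and variable names are my own, not taken from the original scripts):

```python
import random

def simulate_decay(n_atoms=1000, n_steps=10, p_decay=0.5, seed=None):
    """Count surviving parent atoms after each half-life step.

    Each parent decays independently with probability p_decay (0.5 for
    a step of exactly one half-life), so the parent count roughly
    halves every step."""
    rng = random.Random(seed)
    parents = n_atoms
    counts = [parents]
    for _ in range(n_steps):
        # each surviving parent has a 50-50 chance of making it
        # through this half-life step
        parents = sum(1 for _ in range(parents) if rng.random() >= p_decay)
        counts.append(parents)
    return counts

counts = simulate_decay(seed=42)
```

Repeated runs with different seeds give different individual histories but very similar proportions at each step, which is exactly the random-yet-predictable behaviour the animation is meant to show; after about ten half-lives, essentially no parents remain.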

In this simulation, 1000 purple parent atoms are decaying into grey daughter atoms. Multiple runs show that this is a random, but predictable, process.

In this simulation, 1000 purple parent atoms decay into grey daughter atoms over multiple half-lives. The simulation is repeated 9 times.

2: Different half-lives mean very different rates of decay

In this simulation, I’ve added a second grid of a green radioactive isotope, with a half-life that is four times that of the purple atoms. This is an attempt to translate the more abstract ‘half-life’ concept into a physical length of time, and to emphasise that different half-lives mean different rates of change from parent to daughter. In this case, the purple atoms, which have the shorter half-life, quickly transform into their daughter atoms; the green atoms, with a considerably longer half-life, transform much more slowly. This means that the green atoms persist (and can be used to date the rock by measuring their proportion relative to their daughter) long after the purple has all vanished (making it useless for dating).

In this simulation, two isotopes with different half-lives, trapped in minerals in the same rock, are decaying over the same interval of time. The purple atoms (top) have a short half-life, so quickly transform into their daughter atoms. The green atoms (bottom) have a half-life that is four times longer, so transform much more slowly.

Simulation of two isotopes with different half lives decaying over the same interval of time. The purple atoms have a half-life 4 times shorter than the green atoms.
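The two-isotope version can be sketched in the same spirit (again, the names and the `ratio` parameter are my own framing, not the original code). If one time step equals one half-life of the fast isotope, the slow isotope’s survival probability per step is (1/2) raised to 1/ratio:

```python
import random

def decay_two_isotopes(n=1000, n_steps=12, ratio=4, seed=1):
    """Track two isotope populations over the same time steps.

    One step equals one half-life of the fast isotope; the slow
    isotope's half-life is `ratio` times longer, so its per-step
    survival probability is 0.5 ** (1 / ratio)."""
    rng = random.Random(seed)
    survive_fast = 0.5
    survive_slow = 0.5 ** (1 / ratio)
    fast, slow = n, n
    history = [(fast, slow)]
    for _ in range(n_steps):
        fast = sum(1 for _ in range(fast) if rng.random() < survive_fast)
        slow = sum(1 for _ in range(slow) if rng.random() < survive_slow)
        history.append((fast, slow))
    return history

hist = decay_two_isotopes()
```

After four steps – one half-life of the slow isotope – roughly half the green population survives, while the purple population is already down to a few percent, which is the contrast the animation illustrates.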

Categories: deep time, geology, science education, teaching

All of this has happened before, and all of this will happen again: an introduction to How the Earth Works

A few months ago Katie Hinde posted the story she told her anthropology students in the first class of the semester. Eschewing a run-through of the syllabus, she instead illustrated the overarching themes of her course with a compelling tale of species-hopping disease outbreaks and the cultural behaviours and conflicts that shaped both the course of the outbreak and its aftermath. It’s pretty awesome. You should read it.

Like Katie, I’ve been telling a story at the beginning of the introductory geology course I teach, called How the Earth Works, for a couple of years now. I’m not going to claim it’s as good a story, but I like to think it gives a flavour of the kinds of stories you can tell about the Earth, if you know how to look: stories of how the world slowly remakes itself over hundreds of millions of years, of how the very high was once the very low, and will be again. But I’ve never written it down, which probably means that I don’t actually tell it as well as I could. So to kick off this semester, I thought I’d tell it properly.

It starts, unsurprisingly, with a rock. Rocks are witnesses to the ‘crime’ of Earth history. Geologists are the detectives, teasing clues out of the rocks to work out what happened, when, and why. Today’s person of lithological interest is from a very special place:

Mount Everest viewed from the southeast.

About 20 feet below the summit of Mount Everest, almost 9 kilometers above sea-level, you find an outcrop of rock that looks like this.*

A rock from just below the summit of Mount Everest

The highest rock in the world. Photo by Callan Bentley

So what secrets does rock taken from the roof of the world hold? Two basic features stand out:

  • It is layered. The dark and light grey bands have been formed by mineral grains slowly settling to the bottom of an ocean, sea, or lake over thousands of years. This is a sedimentary rock.
The rock has dark grey and light grey banding


  • It is fractured. There are cracks, and the originally continuous sedimentary layers are offset across them. This rock has been deformed.
Numerous fractures can be seen


These simple observations already tell us a lot, but with a bit more detail, and a little specialist knowledge, we can start to tell a much more vivid story. The next question to ask is: what are the grains that this sedimentary rock is built from like? Firstly, they are extremely small – it’s actually pretty much impossible to pick out individual grains in this picture, because they are only fractions of a millimetre across. This, in itself, tells us something important: such small grains, which are easily wafted away by even the weakest current, only settle out of the water column somewhere very sheltered, or deep enough that the water is undisturbed by even the strongest storms on the surface.

Different minerals have different properties, like colour, and hardness; some minerals are very common in some kinds of rocks, and not in others. Clues such as these point to the teeny tiny, dull grey mineral grains in this rock being made of calcium carbonate (most calcium carbonate is in the form of a mineral called calcite; in this rock it is in a slightly different form with some magnesium mixed in, called dolomite). Most calcium carbonate in the geological record is produced by living organisms, who use it to build protective shells. But the very fine grained crystals we see here are not shell fragments; they are individual grains that precipitated directly out of the water before settling onto the sea bed. This gives us yet more useful information: the right conditions for the calcium and bicarbonate ions dissolved in seawater to spontaneously crystallise into new mineral grains only occur in a few places. Here’s one such place:

The Persian Gulf, viewed from space. On the southern edge, the dark blue water is turned milky by clouds of newly crystallised calcium carbonate suspended in the water column.

Satellite image of the Persian Gulf, with ‘whitings’ (small bright white patches) produced by carbonate formation in the shallow waters off the coast of the United Arab Emirates. The larger-scale cloudiness is probably the result of a phytoplankton bloom. Via the USGS

The Persian Gulf is a warm, shallow sea. In the summer, biological activity (photosynthesis removes CO2 and makes ocean water less acidic) and concentration of dissolved ions by evaporation make it possible to spontaneously precipitate calcium carbonate – the small bright white patches in the satellite image above are basically clouds of small carbonate crystals, suspended in the water, that will eventually settle and accumulate to form limestone on the sea bed.

Here’s another place where something similar is happening: the Bahamas.

Satellite view of a cay in the Northern Bahamas, with small patches of whitings colouring the seas around it.

Small patches of whitings in the Northern Bahamas.

In some places, changes in sea-level have exposed the resulting rock above sea-level. Look a little familiar?

Off-white and grey layers of thinly bedded carbonates on a Bahamas beach

Layered carbonates on Warderick Wells, Exuma Cays, Bahamas.
Photo by Zach Clemence

Note the conditions these two places have in common: calm, warm, shallow water. One of the key geological ideas that we will explore in this course is the principle of uniformitarianism: the notion that we can understand past rocks in terms of the processes that shape the modern Earth. So if we see a rock that looks like something that forms today in warm, shallow tropical seas, we expect that that rock also formed in warm, shallow tropical seas. Let’s just remind ourselves where we actually found it.

Mount Everest viewed from the southeast.

*record scratch* *freeze frame* Yup, that’s me. You’re probably wondering how I ended up in this situation.

Clearly, something rather dramatic has happened to this rock since it was formed – something that has lifted it almost 9 kilometres into the air. And the forces that conspired to do this have left their mark on our rock, in the form of the fractures we’ve already noted.

Nowadays, most people know how the Himalayas formed: from the collision of two continents, as a result of plate tectonics. In the world’s biggest and slowest car crash, India is moving north into the space occupied by Asia, and the crust in the collision zone is crumpling up, creating the world’s highest mountains.

Illustration of India's northward drift from south of the equator 70 million years ago to its present position colliding with Asia.

Reconstructed motion of India in the last 70 million years. From the USGS.

Uniformitarianism allows us to unravel the what of Earth history; plate tectonics is what allows us to understand the why. Why are rocks that formed at the bottom of the ocean now at the roof of the world? Because the Earth’s surface is split up into a constantly morphing jigsaw puzzle. As the pieces – the rigid plates – jostle and slide against each other, you get earthquakes, volcanoes, and a dramatic reshaping of the Earth’s surface. Where two plates divide, you get new oceans; where two plates collide, you get mountain belts.

We’ll tell the story of how we know this in a couple of weeks. It was a hard-won insight, and a recent one, too: the geologists who taught me – and despite the grey hairs, I’m not that old – actually lived through the discovery. Many of them have now also lived to see us develop the ability to watch plate tectonics happening, almost in real time. The GPS in your phone allows you to find the nearest coffee shop, or hail an Uber; but attach a GPS unit to solid rock, and leave it for a few years, and you can observe some bits of the Earth’s surface moving steadily relative to other bits, at rates of a few centimetres a year. The map below shows how India is still pushing into Asia at about 4 centimetres a year. That may not seem like much, but at that rate India could have travelled about 2,500 kilometres in the time since the dinosaurs went extinct (and we believe it was moving several times faster than that prior to the collision).

Map of northern India and SW Asia, dotted with arrows showing the mostly north and east motion of the crust in the Himalayan collision zone relative to the interior of Eurasia.

The arrows show the direction and speed of motion of GPS stations relative to the interior of Eurasia. India is moving to the northeast at about 4 centimetres a year. The arrows reduce in size as this motion is accommodated by faulting, and change direction where crust is being shoved out of the way rather than getting crumpled up. From Gan et al., 2007
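The back-of-envelope arithmetic is easy to check (the 66-million-year figure for the end-Cretaceous extinction is my number; the text just says ‘since the dinosaurs went extinct’):

```python
rate_cm_per_yr = 4           # approximate present-day India-Asia convergence
years = 66_000_000           # approx. time since the end-Cretaceous extinction
distance_km = rate_cm_per_yr * years / (100 * 1000)  # convert cm to km
# → 2640 km, consistent with the ~2,500 km quoted above
```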

So, from this one rock, we can tell the story of how our planet changes; of how lands that were once at the bottom of a tropical ocean now lord it over the rest of the world’s topography. It’s a pretty good story. You may even have heard a similar one before. But what you may not have heard is that the story doesn’t end there. Because mountains don’t stay still. We’ve just seen that the collision that created the Himalayas is still going. India’s continued motion is pushing the Himalayas, including Mount Everest, ever higher, by around half a centimetre to a centimetre a year. However, even if the land beneath is going up that fast, the summit of Mount Everest is not. Another important Earth process is hard at work trying to grind the mountains down again.

When I spent time in New Zealand doing fieldwork for my PhD, I learnt a phrase: ‘Tall Poppy Syndrome’. It describes how the act of standing out from the crowd focusses the crowd’s attention on you, and often triggers the desire to cut you down to size. In a similar way, elevated rocks expose themselves to the attentions of the water cycle. Mountains create weather. Storms, freezing water, and flowing ice physically and chemically attack the rocks exposed on the peaks – including our limestone – and gradually weaken, fracture, and break them apart. Fragments large and small fall downhill onto the glaciers that fill the valleys. These icy conveyor belts, darkened by their load of debris, flow downhill, out of the Himalayas.

Satellite view of Mount Everest and the surrounding peaks. The summits are covered with bright white snow. The icy glaciers in the valleys between the peaks are a dirty grey colour, covered by debris weathered from the slopes above them.

Around Mount Everest, the blinding white of the snow on the peaks contrasts strongly with the dirty appearance of the glaciers in the surrounding valleys – glaciers covered with rocky debris produced by intense weathering and erosion. Image from NASA’s Earth Observatory

Eventually the glaciers melt, but the water continues to flow downhill, fast enough to carry all but the very largest boulders with it. And water continues to chemically attack the rocks, breaking them down into their individual elements and carrying them downstream as dissolved ions. Because rainwater is slightly acidic, carbonate rocks are particularly prone to chemical disassembly: the rivers flowing out of the Himalayas are loaded with dissolved calcium and bicarbonate ions – the building blocks of future carbonate minerals.

Satellite image of the Ganges and Brahmaputra rivers, draining south from the Himalayas into the Bay of Bengal.

Source to sink: the Himalayas in the north, the Ganges Delta to the South. Source: NASA’s Earth Observatory

And thus it is that the rocks that plate tectonics raised up are cut back down to size, and returned whence they came, by water and gravity. When rivers reach the coast, the water slows and drops its load of sand and mud. New land – and eventually, new sedimentary rock – is built. The Bengal fan is a 16-kilometre-thick pile of eroded debris, carried out of the Himalayas by the Ganges and Brahmaputra rivers. But the water, and its dissolved ionic passengers, does not stop. Wind-driven currents move it onwards, until the chemical wreckage of the Himalayas is spread throughout every ocean basin, and in the waters of every shallow sea. Places such as the Bahamas, or the Persian Gulf.

You can’t point to an individual particle in those clouds of new mineral grains and say, ‘that one contains calcium from Mount Everest!’, but some of them do. In a cycle that has spanned a whole planet and hundreds of millions of years, elements have moved from water in an ancient ocean, to rock at the bottom of that ocean, to rock at the highest point on the planet, back to the modern oceans, and then back to rock on the sea bed again. For a while, at least. Because Arabia is on the move. Iran is a hotbed of seismic activity as earthquakes accommodate plate convergence.

Map of Magnitude 5 earthquakes in the Persian Gulf between 1900 and 2017.

The concentrated band of seismicity in the Zagros Mountains on the north coast of the Persian Gulf, which continues along the Iran-Iraq border, is a zone of convergence generated by the north-east motion of the Arabian peninsula relative to Asia.

The grand cycles that make up the story of Earth history – cycles of rock, of water, of energy – will continue. The shallow sea now between Arabia and Iran will be thrown up, and crumpled up, and in a future mountain range, an intrepid geologist – maybe human-ish, maybe cockroach – will find a layered, fine-grained, deformed carbonate rock. Hard evidence that the Persian Gulf, long closed up, once existed – until its components are once more returned to a far-future ocean. It’s enough to give you (Cylon) religion.

“All of this has happened before, and all of this will happen again.”

Except, perhaps, for one thing. It is undeniably true that our understanding of how the Earth has operated up to this point can help us understand what the future has in store. But there is also a new geological force at work, one the planet has not seen before: us. Our prodding of the planet may well push it into places it would not have gone without us. If anything, this makes understanding what makes this planet of ours tick even more important. We have found the accelerator; it would be nice to work out where the mirrors and the brakes are as well.

*I haven’t actually touched this rock myself, unfortunately: we have Callan Bentley to thank for the picture.

Categories: academic life, basics, deep time, geology, geomorphology, ice and glaciers, outcrops, past worlds, rocks & minerals, science education, tectonics

Earthquake warning systems are hard, but not having one is worse.

The premise of earthquake early warning systems is simple. An earthquake produces several different kinds of seismic waves that race away from the rupture point. Because they are different kinds of vibrations, they travel at different speeds; and the farther they travel, the more the speedy compressional P-waves pull away from the transverse S-waves, and the more the surface waves lag even further behind.

Cartoon showing a race between P, S and surface waves.

In the race between different seismic waves, fleet-footed P-waves are heralds for their slower and more earth-shaking brethren.

Fortunately for us, the speediest waves are also the weaker, less damaging ones. The P-waves shake us up a little when they arrive, but they also give us a heads up that more damaging shaking is on its way. This warning is at most a few tens of seconds, but with the right infrastructure in place this is enough to shut down vital machinery (trains, elevators, nuclear power stations…) and prepare people for incoming shaking. If detected soon enough, the warning can also be sent ahead of the P-waves at the speed of light, giving even more advance warning ahead of the expanding front of seismic energy.
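The arithmetic behind that warning window is simple enough to sketch. This toy calculation assumes typical crustal wave speeds of roughly 6 km/s for P-waves and 3.5 km/s for S-waves; these are illustrative round numbers, not values from any particular earthquake:

```python
# Rough estimate of the earthquake early-warning window: the gap between
# P-wave and S-wave arrivals at a given distance from the rupture.
# Wave speeds below are illustrative crustal averages, not measured values.

VP_KM_S = 6.0   # compressional (P) wave speed, km/s
VS_KM_S = 3.5   # shear (S) wave speed, km/s

def warning_time_s(distance_km, detection_delay_s=0.0):
    """Seconds between a warning issued on P-wave detection and the
    arrival of the more damaging S-waves, minus any processing delay."""
    t_p = distance_km / VP_KM_S   # P-wave travel time
    t_s = distance_km / VS_KM_S   # S-wave travel time
    return (t_s - t_p) - detection_delay_s

# A distant subduction-zone rupture several hundred km away...
print(round(warning_time_s(400), 1))   # -> 47.6 seconds
# ...versus a rupture only ~100 km from the city:
print(round(warning_time_s(100), 1))   # -> 11.9 seconds
```

At several hundred kilometres the gap between arrivals opens up to tens of seconds; at 100 km it shrinks to barely ten, before any detection and processing delay is even subtracted.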

Of course, it is much more challenging to put this simple theory into practice. The small window of opportunity for a timely warning can quickly close if the system is not responsive enough. On the other hand, the degree of automation required to gain that responsiveness can lead to a system that is more easily fooled by complex seismic events. Two recent news stories about two of the countries that actually have working earthquake early warning systems highlight challenges from either end of this balancing act.

Mexico: sneak attack from below

Mexico’s earthquake warning system was put into place after a 1985 magnitude 8 rupture on the subduction thrust off the west coast killed thousands in Mexico City. That is the system’s focus: it was built to detect large ruptures on the subduction zone, and warn the residents of Mexico City, who live on top of a massive seismic wave amplifier.

The system worked as designed for the biggest earthquake of 2017, a M8.2 plate bending event. But it struggled to respond quickly enough in the much closer M7.1 a few days later – this NPR story starts with an account of the sirens going off only after the strong shaking started. This earthquake, whilst weaker than the one 12 days earlier, was much closer to Mexico City, resulting in strong shaking that collapsed buildings and killed several hundred people. And that proximity was a problem for the early warning system: with only around one hundred kilometres to travel, rather than several hundred, the P-waves could only pull a little bit ahead of the S-waves and surface waves, leaving barely any time for a warning to get out. The NPR story linked above indicates that changes are already being made to make the warning system more responsive to these kinds of events.

Japan: false positives attack

Last Friday, Japan’s warning system was triggered when it detected P-wave arrivals from what it estimated as a magnitude 6.4 earthquake off the coast of north-eastern Japan. No such event had occurred: instead the Japanese Meteorological Agency, who operate the system, reported that the false warning was the result of the early warning system misreading two smaller earthquakes, a M4.4 on the east coast and a M3.9 that occurred on the west coast at the same time, as one larger event.

I was actually interested enough to do a little impromptu data analysis to see if I could work out why the system got fooled. The seismogram for this time from a station in central Japan is a little strange, with very little amplitude variation between the body waves and the surface waves, and earlier P-wave arrivals than expected (a comparison with an M4.7 a little later in the day, in roughly the same location, makes this clear). My speculative interpretation at the time was that the P-waves from the E coast quake reached nearby stations at the same time as surface waves from the smaller, earlier W coast quake. This does seem to have boosted the apparent P-wave magnitude, but by further comparison with the M4.7 seismogram, the boost was clearly not enough to make the signal look like a M6.4. Perhaps it is also a matter of duration: larger ruptures take longer, because a bigger section of the fault is progressively unzipping. If the system interpreted the whole sequence as an extended package of P waves, that may have been sufficient for the system to mistakenly trigger.
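To make that "extended package of P waves" idea concrete, here is a toy simulation; the signals are synthetic noise bursts, not real seismograms, and the timings and amplitudes are only loosely inspired by the events above. The point is just that two overlapping small events can produce a single record that shakes for much longer than either event alone:

```python
import numpy as np

# Toy illustration of how two small, overlapping earthquakes can
# masquerade as one longer event: the summed record keeps shaking for
# much longer than either source alone. Entirely synthetic data.

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 6000)          # 60 s of record at ~100 Hz

def wavetrain(onset_s, duration_s, amplitude):
    """A crude earthquake-like burst: random noise modulated by an
    exponentially decaying envelope that switches on at onset_s."""
    env = np.where(t >= onset_s,
                   amplitude * np.exp(-(t - onset_s) / duration_s), 0.0)
    return env * rng.standard_normal(t.size)

quake_a = wavetrain(onset_s=5.0,  duration_s=8.0, amplitude=1.0)   # first, smaller event
quake_b = wavetrain(onset_s=20.0, duration_s=8.0, amplitude=1.5)   # second, larger event
combined = quake_a + quake_b          # what a single station would record

def shaking_duration(signal, threshold=0.1):
    """Time span over which the signal exceeds a noise threshold."""
    above = np.flatnonzero(np.abs(signal) > threshold)
    return t[above[-1]] - t[above[0]]

# The combined record shakes for far longer than the first event alone --
# the kind of extended wave package that could inflate a magnitude estimate.
print(shaking_duration(quake_a) < shaking_duration(combined))  # -> True
```

A duration-sensitive magnitude estimator fed this combined record would see one long rupture rather than two short ones, which is consistent with (though certainly not proof of) the misreading described above.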

Seismograms from Japan.

Blue: seismogram for the event that triggered Japan’s earthquake early warning system on January 5th, from this station in central Japan. It is probably the hybrid of two events, and looks weird compared to a more normal earthquake (red). Data from IRIS, plotted using Obspy.

Either way, this is a tricky scenario for an automated system to handle, and therein lies the challenge. To save the most lives, people have to respond quickly when an alarm sounds. If you have a computer that cries wolf – a system so sensitive that it is prone to triggering in the absence of a real threat (a false positive) – then people might stop paying attention to it. On the other hand, you don’t want to risk the system failing to trigger when there is a threat (a false negative). This isn’t the first time that Japan’s system has given a false warning – there was also one in August 2016 – but occurrences are hopefully rare enough that the system is still trusted. Even if the alert sound isn’t Godzilla roaring.

Canada (and the US): not quite there yet

The problems described above are largely good problems to have, because you actually have a working earthquake warning system in place to struggle with and improve – a system that may not be perfect, but does save lives. On the west coast of North America, despite the looming threat of the San Andreas Fault and especially the Cascadia subduction zone, a functional warning system is still some way from implementation. This article updates the progress on the Canadian side of the border, where ocean bottom sensors and GPS data are being tied into the network to get more timely and accurate detections. I was all ready to use it as a cudgel to whack the US government over the head with for continuing not to properly fund the ShakeAlert system, when I read more closely and realised that the Canadian system is in exactly the same position. They have an at least mostly working prototype, with sensors, and computers dedicated to processing the output of those sensors to generate alerts. But it is the next step – building the infrastructure to get timely warnings out to those in harm’s way – that is the challenging one. Or perhaps more accurately, the challenging step in the US is securing the funding to do so. It’s not cheap ($40 million to set up and $16 million a year to run, the USGS estimates), but it’s a drop in the federal budget to protect 50 million people on the west coast. The lack of urgency is frustrating – perhaps the Canadians will be more sensible.

Categories: earthquakes, geohazards, geophysics, links, society

What does it mean to read the literature, really? (Anne’s 2017 #365papers in review)


For the 3rd year in a row, I have meticulously tracked each and every paper, proposal, manuscript, etc. that I read for professional reasons. I found the twitter hashtag #365papers, begun by Jacquelyn Gill in early 2015, an appealing way to get a sense of what types of things I was reading, and how many. In 2015, I was particularly curious how having an infant might affect my reading habits. I carried on logging my papers in 2016, when I was teaching a full load, including 2 graduate courses. It turns out that my teaching load does have a big impact on my reading, because I read (and discuss) so much primary literature in class. Based on how helpful I’ve found the process of being analytical about my reading, I continued to log my papers for 2017, as just a professional version of life logging or the quantified self idea. This year, I had a lighter teaching load in the spring (but a high administrative/service load) and was on research leave in the fall. What difference will I see?

Before we get into the data, a brief pause for a key definition and some discussion of that: What does it mean to read a paper? Here’s what I wrote in 2016:

I only counted papers that I read fully through the results and discussion sections, so there are quite a few papers that I read large chunks of but didn’t make the list because I didn’t finish.

What I’m logging as reading is not opening up a PDF to confirm something that I already think I know or check a key statistic. I’m not reading the abstract and I’m not hunting for a citation that I can plop at the end of a sentence for something I’m writing. I’m deeply reading the paper (introduction, methods, results, and discussion), looking at all of the figures and probably most of the tables. For most papers I count as read, I have either highlighted, made marginal notes, or written a brief synopsis after I’ve finished with them. I know that I’ve spent time with hundreds of other papers this year that never made my spreadsheet for the #365papers project, because in my mind I haven’t truly read them top-to-bottom.

Last week, Caitlin MacKenzie wrote a beautiful post about her own reading for this year and she reflects on this idea of deep reading. (Read it!) She talks about how we train ourselves to perfect the art of skimming a paper and how we flick-bounce between things we read, and then she mentions the number of papers that the average science faculty member allegedly reads per year: 468. Let me tell you in advance, I did not deeply read 468 papers in 2017, or 2016, or 2015, or probably any year since 2007 when I became a faculty member. And honestly, I don’t believe that the average science faculty member did either… or at least did not deeply read that number of papers per year. (However, I’ve added the two papers that quantified these statistics to my “to read” pile for 2018 and I’m looking forward to seeing how they defined reading.)

With that important point made, I’ll refer you to my epic 2016 blog post for a detailed set of methods. The only methodological difference this year is that I did count student thesis proposals, because I was deeply reading them and decided that they deserved to be counted.


In 2017, I deeply read 105 items, which falls neatly between 2016 (132) and 2015 (78). It does support my conclusion from last year that teaching grad classes enhanced my reading, but it also makes me wonder what exactly an average year looks like. If I ever have what I think is an average year, I’ll let you know.

Of the items I read, 70 were published journal articles or GSA publications and 12 were manuscripts that I either reviewed or for which I served as associate editor. (I have been an associate editor at Water Resources Research since May.) I know that there are both published articles and manuscripts under review that I read deeply more than once in 2017, but I only counted them one time each. I also reviewed 16 grant proposals, 3 unpublished dissertation chapters, and 4 MS thesis proposals. As in previous years, I did not count a zillion blog posts and news articles (even though I learned a ton from them) and an ungodly number of student thesis drafts and other writing assignments. Relative to 2016, it’s clear that my reading took a hit in the published journal articles arena, as I reviewed slightly more papers and proposals than last year.

Figure 1. Cumulative distribution function of my reading in 2017.

The rhythm of my year comes into stark relief when I examine my patterns of reading over time (Figure 1). I read, slowly but steadily, through spring semester when my service load was heavy, with a push around the time I served on a grant review panel. I also read steadily through most of fall semester’s research leave, and at a faster rate, which makes sense since my whole job was to read and write and think.

It’s the summer though where I see the most interesting phenomena. My reading clicks along steadily through my summer travels, helped by attending a conference and some train and plane journeys. But then there’s a big 20 day gap in July (centered on day 200) that I really can’t explain. And, oddly, there’s a similar gap in my 2016 data. I’m fairly certain that I didn’t just fail to record my reading during that period, and I sort of think it wasn’t just reading, because I remember that in early August I told myself that things had to change if I wanted to make my fall semester a success. The good news is that I did get back on track, picked up the reading pace, and read 50% of my items for the year on or after August 3rd. (I also got 3 papers submitted between August and December.) Heading into 2018, I’ll know to watch out for the July doldrums and hopefully can avert a similar slump.

For 2016, I made a graph that showed why I read the things I read and I found that my graduate class teaching accounted for 48% of my non-review reading for the year. In 2017, without those graduate classes, teaching-related reading was only 5% of my total. I also broke out graduate students (dissertation chapters, proposals) as their own category, and that took up 12% of my reading space. And of course, some reading should really double-count, e.g., papers we discussed in our lab meeting were categorized as research, but they certainly served a teaching purpose as well. A higher percentage of my reading (11%) revolved around public engagement in 2017, and I did less of the generally-keeping-up-with-my-fields sort of reading (18%) than in 2016. That left the majority of my reading (54%) categorized as directly related to ongoing research projects. With lots of research irons in the fire and 3 conferences (+reviewing) as vehicles for generally keeping up with my fields, I think this makes a lot of sense for 2017.

Another way of thinking about whether I’m keeping up with the literature in my fields is to look at when the papers I’m reading were published. As in previous years, it looks like I’m mostly reading papers published in the last few years (Figure 2). 50% of the papers I read were published in 2017, and my weighted-average publication date is 2014. So, yes, I’m reading the recent literature relevant to my research. Yay! On the other hand, these data suggest that if I don’t get to a paper within a year or two of its publication, I might not ever deeply read it, which is a little sad, given the size of my “want to read” folder.
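That weighted-average publication date is just each publication year weighted by the number of papers read from that year. A minimal sketch, with made-up counts rather than the actual tallies behind Figure 2:

```python
# Weighted-average publication year: weight each year by the number of
# papers read from that year. Counts below are invented for illustration.

papers_per_year = {2010: 2, 2013: 5, 2015: 8, 2016: 15, 2017: 35}

total = sum(papers_per_year.values())
weighted_avg = sum(year * n for year, n in papers_per_year.items()) / total
print(round(weighted_avg, 1))  # -> 2016.0
```

With the reading skewed this heavily toward recent papers, the weighted average sits only a year or two behind the most-read year, which is the pattern described above.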

Line graph showing 0-1 papers per year from 1996 through ~2012, then a sharp rise to a peak of 35 papers per year in 2017.

Figure 2. Publication date for papers I read in 2017. One paper had a 2018 publication date.

I also kept track of first authors, their likely gender, and what country they were affiliated with. I’m kind of thrilled that I read a nearly equal number of papers with women first authors as those by men. 48.6% of the papers I read had women first authors and 47% of the unique first authors were women. This is substantially better than last year, and substantially better than we might expect given the gender disparities that persist in the geosciences. I suspect, though, that some of this comes from my interdisciplinary reading habits and that fields like ecology and public engagement don’t skew so heavily male. On the other hand, about 75% of the papers I read had a first author affiliation in the USA and I still need to break out of the USA-Australia-Canada-UK reading hegemony.

Is the open-access revolution arriving? 33% of the papers I read were available for free from their publishers and another 11% were available elsewhere on the web for free. I’d been thinking that the availability of AGU publications >2 years old might be a reason for my increased ease of paywall-free reading, but when I look at the papers that were available OA, I see that I didn’t actually take advantage of that all that much (mostly because of Figure 2). Also, 33% of the papers that I published in 2017 are available open-access. This paper, with Tara Smith and colleagues, “Prevalence and Characterization of Staphylococcus aureus and Methicillin-Resistant Staphylococcus aureus on Public Recreational Beaches in Northeast Ohio”, was published in AGU’s new open access GeoHealth journal in November.

Over the course of 2017, I read articles from 43 different journals. I found these articles primarily via people (students, colleagues, twitter, the authors) (28), google scholar (18), and tables of contents sent to my email (10), with lower numbers attributed to citation alerts, browsing a journal (on paper or electronically), or the references cited in other papers. Here I need to give a shoutout to my mom, who told me about the paper that ended up being one of my favorites for the year: on phosphorous and nitrogen budgets for urban watersheds, published in PNAS by Sarah Hobbie et al. The top journal I read from was Science (thanks to that digital subscription that comes with AAAS membership), but if you add back in my reviewing and editing, the top journal was Water Resources Research.

As in 2016, I tagged each article I read with a primary topic and sometimes a secondary topic. My top primary topics of the year were (no surprise) urban hydrology, (general) hydrology, and geomorphology. When I add in the secondary topics, I add water quality/geochemistry, climate change/climatology, and science communication to the list of most frequently read topics. This, I feel, is a pretty nice summation of how I saw myself as a scientist in 2017.


I could certainly be doing more reading in a year (especially in the summer), even if I don’t think I’ll ever deeply read anywhere near the 468 papers the average science faculty member allegedly reads. I continue to be concerned that reviewing takes up such a large fraction of my reading space. I’ve found it really helpful to track my reading habits the last few years, and I’ve also been tracking my writing habits for about 6 months now. Tracking my reading and writing has certainly made me more aware of my own productivity patterns, and has changed them for the better (the observer effect in action).

One thing that has been percolating through my mind is that there are lots of exhortations to write every day and to schedule regular time to write. There are even whole books about how to write a lot. Yet, I don’t often see similar calls to schedule regular reading time, read every day, etc. When I do see something about reading the literature, it’s more along the lines of this hilariously true article on someone’s first experience trying to read a scientific article. It’s all too easy to push aside the reading in favor of the “doing” bit of science, whether that’s collecting and analyzing data or writing up the results. As a profession, we reward productivity in the form of papers and grants, and sitting down to deeply read journal articles can feel like wasted time. Yet, if we aren’t regularly reading the literature, we risk that the work we are doing is out-of-date, duplicative, or derivative.

For myself, I’ve learned that I have a very hard time stopping in the middle of a workday to read papers. Most of the time, if I am reading during the workday, I’m looking for a very specific thing or for just big picture understanding. Or I’m reading abstracts and filing papers away to read “later”, whenever that might be. But on mornings when I get up early and the house is quiet, I can make myself a cup of tea and read papers until my family wakes up. Because my mind isn’t in busy-busy-do-all-the-things mode, I’m able to drink in the papers along with my tea. I’ve also found some success in listening to papers via PDF reader while walking the dog and folding the laundry, though that doesn’t work well for math or figure-heavy papers. I’ve also found that I have more ideas and enthusiasm for what I’m working on if I’m reading a lot of papers in my field, and that periods of intense reading are followed by periods of intense data analysis and/or writing productivity. The trick is not to completely drop the reading habit when I’m in those other phases of work.

As all of these thoughts were playing in my head in the last week of 2017, Caitlin MacKenzie’s post gave rise to a great discussion on Twitter about reading habits. In particular, Meghan Duffy and Susana Wadgymar brought up the point about setting aside regular times to do that reading, and Nina Wale reminded us how important it is to our job as writers.

Spurred on by the conversation on Twitter, in 2018, I’ll be continuing to track my reading habits, but I’m also going to try something new. I’m setting aside time on my calendar, most days, for reading. Maybe I’ll tweet it using the #readinghour hashtag Meghan suggested, or maybe I’ll stay away from internet distractions. Sometimes my reading will be in the early morning, but other days it will be in my office, with the door closed, after brewing a cup of tea with my office kettle. We’ll see how it goes, but I’m hopeful that I’ll read more, more regularly, and be overall more creative, happy, and productive because of it. Whatever the case, I’m sure you can look forward to an update here a year from now.

Categories: academic life, by Anne