Simulating radioactive decay

3.8 billion years! 4 billion years! 4.4 billion years! 4.57 billion years! When discussing the age of the Earth in introductory geology, I think it is important for students to know at least the basic principles of where these ages come from. That means explaining radiometric dating, which is consistently a challenging concept for students to get their heads around. This year, my attempts to come up with more useful ways to illustrate radioactive decay led me to code some simple visualisations in Python. I briefly describe them below, along with GIFs of the simulations themselves. Click on the images for larger versions that don’t loop. Please feel free to use them if you find them useful; suggested improvements are also encouraged!

1: Radioactive decay as a random, and yet predictable, process

The basic simulation is a grid of 1000 purple parent atoms. Over each half life step, each parent has a 50-50 chance of decaying into a grey daughter atom. With 1000 atoms, you can nicely see how the proportions of parent to daughter change dramatically as the rock ages, and also see how the process completes after about ten half lives. Repeating the simulation multiple times helps to emphasise that:

– It is a random process: which atoms decay first, and which ones survive the longest, changes between each simulation.
– The overall behaviour of this system is still predictable: the proportions of parent and daughter after each half life step stay roughly the same in each simulation. 1000 atoms is a bit on the small side for perfect statistical behaviour, but by adding the traces of previous runs you can see that the average behaviour converges pretty well.
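For anyone who wants to tinker, here is a minimal sketch of the core of such a simulation – just the survivor counting, not the grid plotting or GIF generation used for the animations:

```python
import random

def survivors(n_parents):
    # each parent atom independently has a 50-50 chance of decaying
    # during one half-life step; return how many survive
    return sum(1 for _ in range(n_parents) if random.random() < 0.5)

def simulate(n_atoms=1000, n_steps=10):
    # number of surviving parent atoms after each half-life step
    counts = [n_atoms]
    for _ in range(n_steps):
        counts.append(survivors(counts[-1]))
    return counts
```

Running `simulate()` a few times shows both behaviours at once: each run's counts differ, but every run hovers near the 1000, 500, 250, … halving sequence, and after about ten steps almost nothing is left.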

In this simulation, 1000 purple parent atoms are decaying into grey daughter atoms. Multiple runs show that this is a random, but predictable process.

In this simulation, 1000 purple parent atoms decay into grey daughter atoms over multiple half-lives. The simulation is repeated 9 times.

2: Different half-lives mean very different rates of decay

In this simulation, I’ve added a second grid of a green radioactive isotope, with a half life that is four times that of the purple atoms. This is an attempt to translate the more abstract ‘half-life’ concept into a physical length of time, and emphasise that different half lives mean different rates of change from parent to daughter. In this case, the purple atoms, which have the shorter half-life, quickly transform into their daughter atoms; the green atoms, with a considerably longer half-life, transform much more slowly. This means that the green atoms persist (and can be used to date the rock by measuring their proportion relative to their daughter) long after the purple has all vanished (making it useless for dating).
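The same scheme generalises to any half-life: the chance that an atom decays during a given time step follows directly from the exponential decay law. A sketch (my own parameterisation, not necessarily the exact code behind the GIFs):

```python
def decay_probability(dt, half_life):
    # chance that a single parent atom decays during a time step of
    # length dt, from N(t) = N0 * 2 ** (-t / half_life)
    return 1 - 2 ** (-dt / half_life)

# purple atoms: half-life of 1 time step; green atoms: 4x longer
p_purple = decay_probability(1, 1)  # 0.5
p_green = decay_probability(1, 4)   # ~0.159
```

So in each frame of the animation, roughly half of the surviving purple atoms turn grey, but only about one in six of the green ones do.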

In this simulation, two isotopes with different half lives, trapped in minerals in the same rock, are decaying over the same interval of time. The purple atoms (top) have a short half-life, so quickly transform into their daughter atoms. The green atoms (bottom) have a half-life that is four times longer, so transform much more slowly.

Simulation of two isotopes with different half lives decaying over the same interval of time. The purple atoms have a half-life 4 times shorter than the green atoms.

Categories: deep time, geology, science education, teaching

All of this has happened before, and all of this will happen again: an introduction to How the Earth Works

A few months ago Katie Hinde posted the story she told her anthropology students in the first class of the semester. Eschewing a run-through of the syllabus, she instead illustrated the overarching themes of her course with a compelling tale of species-hopping disease outbreaks and the cultural behaviours and conflicts that shaped both the course of the outbreak and its aftermath. It’s pretty awesome. You should read it.

Like Katie, I’ve been telling a story at the beginning of the introductory geology course I teach, called How the Earth Works, for a couple of years now. I’m not going to claim it’s as good a story, but I like to think it gives a flavour of the kinds of stories you can tell about the Earth, if you know how to look: stories of how the world slowly remakes itself over hundreds of millions of years, of how the very high was once the very low, and will be again. But I’ve never written it down, which probably means that I don’t actually tell it as well as I could. So to kick off this semester, I thought I’d tell it properly.

It starts, unsurprisingly, with a rock. Rocks are witnesses to the ‘crime’ of Earth history. Geologists are the detectives, trying to tease clues out of the rocks to try and work out what happened, when, and why. Today’s person of lithological interest is from a very special place:

Mount Everest viewed from the southeast.


About 20 feet below the summit of Mount Everest, almost 9 kilometers above sea-level, you find an outcrop of rock that looks like this.*

A rock from just below the summit of Mount Everest

The highest rock in the world. Photo by Callan Bentley

So what secrets does rock taken from the roof of the world hold? Two basic features stand out:

  • It is layered. The dark and light grey bands have been formed by mineral grains slowly settling to the bottom of an ocean, sea, or lake over thousands of years. This is a sedimentary rock.
The rock has dark grey and light grey banding

Layers.

  • It is fractured. There are cracks, and the originally continuous sedimentary layers are offset across them. This rock has been deformed.
Numerous fractures can be seen

Fractures.

These simple observations already tell us a lot, but with a bit more detail, and a little specialist knowledge, we can start to tell a much more vivid story. The next question to ask is: what are the grains that this sedimentary rock is built from like? Firstly, they are extremely small – it’s actually pretty much impossible to pick out individual grains in this picture, because they are only fractions of a millimetre across. This, in itself, tells us something important: such small grains, which are easily wafted away by even the weakest current, only settle out of the water column somewhere very sheltered, or deep enough that the water is undisturbed by even the strongest storms on the surface.

Different minerals have different properties, like colour, and hardness; some minerals are very common in some kinds of rocks, and not in others. Clues such as these point to the teeny tiny, dull grey mineral grains in this rock being made of calcium carbonate (most calcium carbonate is in the form of a mineral called calcite; in this rock it is in a slightly different form with some magnesium mixed in, called dolomite). Most calcium carbonate in the geological record is produced by living organisms, who use it to build protective shells. But the very fine grained crystals we see here are not shell fragments; they are individual grains that precipitated directly out of the water before settling onto the sea bed. This gives us yet more useful information: the right conditions for the calcium and bicarbonate ions dissolved in seawater to spontaneously crystallise into new mineral grains only occur in a few places. Here’s one such place:

The Persian Gulf, viewed from space. On the southern edge, the dark blue water is turned milky by clouds of newly crystallised calcium carbonate suspended in the water column.

Satellite image of the Persian Gulf, with ‘whitings’ (small bright white patches) produced by carbonate formation in the shallow waters off the coast of the United Arab Emirates. The larger-scale cloudiness is probably the result of a phytoplankton bloom. Via the USGS

The Persian Gulf is a warm, shallow sea. In the summer, biological activity (photosynthesis removes CO2 and makes ocean water less acidic) and concentration of dissolved ions by evaporation make it possible to spontaneously precipitate calcium carbonate – the small bright white patches in the satellite image above are basically clouds of small carbonate crystals, suspended in the water, that will eventually settle and accumulate to form limestone on the sea bed.
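The underlying chemistry can be written out explicitly – this is the standard carbonate equilibrium, added here for reference rather than anything stated in the image captions:

```latex
\mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} \;\rightleftharpoons\; \mathrm{CaCO_3}\!\downarrow + \mathrm{CO_2} + \mathrm{H_2O}
```

Photosynthesis draws down CO2 and evaporation concentrates the dissolved ions, both of which push this equilibrium to the right, towards solid carbonate.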

Here’s another place where something similar is happening: the Bahamas.

Satellite view of a cay in the Northern Bahamas, with small patches of whitings colouring the seas around it.

Small patches of whitings in the Northern Bahamas.

In some places, changes in sea-level have exposed the resulting rock above sea-level. Look a little familiar?

Off-white and grey layers of thinly bedded carbonates on a Bahamas beach

Layered carbonates on Warderick Wells, Exuma Cays, Bahamas.
Photo by Zach Clemence

Note the conditions these two places have in common: calm, warm, shallow water. One of the key geological ideas that we will explore in this course is the principle of uniformitarianism: the notion that we can understand past rocks in terms of the processes that shape the modern Earth. So if we see a rock that looks like something that forms today in warm, shallow tropical seas, we expect that that rock also formed in warm, shallow tropical seas. Let’s just remind ourselves where we actually found it.

Mount Everest viewed from the southeast.

*record scratch* *freeze frame* Yup, that’s me. You’re probably wondering how I ended up in this situation.

Clearly, something rather dramatic has happened to this rock since it was formed: something that has lifted it at least 9 kilometres into the air. And the forces that conspired to do this have actually left their mark on our rock, in the form of the fractures we’ve already noted.

Nowadays, most people know how the Himalayas formed: from the collision of two continents, as a result of plate tectonics. In the world’s biggest and slowest car crash, India is moving north into the space occupied by Asia, and the crust in the collision zone is crumpling up, creating the world’s highest mountains.

Illustration of India's northward drift from south of the equator 70 million years ago to its present position colliding with Asia.

Reconstructed motion of India in the last 70 million years. From the USGS.

Uniformitarianism allows us to unravel the what of Earth history; plate tectonics is what allows us to understand the why. Why are rocks that formed at the bottom of the ocean now at the roof of the world? Because the Earth’s surface is split up into a constantly morphing jigsaw puzzle. As the pieces – the rigid plates – jostle and slide against each other, you get earthquakes, volcanoes, and a dramatic reshaping of the Earth’s surface. Where two plates divide, you get new oceans; where two plates collide, you get mountain belts.

We’ll tell the story of how we know this in a couple of weeks. It was a hard-won insight, and a recent one, too: the geologists who taught me – and despite the grey hairs, I’m not that old – actually lived through the discovery. Many of them have now also lived to see us develop the ability to see plate tectonics happening, almost in real time. The GPS in your phone allows you to find the nearest coffee shop, or hail an Uber; but attach a GPS unit to solid rock, and leave it for a few years, and you can observe that some bits of the Earth are moving steadily across the Earth’s surface relative to other bits, at rates of a few centimetres a year. The map below shows how India is still pushing into Asia at about 4 centimetres a year. That may not seem like much, but at that rate it adds up to about 2,500 kilometres of travel in the time since the dinosaurs went extinct (and we believe India was moving several times faster than that prior to the collision).
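A quick back-of-the-envelope check of that arithmetic, assuming a steady 4 centimetres a year sustained over the roughly 66 million years since the end-Cretaceous extinction:

```python
# sanity-checking the plate-motion arithmetic: ~4 cm/yr sustained
# over the ~66 million years since the dinosaurs went extinct
rate_cm_per_year = 4
years = 66_000_000

distance_km = rate_cm_per_year * years / 100 / 1000  # cm -> m -> km
# distance_km == 2640.0, roughly the 2,500 km quoted
```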

Map of northern India and SW Asia, dotted with arrows showing the mostly north and east motion of the crust in the Himalayan collision zone relative to the interior of Eurasia.

The arrows show the direction and speed of motion of GPS stations relative to the interior of Eurasia. India is moving to the northeast at about 4 centimetres a year. The arrows reduce in size as this motion is accommodated by faulting, and change direction where crust is being shoved out of the way rather than getting crumpled up. From Gan et al., 2007

So, from this one rock, we can tell the story of how our planet changes; of how lands that were once at the bottom of a tropical ocean now lord it over the rest of the world’s topography. It’s a pretty good story. You may even have heard a similar one before. But what you may not have heard is that the story doesn’t end there. Because mountains don’t stay still. We’ve just seen that the collision that created the Himalayas is still going. India’s continued motion is pushing the Himalayas, including Mount Everest, ever higher, by around half a centimetre to a centimetre a year. However, even if the land beneath is going up that fast, the summit of Mount Everest is not. Another important Earth process is hard at work trying to grind the mountains down again.

When I spent time in New Zealand doing fieldwork for my PhD, I learnt a phrase: ‘Tall Poppies Syndrome’. It describes how the act of standing out from the crowd focusses the crowd’s attention on you, and often triggers the desire to cut you down to size. In a similar way, elevated rocks are exposing themselves to the water cycle. Mountains create weather. Storms, freezing water, and flowing ice physically and chemically attack the rocks exposed on the peaks – including our limestone – and gradually weaken, fracture, and break them apart. Fragments large and small fall downhill onto the glaciers that fill the valleys. These icy conveyor belts, darkened by their load of debris, flow downhill, out of the Himalayas.

Satellite view of Mount Everest and the surrounding peaks. The summits are covered with bright white snow. The icy glaciers in the valleys between the peaks are a dirty grey colour, covered by debris weathered from the slopes above them.

Around Mount Everest, the blinding white of the snow on the peaks contrasts strongly with the dirty appearance of the glaciers in the surrounding valleys – glaciers covered with rocky debris produced by intense weathering and erosion. Image from NASA’s Earth Observatory

Eventually the glaciers melt, but the water continues to flow downhill, fast enough to carry all but the very largest boulders downhill with it. And water continues to chemically attack the rocks, breaking them down into their individual elements and carrying them downstream as dissolved ions. Because rainwater is slightly acidic, carbonate rocks are particularly prone to chemical disassembly: the rivers flowing out of the Himalayas are loaded with dissolved calcium and bicarbonate ions: the building blocks of future carbonate minerals.

Satellite image of the Ganges and Brahmaputra rivers, draining south from the Himalayas into the Bay of Bengal.

Source to sink: the Himalayas in the north, the Ganges Delta to the South. Source: NASA’s Earth Observatory

And thus it is that the rocks that plate tectonics raised up are cut back down to size and returned to whence they came by water and gravity. When rivers reach the coast, the water slows and drops its load of sand and mud. New land – and eventually, new sedimentary rock – is built. The Bengal fan is a 16 kilometre thick pile of eroded debris, carried out of the Himalayas by the Ganges and Brahmaputra rivers. But the water, and its dissolved ionic passengers, does not stop. Wind-driven currents move it onwards, until the chemical wreckage of the Himalayas is spread throughout every ocean basin, and in the waters of every shallow sea. Places such as the Bahamas, or the Persian Gulf.

You can’t point to an individual particle in those clouds of new mineral grains and say, ‘that one contains calcium from Mount Everest!’, but some of them do. In a cycle that has spanned a whole planet and hundreds of millions of years, elements have moved from water in an ancient ocean, to rock at the bottom of that ocean, to rock at the highest point on the planet, back to the modern oceans, and then back to rock on the sea bed again. For a while, at least. Because Arabia is on the move. Iran is a hotbed of seismic activity as earthquakes accommodate plate convergence.

Map of Magnitude 5 earthquakes in the Persian Gulf between 1900 and 2017.

The concentrated band of seismicity in the Zagros Mountains on the north coast of the Persian Gulf, which continues along the Iran-Iraq border, is a zone of convergence generated by the north-east motion of the Arabian peninsula relative to Asia.

The grand cycles that make up the story of Earth history – cycles of rock, of water, of energy – will continue. The shallow sea now between Arabia and Iran will be thrown up, and crumpled up, and in a future mountain range, an intrepid geologist – maybe human-ish, maybe cockroach – will find a layered, fine-grained, deformed carbonate rock. Hard evidence that the Persian Gulf, long closed up, once existed – until its components are once more returned to a far-future ocean. It’s enough to give you (Cylon) religion.

“All of this has happened before, and all of this will happen again.”

Except, perhaps, for one thing. It is undeniably true that our understanding of how the Earth has operated up to this point can help us understand what the future has in store. But there is also a new geological force at work, one the planet has not seen before: us. Our prodding of the planet may well push it into places it would not have gone without us. If anything, this makes understanding what makes this planet of ours tick even more important. We have found the accelerator; it would be nice to work out where the mirrors and the brakes are as well.

*I haven’t actually touched this rock myself, unfortunately: we have Callan Bentley to thank for the picture.

Categories: academic life, basics, deep time, geology, geomorphology, ice and glaciers, outcrops, past worlds, rocks & minerals, science education, tectonics

Earthquake warning systems are hard, but not having one is worse.

The premise of earthquake early warning systems is simple. An earthquake produces several different kinds of seismic waves that race away from the rupture point. Because they are different kinds of vibrations, they travel at different speeds; and the farther they travel, the more the speedy compressional P-waves pull away from the transverse S-waves, and the more the surface waves lag even further behind.

Cartoon showing a race between P, S and surface waves.

In the race between different seismic waves, fleet-footed P-waves are heralds for their slower and more earth-shaking brethren.

Fortunately for us, the speediest waves are also the weaker, less damaging ones. The P-waves shake us up a little when they arrive, but they are also giving us a heads up that more damaging shaking is on its way. This warning is at most a few tens of seconds, but with the right infrastructure in place this is enough to shut down vital machinery (trains, elevators, nuclear power stations…) and prepare people for incoming shaking. If detected soon enough, the warning can also be sent ahead of the P-waves at the speed of light, giving even more advance warning ahead of the expanding front of seismic energy.

Of course, it is much more challenging to put this simple theory into practice. The small window of opportunity for a timely warning can quickly close if the system is not responsive enough. On the other hand, the degree of automation required to gain that responsiveness can lead to a system that is more easily fooled by complex seismic events. Two recent news stories about two of the countries that actually have working earthquake early warning systems highlight challenges from either end of this balancing act.

Mexico: sneak attack from below

Mexico’s earthquake warning system was put into place after a 1985 magnitude 8 rupture on the subduction thrust off the west coast killed thousands in Mexico City. That is the system’s focus: it was built to detect large ruptures on the subduction zone, and warn the residents of Mexico City, who live on top of a massive seismic wave amplifier.

The system worked as designed for the biggest earthquake of 2017, a M8.2 plate bending event. But it struggled to respond quickly enough in the much closer M7.1 twelve days later – this NPR story starts with an account of the sirens going off only after the strong shaking started. This earthquake, whilst weaker than the M8.2, was much closer to Mexico City, resulting in strong shaking that collapsed buildings and killed several hundred people. And that proximity was a problem for the early warning system: with only around one hundred kilometres to travel, rather than several hundred, the P-waves could only pull a little bit ahead of the S-waves and surface waves, leaving barely any time for a warning to get out. The NPR story linked above indicates that changes are already being made to make the warning system more responsive to these kinds of events.
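To see why proximity matters so much, here is a rough warning-time calculation. The P- and S-wave speeds are assumed typical crustal values (about 6.5 and 3.7 km/s), chosen for illustration rather than taken from the Mexican system itself:

```python
def warning_time(distance_km, vp=6.5, vs=3.7):
    # seconds between P-wave and S-wave arrivals at a site a given
    # distance from the rupture; vp and vs are assumed typical
    # crustal wave speeds in km/s
    return distance_km / vs - distance_km / vp

t_far = warning_time(400)   # ~47 s: a distant subduction-zone rupture
t_near = warning_time(100)  # ~12 s: a quake close to the city
```

Quartering the distance quarters the gap between the harmless herald and the damaging shaking, and that is before any time spent detecting the event and issuing the alert.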

Japan: false positives attack

Last Friday, Japan’s warning system was triggered when it detected P-wave arrivals from what it estimated as a magnitude 6.4 earthquake off the coast of north-eastern Japan. No such event had occurred: instead the Japan Meteorological Agency, who operate the system, reported that the false warning was the result of the early warning system misreading two smaller earthquakes, a M4.4 on the east coast and a M3.9 that occurred on the west coast at the same time, as one larger event.

I was actually interested enough to do a little impromptu data analysis to see if I could work out why the system got fooled. The seismogram for this time from a station in central Japan is a little strange, with very little amplitude variation between the body waves and the surface waves, and earlier P-wave arrivals than expected (a comparison with an M4.7 a little later in the day, in roughly the same location, makes this clear). My speculative interpretation at the time was that the P-waves from the east coast quake reached nearby stations at the same time as surface waves from the smaller, earlier west coast quake. This does seem to have boosted the apparent P-wave magnitude, but by further comparison with the M4.7 seismogram, the boost was clearly not enough to make the signal look like a M6.4. Perhaps it is also a matter of duration: larger ruptures take longer, because a bigger section of the fault is progressively unzipping. If the system interpreted the whole sequence as an extended package of P-waves, that may have been sufficient for the system to mistakenly trigger.
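A rough sanity check on the amplitude argument, using the usual rule of thumb that one magnitude unit corresponds to about a tenfold change in recorded wave amplitude (this ignores distance corrections and differences between magnitude scales, so it is only indicative):

```python
def amplitude_ratio(m_big, m_small):
    # on local magnitude scales, one magnitude unit corresponds to
    # roughly a tenfold change in recorded wave amplitude
    return 10 ** (m_big - m_small)

# relative amplitudes, with the M4.4 as the reference (amplitude 1)
a_m39 = 1 / amplitude_ratio(4.4, 3.9)       # ~0.32
combined = 1 + a_m39                         # ~1.3 if the signals overlap
needed_for_m64 = amplitude_ratio(6.4, 4.4)   # ~100
```

Even with the two signals perfectly superposed, the combined amplitude is less than 2% of what a genuine M6.4 would produce, which is why duration, rather than amplitude alone, seems a more plausible culprit.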

Seismograms from Japan.

Blue: seismogram for the event that triggered Japan’s earthquake early warning system on January 5th, from this station in central Japan. It is probably the hybrid of two events, and looks weird compared to a more normal earthquake (red). Data from IRIS, plotted using Obspy.

Either way, this is a tricky scenario for an automated system to handle, and therein lies the challenge. To save the most lives, people have to respond quickly when an alarm sounds. If you have a computer that cries wolf – a system so sensitive that it is prone to triggering in the absence of a real threat (a false positive) – then people might stop paying attention to it. On the other hand, you don’t want to risk the system failing to trigger when there is a threat (a false negative). This isn’t the first time that Japan’s system has given a false warning – there was also one in August 2016 – but occurrences are hopefully rare enough that the system is still trusted. Even if the alert sound isn’t Godzilla roaring.

Canada (and the US): not quite there yet

The problems described above are largely good problems to have, because you actually have a working earthquake warning system in place to struggle with and improve – a system that may not be perfect, but does save lives. On the west coast of North America, despite the looming threat of the San Andreas Fault and especially the Cascadia subduction zone, a functional warning system is still some way from implementation. This article updates the progress on the Canadian side of the border, where ocean bottom sensors and GPS data are being tied into the network to get more timely and accurate detections. I was all ready to use it as a cudgel to whack the US government over the head with for continuing not to properly fund the ShakeAlert system, when I read more closely and realised that the Canadian system is in exactly the same position. They have an at least mostly working prototype, with sensors, and computers dedicated to processing the output of those sensors to generate alerts. But it is the next step, building the infrastructure to get timely warnings out to those in harm’s way, that is the challenging one. Or perhaps more accurately, the challenging step in the US is securing the funding to do so. It’s not cheap ($40 million to set up and $16 million a year to run, the USGS estimates), but it’s a drop in the federal budget to protect 50 million people on the west coast. The lack of urgency is frustrating – perhaps the Canadians will be more sensible.

Categories: earthquakes, geohazards, geophysics, links, society

What does it mean to read the literature, really? (Anne’s 2017 #365papers in review)

Preface

For the 3rd year in a row, I have meticulously tracked each and every paper, proposal, manuscript, etc. I read for professional reasons. I found the twitter hashtag #365papers, begun by Jacquelyn Gill in early 2015, an appealing way to get a sense of what types of things I was reading, and how many. In 2015, I was particularly curious how having an infant might affect my reading habits. I carried on logging my papers in 2016, when I was teaching a full load, including 2 graduate courses. It turns out that my teaching load does have a big impact on my reading, because I read (and discuss) so much primary literature in class. Based on how helpful I’ve found the process of being analytical about my reading, I continued to log my papers for 2017, as just a professional version of life logging or the quantified self idea. This year, I had a lighter teaching load in the spring (but a high administrative/service load) and was on a research leave in the fall. What difference will I see?

Before we get into the data, a brief pause for a key definition and some discussion of that: What does it mean to read a paper? Here’s what I wrote in 2016:

I only counted papers that I read fully through the results and discussion sections, so there are quite a few papers that I read large chunks of but didn’t make the list because I didn’t finish.

What I’m logging as reading is not opening up a PDF to confirm something that I already think I know or check a key statistic. I’m not reading the abstract and I’m not hunting for a citation that I can plop at the end of a sentence for something I’m writing. I’m deeply reading the paper (introduction, methods, results, and discussion), looking at all of the figures and probably most of the tables. For most papers I count as read, I have either highlighted, made marginal notes, or written a brief synopsis after I’ve finished with them. I know that I’ve spent time with hundreds of other papers this year that never made my spreadsheet for the #365papers project, because in my mind I haven’t truly read them top-to-bottom.

Last week, Caitlin MacKenzie wrote a beautiful post about her own reading for this year, in which she reflects on this idea of deep reading. (Read it!) She talks about how we train ourselves to perfect the art of skimming a paper and how we flick-bounce between things we read, and then she mentions the number of papers that the average science faculty member allegedly reads per year: 468. Let me tell you in advance, I did not deeply read 468 papers in 2017, or 2016, or 2015, or probably any year since 2007 when I became a faculty member. And honestly, I don’t believe that the average science faculty member did either… or at least did not deeply read that number of papers per year. (However, I’ve added the two papers that quantified these statistics to my “to read” pile for 2018 and I’m looking forward to seeing how they defined reading.)

With that important point made, I’ll refer you to my epic 2016 blog post for a detailed set of methods. The only methodological difference this year is that I did count student thesis proposals, because I was deeply reading them and decided that they deserved to be counted.

Results

In 2017, I deeply read 105 items, which falls neatly between 2016 (132) and 2015 (78). It does support my conclusion from last year that teaching grad classes enhanced my reading, but it also makes me wonder what exactly an average year looks like. If I ever have what I think is an average year, I’ll let you know.

Of the items I read, 70 were published journal articles or GSA publications and 12 were manuscripts that I either reviewed or for which I served as associate editor. (I have been an associate editor at Water Resources Research since May.) I know that there are both published articles and manuscripts under review that I read deeply more than once in 2017, but I only counted them one time each. I also reviewed 16 grant proposals, 3 unpublished dissertation chapters, and 4 MS thesis proposals. As in previous years, I did not count a zillion blog posts and news articles (even though I learned a ton from them) and an ungodly number of student thesis drafts and other writing assignments. Relative to 2016, it’s clear that my reading took a hit in the published journal articles arena, as I reviewed slightly more papers and proposals than last year.
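For the record, the categories above do tally to the yearly total:

```python
# tallying the reading categories reported above
items_read = {
    "published articles and GSA publications": 70,
    "manuscripts reviewed or handled as associate editor": 12,
    "grant proposals": 16,
    "unpublished dissertation chapters": 3,
    "MS thesis proposals": 4,
}
total = sum(items_read.values())  # 105, matching the yearly count
```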

Figure 1. Cumulative distribution function of my reading in 2017.

The rhythm of my year comes into stark relief when I examine my patterns of reading over time (Figure 1). I read, slowly but steadily, through spring semester when my service load was heavy, with a push around the time I served on a grant review panel. I also read steadily through most of fall semester’s research leave, and at a faster rate, which makes sense since my whole job was to read and write and think.

It’s the summer though where I see the most interesting phenomena. My reading clicks along steadily through my summer travels, helped by attending a conference and some train and plane journeys. But then there’s a big 20 day gap in July (centered on day 200) that I really can’t explain. And, oddly, there’s a similar gap in my 2016 data. I’m fairly certain that I didn’t just fail to record my reading during that period, and I sort of think it wasn’t just reading, because I remember that in early August I told myself that things had to change if I wanted to make my fall semester a success. The good news is that I did get back on track, picked up the reading pace, and read 50% of my items for the year on or after August 3rd. (I also got 3 papers submitted between August and December.) Heading into 2018, I’ll know to watch out for the July doldrums and hopefully can avert a similar slump.

For 2016, I made a graph that showed why I read the things I read, and I found that my graduate class teaching accounted for 48% of my non-review reading for the year. In 2017, without those graduate classes, teaching-related reading was only 5% of my total. I also broke out graduate students (dissertation chapters, proposals) as their own category, and that took up 12% of my reading space. And of course, some reading should really double-count, e.g., papers we discussed in our lab meeting were categorized as research, but they certainly served a teaching purpose as well. A higher percentage of my reading (11%) revolved around public engagement in 2017, and I did less ‘generally keeping up with my fields’ sort of reading (18%) than in 2016. That left the majority of my reading (54%) categorized as directly related to ongoing research projects. With lots of research irons in the fire and 3 conferences (+reviewing) as vehicles for generally keeping up with my fields, I think this makes a lot of sense for 2017.

Another way of thinking about whether I’m keeping up with the literature in my fields is to look at when the papers I’m reading were published. As in previous years, it looks like I’m mostly reading papers published in the last few years (Figure 2). 50% of the papers I read were published in 2017, and my weighted-average publication date is 2014. So, yes, I’m reading the recent literature relevant to my research. Yay! On the other hand, these data suggest that if I don’t get to a paper within a year or two of its publication, I might not ever deeply read it, which is a little sad, given the size of my “want to read” folder.
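Statistics like these fall straight out of a reading log. A minimal sketch in Python, using made-up yearly counts (not my actual data) to illustrate the share-of-2017 and weighted-average-publication-year calculations:

```python
from collections import Counter

# Hypothetical tally: publication year -> number of papers read that year
papers_by_year = Counter({2017: 35, 2016: 12, 2015: 8, 2010: 5, 2000: 4, 1996: 1})

total = sum(papers_by_year.values())

# Fraction of papers read that were published in 2017
share_2017 = papers_by_year[2017] / total

# Weighted-average publication year, weighted by papers read per year
weighted_avg_year = sum(year * n for year, n in papers_by_year.items()) / total

print(round(share_2017, 2))        # → 0.54
print(round(weighted_avg_year, 1)) # → 2014.7
```

With these invented numbers, just over half the reading is current-year papers, yet the older tail still drags the weighted average back a few years, matching the pattern described above.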

Line graph showing 0-1 papers per year from 1996 through ~2012, then a huge rise to a peak of 35 papers per year in 2017.

Figure 2. Publication date for papers I read in 2017. One paper had a 2018 publication date.

I also kept track of first authors, their likely gender, and what country they were affiliated with. I’m kind of thrilled that I read a nearly equal number of papers with women first authors as those by men. 48.6% of the papers I read had women first authors and 47% of the unique first authors were women. This is substantially better than last year, and substantially better than we might expect given the gender disparities that persist in the geosciences. I suspect, though, that some of this comes from my interdisciplinary reading habits and that fields like ecology and public engagement don’t skew so heavily male. On the other hand, about 75% of the papers I read had a first author affiliation in the USA, and I still need to break out of the USA-Australia-Canada-UK reading hegemony.

Is the open-access revolution arriving? 33% of the papers I read were available for free from their publishers and another 11% were available elsewhere on the web for free. I’d been thinking that the availability of AGU publications >2 years old might be a reason for my increased ease of paywall-free reading, but when I look at the papers that were available OA, I see that I didn’t actually take advantage of that all that much (mostly because of Figure 2). Also, 33% of the papers that I published in 2017 are available open-access. This paper, with Tara Smith and colleagues, “Prevalence and Characterization of Staphylococcus aureus and Methicillin-Resistant Staphylococcus aureus on Public Recreational Beaches in Northeast Ohio”, was published in AGU’s new open access GeoHealth journal in November.

Over the course of 2017, I read articles from 43 different journals. I found these articles primarily via people (students, colleagues, twitter, the authors) (28), google scholar (18), and tables of contents sent to my email (10), with lower numbers attributed to citation alerts, browsing a journal (on paper or electronically), or the reference lists of other papers. Here I need to give a shoutout to my mom, who told me about the paper that ended up being one of my favorites for the year: on phosphorus and nitrogen budgets for urban watersheds, published in PNAS by Sarah Hobbie et al. The top journal I read from was Science (thanks to that digital subscription that comes with AAAS membership), but if you add back in my reviewing and editing, the top journal was Water Resources Research.

As in 2016, I tagged each article I read with a primary topic and sometimes a secondary topic. My top primary topics of the year were (no surprise) urban hydrology, (general) hydrology, and geomorphology. When I add in the secondary topics, I add water quality/geochemistry, climate change/climatology, and science communication to the list of most frequently read topics. This, I feel, is a pretty nice summation of how I saw myself as a scientist in 2017.

Discussion

I could certainly be doing more reading in a year (especially in the summer), even if I don’t think I’ll ever deeply read anywhere near the 468 papers the average science faculty member allegedly reads. I continue to be concerned that reviewing takes up such a large fraction of my reading space. I’ve found it really helpful to track my reading habits the last few years, and I’ve also been tracking my writing habits for about 6 months now. Tracking my reading and writing has certainly made me more aware of my own productivity patterns, and has changed them for the better (the observer effect in action).

One thing that has been percolating through my mind is that there are lots of exhortations to write every day and to schedule regular time to write. There are even whole books about how to write a lot. Yet, I don’t often see similar calls to schedule regular reading time, read every day, etc. When I do see something about reading the literature, it’s more along the lines of this hilariously true article on someone’s first experience trying to read a scientific article. It’s all too easy to push aside the reading in favor of the “doing” bit of science, whether that’s collecting and analyzing data or writing up the results. As a profession, we reward productivity in the form of papers and grants, and sitting down to deeply read journal articles can feel like wasted time. Yet, if we aren’t regularly reading the literature, we risk that the work we are doing is out-of-date, duplicative, or derivative.

For myself, I’ve learned that I have a very hard time stopping in the middle of a workday to read papers. Most of the time, if I am reading during the workday, I’m looking for a very specific thing or for just big picture understanding. Or I’m reading abstracts and filing papers away to read “later”, whenever that might be. But on mornings when I get up early and the house is quiet, I can make myself a cup of tea and read papers until my family wakes up. Because my mind isn’t in busy-busy-do-all-the-things mode, I’m able to drink in the papers along with my tea. I’ve also found some success in listening to papers via PDF reader while walking the dog and folding the laundry, though that doesn’t work well for math or figure-heavy papers. I’ve also found that I have more ideas and enthusiasm for what I’m working on if I’m reading a lot of papers in my field, and that periods of intense reading are followed by periods of intense data analysis and/or writing productivity. The trick is not to completely drop the reading habit when I’m in those other phases of work.

As all of these thoughts were playing in my head in the last week of 2017, Caitlin MacKenzie’s post gave rise to a great discussion on Twitter about reading habits. In particular, Meghan Duffy and Susana Wadgymar brought up the point about setting aside regular times to do that reading, and Nina Wale reminded us how important it is to our job as writers.



Spurred on by the conversation on Twitter, in 2018, I’ll be continuing to track my reading habits, but I’m also going to try something new. I’m setting aside time on my calendar, most days, for reading. Maybe I’ll tweet it using the #readinghour hashtag Meghan suggested, or maybe I’ll stay away from internet distractions. Sometimes my reading will be in the early morning, but other days it will be in my office, with the door closed, after brewing a cup of tea with my office kettle. We’ll see how it goes, but I’m hopeful that I’ll read more, more regularly, and be overall more creative, happy, and productive because of it. Whatever the case, I’m sure you can look forward to an update here a year from now.

Categories: academic life, by Anne

A Seismic Summary of 2017

Plenty of natural disasters hit the news in 2017, but most of the headlines were hogged by disasters linked to extreme weather, such as Hurricane Harvey. Nonetheless, in the background the Earth’s tectonic plates continued bumping and grinding against each other, producing 1,558 earthquakes of magnitude 5 or greater over the past 12 months. As well as a map of individual locations, I’ve also produced a global seismic ‘heat map’, scaled to represent the total energy release in each grid square.
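The heat map boils down to converting each event’s magnitude into a seismic moment and summing the moments within each 5-degree grid square. A minimal sketch with NumPy, using a tiny hypothetical catalogue (four invented events) rather than the full USGS feed:

```python
import numpy as np

# Hypothetical catalogue: (longitude, latitude, moment magnitude) per M5+ event
quakes = np.array([
    (142.4, 38.3, 6.1),   # off Japan (invented)
    (-94.1, 15.0, 8.2),   # stand-in for the Mexico M8.2
    (-93.7, 15.4, 5.8),   # nearby aftershock (invented)
    (45.9, 34.9, 7.3),    # stand-in for the Iran/Iraq M7.3
])
lons, lats, mags = quakes.T

# Moment magnitude -> seismic moment in N·m: M0 = 10^(1.5*Mw + 9.1)
moments = 10 ** (1.5 * mags + 9.1)

# Sum moment release within 5-degree grid squares
lon_bins = np.arange(-180, 185, 5)
lat_bins = np.arange(-90, 95, 5)
heat, _, _ = np.histogram2d(lons, lats, bins=[lon_bins, lat_bins],
                            weights=moments)

# Even in this toy catalogue, the cell containing the M8.2
# accounts for the overwhelming majority of total moment release
print(heat.max() / heat.sum())
```

Because moment grows so steeply with magnitude, the single largest event dominates its grid square, which is exactly why the M8.2 stands out as the darkest cell on the real map.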

Global map with earthquake locations marked as circles, scaled according to their size.

Global Map of 2017 earthquakes, according to the USGS database.

Gridded global map where intensity of colour in each 5 degree grid square represents the total energy released by earthquakes in 2017.

Heat Map of global seismic activity, scaled to total moment release from all M5+ earthquakes in each 5° grid square

Unlike last year, we did see a magnitude 8 earthquake this year, in the form of a M8.2 in the subducting slab off the coast of Mexico. The logarithmic relationship between magnitude and energy release – one unit on the magnitude scale equals 32 times as much energy – means this single earthquake shows up quite clearly as the darkest square on the heat map. A further six earthquakes – about one third of 1% – were between magnitude 7 and 8, and 104 – about 7% – were between magnitude 6 and 7. The remaining 93% were between magnitude 5 and 6.
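The arithmetic behind that dominance is quick to check: energy release scales as 10^(1.5·M) (the standard magnitude-energy relation), so each magnitude unit is a factor of 10^1.5 ≈ 32. A short sketch:

```python
def energy_ratio(m1, m2):
    """Ratio of energy released by a magnitude-m1 quake vs a magnitude-m2 quake,
    using the Gutenberg-Richter energy relation log10(E) ∝ 1.5*M."""
    return 10 ** (1.5 * (m1 - m2))

print(round(energy_ratio(6.0, 5.0)))  # one full magnitude unit → 32
print(round(energy_ratio(8.2, 7.3)))  # the Mexico M8.2 vs the Iran/Iraq M7.3 → 22
```

So even against the year’s largest M7 event, the M8.2 released roughly twenty times more energy, which is why it alone produces the darkest square on the heat map.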

Bar charts showing numbers of magnitude 5 to 6, 6 to 7, 7 to 8 and greater than 8 earthquakes in 2017. The average, maximum and minimum frequencies since 1970, and the average for the past 6 years, are also shown.

Number of earthquakes in different magnitude ranges in 2017, compared to longer term averages and ranges.

If we compare these tallies to the instrumental record of seismic activity since the mid 20th century, 2017 meets expectations at the low and high ends: based on the last 50 years, we expect around 1500 magnitude 5-6 earthquakes in an average year, and maybe one event larger than magnitude 8. In the middle, the numbers of M6-7 and particularly M7-8 earthquakes last year were lower than the 50-year average. Not ‘lower’ in the sense that they’re outside the observed variability in the instrumental record, but lower than they’ve been for some time: 1998 was the last time we had so few M6-7 quakes (109 vs 104), and 1982 was the last time it was lower (83 M6-7 quakes, at the end of a decade of fairly quiet years); 2017 also saw the lowest number of M7-8 (6) since 1980. In other words, it hasn’t been this quiet since I was a wee lad.

Bar charts showing yearly totals of earthquakes in different magnitude ranges since the mid-20th century. Lines show a smoothed, 6-year moving window average.

2017 earthquake frequency compared to the instrumental record since 1950 (note that M5-6 events are largely missing from the catalogue prior to the 1970s).
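The smoothed lines in that figure are just a running mean over the yearly totals. A sketch of the 6-year moving-window average, using invented yearly counts of M6-7 events rather than the real catalogue:

```python
import numpy as np

# Hypothetical yearly counts of M6-7 earthquakes (not the real catalogue)
counts = np.array([142, 178, 168, 144, 151, 185, 108, 123, 143, 127, 130, 104])

# 6-year moving-window average, as plotted over the bar charts
window = 6
smoothed = np.convolve(counts, np.ones(window) / window, mode="valid")
print(smoothed.round(1))
```

With `mode="valid"`, each output point is the mean of a full 6-year window, so the smoothed series is shorter than the raw one but free of edge artefacts.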

One surprising observation is the lack of large (M6-7) aftershocks of the M8.2 Mexico quake in September. The response was strangely muted: the largest aftershocks are mainly M5-6, with a M6.1 right at the edge of the aftershock cloud. The fact that it was an extensional event due to bending within the plate, rather than occurring on the subduction thrust, might explain this. There was a possibly linked M7 shock: an M7.1 close to Mexico City, which was also the year’s second deadliest, behind a M7.3 on the Iran/Iraq border in November. Occurring 650 km away and 12 days later, it was too far away to be affected by permanent stress changes in the crust around the M8.2, and too late to be triggered by the transient stresses associated with passing seismic waves. The possible mechanisms of longer-term triggering at a distance are still poorly understood, but the timing is certainly suspicious.

Interest in decadal trends in global earthquake activity has been boosted recently by a newly published study that suggests a link between changes in the Earth’s rotation rate and the frequency of large earthquakes. As the lead author explains, we’re not talking about generating earthquakes that wouldn’t have happened anyway. Instead a small nudge, in the form of a slight deceleration in the Earth’s rotation that imparts some additional stress on the rigid lithosphere*, causes faults that are already poised to fail to rupture a little bit earlier than perhaps they would have otherwise. The mechanism is plausible: the daring part is that the authors have made a prediction that a recent slowing of the Earth’s spin is going to cause a spike in M7 or greater earthquakes over the next five years. If anything, this year’s relative calm makes monitoring that hypothesis a bit more difficult, because this year’s total of 7 earthquakes above magnitude 7 is a low bar that we’d generally expect to be exceeded even in the absence of any external factor. The question is whether we’ll see an increase significantly above the long-term average of around 15 events a year.

Definitely something to keep an eye on in the next year or five. I have questioned its usefulness for seismic hazard assessment, since we are talking about a relatively small change in a global signal: others have argued that in edge cases (such as a fault at the old end of its known recurrence interval) it may be relevant, and that there are in fact some interested parties with global exposure.

What is true is that, as we have seen again this year, bigger does not necessarily mean badder when it comes to earthquakes: the two deadliest earthquakes this year were both low magnitude 7 events. Any magnitude 7 earthquake represents a substantial hazard if it happens in the wrong place.

*Because the ‘solid’ Earth beneath the lithosphere is ductile and can flow in response to stresses, the Earth’s rotation causes it to bulge at the equator. The size of this bulge depends on the rotation rate; reduce the rotation rate and the Earth will (slowly) flow into a more spherical shape. The lithosphere is cold and rigid, so it instead accumulates stress as the bulge shrinks beneath it.

Categories: earthquakes, geohazards, geophysics, tectonics