# The Most Terrifying Papers I Read Last Year

An ice sheet forms when snow falls on land, compacts into ice, and forms a system of interconnected glaciers which gradually flow downhill like play-dough. In Antarctica, it is so cold that the ice flows right into the ocean before it melts, sometimes hundreds of kilometres from the coast. These giant slabs of ice, floating on the ocean while still attached to the continent, are called ice shelves.

For an ice sheet to have constant size, the mass of ice added from snowfall must equal the mass lost due to melting and calving (when icebergs break off). Since this ice loss mainly occurs at the edges, the rate of ice loss will depend on how fast glaciers can flow towards the edges.

Ice shelves slow down this flow. They hold back the glaciers behind them in what is known as the “buttressing effect”. If the ice shelves were smaller, the glaciers would flow much faster towards the ocean, melting and calving more ice than snowfall inland could replace. This situation is called a “negative mass balance”, which leads directly to global sea level rise.

Respect the ice shelves. They are holding back disaster.

Ice shelves are perhaps the most important part of the Antarctic ice sheet for its overall stability. Unfortunately, they are also the part of the ice sheet most at risk. This is because they are the only bits touching the ocean. And the Antarctic ice sheet is not directly threatened by a warming atmosphere – it is threatened by a warming ocean.

The atmosphere would have to warm outrageously in order to melt the Antarctic ice sheet from the top down. Snowfall tends to be heaviest when temperatures are just below 0°C, but temperatures at the South Pole rarely go above -20°C, even in the summer. So atmospheric warming will likely lead to a slight increase in snowfall over Antarctica, adding to the mass of the ice sheet. Unfortunately, the ocean is warming at the same time. And a slightly warmer ocean will be very good at melting Antarctica from the bottom up.

This is partly because ice melts faster in water than it does in air, even if the air and the water are the same temperature. But the ocean-induced melting will be exacerbated by some unlucky topography: over 40% of the Antarctic ice sheet (by area) rests on bedrock that is below sea level.

Elevation of the bedrock underlying Antarctica. All of the blue regions are below sea level. (Figure 9 of Fretwell et al.)

This means that ocean water can melt its way in and get right under the ice, and gravity won’t stop it. The grounding lines, where the ice sheet detaches from the bedrock and floats on the ocean as an ice shelf, will retreat. Essentially, a warming ocean will turn more of the Antarctic ice sheet into ice shelves, which the ocean will then melt from the bottom up.

This situation is especially risky on a retrograde bed, where bedrock gets deeper below sea level as you go inland – like a giant, gently sloping bowl. Retrograde beds occur because of isostatic loading (the weight of an ice sheet pushes the crust down, making the tectonic plate sit lower in the mantle) as well as glacial erosion (the ice sheet scrapes away the surface bedrock over time). Ice sheets resting on retrograde beds are inherently unstable: once the grounding lines reach the edge of the “bowl”, they will eventually retreat all the way to the bottom of the “bowl”, even if the ocean water intruding beneath the ice doesn’t get any warmer. This instability occurs because the melting point of ice decreases as you go deeper in the ocean, where pressures are higher. In other words, the deeper the ice is in the ocean, the easier it is to melt; equivalently, the deeper a grounding line is in the ocean, the easier it is to make it retreat. On a retrograde bed, retreating grounding lines get deeper, so they retreat more easily, which makes them deeper still, and the retreat feeds on itself even if the ocean stops warming.
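To put a rough number on that pressure effect: the in-situ freezing point of seawater drops approximately linearly with depth, by about 0.00075°C per metre (this is the pressure term of the standard UNESCO freezing-point formula; the full formula also depends on salinity). At typical Southern Ocean salinities the surface freezing point sits near -1.9°C, so the base of ice grounded 1000 m below sea level can be melted by water about 0.75°C colder than what would melt ice at the surface.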

Diagram of an ice shelf on a retrograde bed (“Continental shelf”)

Which brings us to Terrifying Paper #1, by Rignot et al. A decent chunk of West Antarctica, called the Amundsen Sea Sector, is melting particularly quickly. The grounding lines of ice shelves in this region have been rapidly retreating (several kilometres per year), as this paper shows using satellite data. Unfortunately, the Amundsen Sea Sector sits on a retrograde bed, and the grounding lines have now passed its edge. This retrograde bed is so huge that the ice it underpins would cause 1.2 metres of global sea level rise. We’re now committed to losing that ice eventually, even if the ocean stopped warming tomorrow. “Upstream of the 2011 grounding line positions,” Rignot et al. write, “we find no major bed obstacle that would prevent the glaciers from further retreat and draw down the entire basin.”

They look at each source glacier in turn, and it’s pretty bleak:

• Pine Island Glacier: “A region where the bed elevation is smoothly decreasing inland, with no major hill to prevent further retreat.”
• Smith/Kohler Glaciers: “Favorable to more vigorous ice shelf melt even if the ocean temperature does not change with time.”
• Thwaites Glacier: “Everywhere along the grounding line, the retreat proceeds along clear pathways of retrograde bed.”

Only one small glacier, Haynes Glacier, is not necessarily doomed, since there are mountains in the way that cut off the retrograde bed.

From satellite data, you can already see the ice sheet speeding up its flow towards the coast, due to the loss of buttressing as the ice shelves thin: “Ice flow changes are detected hundreds of kilometers inland, to the flanks of the topographic divides, demonstrating that coastal perturbations are felt far inland and propagate rapidly.”

It will probably take a few centuries for the Amundsen Sector to fully disintegrate. But that 1.2 metres of global sea level rise is coming eventually, on top of what we’ve already seen from other glaciers and thermal expansion, and there’s nothing we can do to stop it (short of geoengineering). We’re going to lose a lot of coastal cities because of this glacier system alone.

Terrifying Paper #2, by Mengel & Levermann, examines the Wilkes Basin Sector of East Antarctica. This region contains enough ice to raise global sea level by 3 to 4 metres. Unlike the Amundsen Sector, we aren’t yet committed to losing this ice, but it wouldn’t be too hard to reach that point. The Wilkes Basin glaciers rest on a system of deep troughs in the bedrock. The troughs are currently full of ice, but if seawater got in there, it would melt all the way along the troughs without needing any further ocean warming – like a very bad retrograde bed situation. The entire Wilkes Basin would change from ice sheet to ice shelf, bringing along that 3-4 metres of global sea level rise.

It turns out that the only thing stopping seawater from getting into the troughs is a very small bit of ice, equivalent to only 8 centimetres of global sea level rise, which Mengel & Levermann nickname the “ice plug”. As long as the ice plug is there, this sector of the ice sheet is stable; but take the ice plug away, and the whole thing will eventually fall apart even if the ocean stops warming. Simulations with an ice sheet model suggest it would take at least 200 years of elevated ocean temperatures to melt this ice plug, and possibly much longer, depending on how much warmer the ocean gets. 200 years sounds like a long time for us to find a solution to climate change, but it actually takes much longer than that for the ocean to cool back down after it’s been warmed up.

This might sound like all bad news. And you’re right, it is. But it can always get worse. That means we can always stop it from getting worse. That’s not necessarily good news, but at least it’s empowering. The sea level rise we’re already committed to, whether it’s 1 or 2 or 5 metres, will be awful. But it’s much better than 58 metres, which is what we would get if the entire Antarctic ice sheet melted. Climate change is not an all-or-nothing situation; it falls on a spectrum. We will have to deal with some amount of climate change no matter what. The question of “how much” is for us to decide.

# With a Little Help from the Elephant Seals

A problem which has plagued oceanography since the very beginning is a lack of observations. We envy atmospheric scientists with their surface stations and satellite data that monitor virtually the entire atmosphere in real time. Until very recently, all that oceanographers had to work with were measurements taken by ships. This data was very sparse in space and time, and was biased towards certain ship tracks and seasons.

A lack of observations makes life difficult for ocean modellers, because there is very little to compare the simulations to. You can’t have confidence in a model if you have no way of knowing how well it’s performing, and you can’t make many improvements to a model without an understanding of its shortcomings.

Our knowledge of the ocean took a giant leap forward in 2000, when a program called Argo began. “Argo floats” are smallish instruments floating around in the ocean that control their own buoyancy, rising and sinking between the surface and about 2000 m depth. They use a CTD sensor to measure Conductivity (from which you can easily calculate salinity), Temperature, and Depth. Every 10 days they surface and send these measurements to a satellite. Argo floats are battery-powered and last for about 4 years before losing power. After this point they are sacrificed to the ocean, because collecting them would be too expensive.
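The conductivity-to-salinity step is standard oceanographic practice, and easy to reproduce yourself. Here is a minimal sketch in Python using the TEOS-10 gsw package (the input values are made up for illustration; Argo’s actual processing chain is more involved):

```python
import gsw  # TEOS-10 Gibbs SeaWater toolbox (pip install gsw)

# Made-up values of the kind a CTD sensor reports:
C = 35.0    # conductivity (mS/cm)
t = 2.0     # in-situ temperature (deg C)
p = 1000.0  # sea pressure (dbar), roughly 1000 m depth

# Practical salinity follows from conductivity, temperature and
# pressure via the standard TEOS-10 conversion.
SP = gsw.SP_from_C(C, t, p)
print(SP)   # practical salinity (dimensionless)
```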

This is what an Argo float looks like while it’s being deployed:

With at least 27 countries helping with deployment, the number of active Argo floats is steadily rising. At the time of this writing, there were 3748 in operation, with good coverage everywhere except in the polar oceans:

The result of this program is a massive amount of high-quality, high-resolution data for temperature and salinity in the surface and intermediate ocean. A resource like this is invaluable for oceanographers, analogous to the global network of weather stations used by atmospheric scientists. It allows us to better understand the current state of the ocean, to monitor trends in temperature and salinity as climate change continues, and to assess the skill of ocean models.

But it’s still not good enough. There are two major shortcomings to Argo floats. First, they can’t withstand the extreme pressure in the deep ocean, so they don’t sink below about 2000 m depth. Since the average depth of the world’s oceans is around 4000 m, the Argo program is only sampling the upper half. Fortunately, a new program called Deep Argo has developed floats which can withstand pressures down to 6000 m depth, covering all but the deepest ocean trenches. Last June, two prototypes were successfully deployed off the coast of New Zealand, and the data collected so far is looking good. If all future Argo floats were of the Deep Argo variety, in five or ten years we would know as much about the deep ocean’s temperature and salinity structure as we currently know about the surface. To oceanographers, particularly those studying bottom water formation and transport, there is almost nothing more exciting than this prospect.

The other major problem with Argo floats is that they can’t handle sea ice. Even if they manage to get underneath the ice by drifting in sideways, the next time they rise to the surface they will bash into the underside of the ice, get stuck, and stay there until their battery dies. This is a major problem for scientists like me who study the Southern Ocean (surrounding Antarctica), which is largely covered with sea ice for much of the year. This ocean will be incredibly important for sea level rise, because the easiest way to destabilise the Antarctic Ice Sheet is to warm up the ocean and melt the ice shelves (the edges of the ice sheet which extend over the ocean) from below. But we can’t monitor this process using Argo data, because there is a big gap in observations over the region. There’s always the manual option – sending in scientists to take measurements – but this is very expensive, and nobody wants to go there in the winter.

Instead, oceanographers have recently teamed up with biologists to try another method of data collection, which is just really excellent:

They are turning seals into Argo floats that can navigate sea ice.

Southern elephant seals swim incredible distances in the Southern Ocean, and often dive as far as 2000 m below the surface. Scientists are utilising the seals’ natural talents to fill in the gaps in the Argo network, so far with great success. Each seal is tranquilized while a miniature CTD is glued to the fur on its head, after which it is released back into the wild. As the seal swims around, the sensors take measurements and communicate with satellites just like regular Argo floats. The next time the seal sheds its coat (once per year), the CTD falls off and the seal gets on with its life, probably wondering what that whole thing was about.

This project is relatively new and it will be a few years before it’s possible to identify trends in the data. It’s also not clear whether or not the seals tend to swim right underneath the ice shelves, where observations would be most useful. But if this dataset gains popularity among oceanographers, and seals become officially integrated into the Argo network…

…then we will be the coolest scientists of all.

# What I am doing with my life

After a long hiatus – much longer than I like to think about or admit to – I am finally back. I just finished the last semester of my undergraduate degree, which was by far the busiest few months I’ve ever experienced.

This was largely due to my honours thesis, on which I spent probably three times more effort than was warranted. I built a (not very good, but still interesting) model of ocean circulation and implemented it in Python. It turns out that (surprise, surprise) it’s really hard to get a numerical solution to the Navier-Stokes equations to converge. I now have an enormous amount of respect for ocean models like MOM, POP, and NEMO, which are extremely realistic as well as extremely stable. I also feel like I know the physics governing ocean circulation inside out, which will definitely be useful going forward.

Convocation is not until early June, so I am spending the month of May back in Toronto working with Steve Easterbrook. We are finally finishing up our project on the software architecture of climate models, and writing it up into a paper which we hope to submit early this summer. It’s great to be back in Toronto, and to have a chance to revisit all of the interesting places I found the first time around.

In August I will be returning to Australia to begin a PhD in Climate Science at the University of New South Wales, with Katrin Meissner and Matthew England as my supervisors. I am so, so excited about this. It was a big decision to make but ultimately I’m confident it was the right one, and I can’t wait to see what adventures Australia will bring.

# A Simple Stochastic Climate Model: Climate Sensitivity

Last time I derived the following ODE for temperature T at time t:

$\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}$

where S and τ are constants, and F(t) is the net radiative forcing at time t. Eventually I will discuss each of these terms in detail; this post will focus on S.

At equilibrium, when dT/dt = 0, the ODE reduces to T(t) = S F(t). This gives a physical interpretation for S: it measures the equilibrium change in temperature per unit forcing, also known as climate sensitivity.

A great deal of research has been conducted with the aim of quantifying climate sensitivity, through paleoclimate analyses, modelling experiments, and instrumental data. Overall, these assessments show that climate sensitivity is on the order of 3 K per doubling of CO2 (to convert to warming per unit forcing, divide by 5.35 ln 2 ≈ 3.7 W/m2, the radiative forcing from doubled CO2, which gives roughly 0.8 K m2/W).

The IPCC AR4 report (note that AR5 was not yet published at the time of my calculations) compared many different probability density functions (PDFs) of climate sensitivity, shown below. They share the same general shape: a skewed distribution with a long tail to the right, and average 5-95% confidence intervals of around 1.5 to 7 K per doubling of CO2.

Box 10.2, Figure 1 of the IPCC AR4 WG1: Probability distribution functions of climate sensitivity (a), 5-95% confidence intervals (b).

These PDFs generally consist of discrete data points that are not publicly available. Consequently, sampling from any existing PDF would be difficult. Instead, I chose to create my own PDF of climate sensitivity, modelled as a log-normal distribution (e raised to the power of a normal distribution) with the same shape and bounds as the existing datasets.

The challenge was to find values for μ and σ, the mean and standard deviation of the corresponding normal distribution, such that for any z sampled from the log-normal distribution,

$P(z < 1.5) = \tfrac{1}{2} \left[ 1 + \textrm{erf} \left( \frac{\ln 1.5 - \mu}{\sigma \sqrt{2}} \right) \right] = 0.05$

$P(z < 7) = \tfrac{1}{2} \left[ 1 + \textrm{erf} \left( \frac{\ln 7 - \mu}{\sigma \sqrt{2}} \right) \right] = 0.95$

matching the average 5-95% confidence interval of 1.5 to 7 K per doubling of CO2 from the existing PDFs.
Since erf, the error function, cannot be evaluated analytically, this two-parameter problem must be solved numerically. I built a simple particle swarm optimizer to find the solution, which consistently yielded results of μ = 1.1757, σ = 0.4683.
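(In hindsight, this particular system can also be inverted directly, since each condition pins down one quantile of the underlying normal distribution. A quick sketch in Python with scipy, rather than the particle swarm I actually used, reproduces the same values:)

```python
import numpy as np
from scipy.stats import norm, lognorm

# P(z < 1.5) = 0.05 and P(z < 7) = 0.95 fix two quantiles of the
# underlying normal distribution, so mu and sigma follow directly:
z95 = norm.ppf(0.95)                           # ~1.6449
mu = (np.log(1.5) + np.log(7)) / 2             # midpoint in log space
sigma = (np.log(7) - np.log(1.5)) / (2 * z95)
print(mu, sigma)                               # 1.1757, 0.4683

# Sanity check: the resulting log-normal has the desired tails.
dist = lognorm(s=sigma, scale=np.exp(mu))
print(dist.cdf(1.5), dist.cdf(7))              # 0.05, 0.95
```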

The upper tail of a log-normal distribution is unbounded, so I truncated the distribution at 10 K, consistent with existing PDFs (see figure above). At the beginning of each simulation, climate sensitivity in my model is sampled from this distribution and held fixed for the entire run. A histogram of 10^6 sampled points, shown below, has the desired characteristics.

Histogram of 10^6 points sampled from the log-normal distribution used for climate sensitivity in the model.

Note that in order to be used in the ODE, the sampled points must then be converted to units of K m2/W (warming per unit forcing) by dividing by 5.35 ln 2 W/m2, the forcing from doubled CO2.
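Putting those pieces together, the sampling step might look something like the following minimal sketch (in Python rather than the model’s actual Matlab, with names invented for illustration):

```python
import numpy as np

rng = np.random.default_rng()
MU, SIGMA = 1.1757, 0.4683
F_2X = 5.35 * np.log(2)   # forcing from doubled CO2, ~3.7 W/m2

def sample_climate_sensitivity():
    """Draw S from the log-normal (K per CO2 doubling), truncated
    at 10 K, and convert to warming per unit forcing (K m2/W)."""
    while True:
        s = rng.lognormal(MU, SIGMA)
        if s <= 10.0:          # reject the unbounded upper tail
            return s / F_2X    # ready to use in the ODE
```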

# Bits and Pieces

Now that the academic summer is over, I have left Australia and returned home to Canada. It is great to be with my friends and family again, but I really miss the ocean and the giant monster bats. Not to mention the lab: after four months as a proper scientist, it’s very hard to be an undergrad again.

While I continue to settle in, move to a new apartment, and recover from jet lag (which is way worse in this direction!), here are a few pieces of reading to tide you over:

Scott Johnson from Ars Technica wrote a fabulous piece about climate modelling, and the process by which scientists build and test new components. The article is accurate and compelling, and features interviews with two of my former supervisors (Steve Easterbrook and Andrew Weaver) and lots of other great communicators (Gavin Schmidt and Richard Alley, to name a few).

I have just started reading A Short History of Nearly Everything by Bill Bryson. So far, it is one of the best pieces of science writing I have ever read. As well as being funny and easy to understand, it makes me excited about areas of science I haven’t studied since high school.

Finally, my third and final paper from last summer in Victoria was published in the August edition of Journal of Climate. The full text (subscription required) is available here. It is a companion paper to our recent Climate of the Past study, and compares the projections of EMICs (Earth System Models of Intermediate Complexity) when forced with different RCP scenarios. In a nutshell, we found that even after anthropogenic emissions fall to zero, it takes a very long time for CO2 concentrations to recover, even longer for global temperatures to start falling, and longer still for sea level rise (caused by thermal expansion alone, i.e. neglecting the melting of ice sheets) to stabilize, let alone reverse.

# A Simple Stochastic Climate Model: Deriving the Backbone

Last time I introduced the concept of a simple climate model which uses stochastic techniques to simulate uncertainty in our knowledge of the climate system. Here I will derive the backbone of this model, an ODE describing the response of global temperature to net radiative forcing. This derivation is based on unpublished work by Nathan Urban – many thanks!

In reality, the climate system should be modelled not as a single ODE, but as a coupled system of hundreds of PDEs in four dimensions. Such a task is about as arduous as numerical science can get, but dozens of research groups around the world have built GCMs (General Circulation Models, or Global Climate Models, depending on who you talk to) which come quite close to this ideal.

Each GCM has taken hundreds of person-years to develop, and I only had eight weeks. So for the purposes of this project, I treat the Earth as a spatially uniform body with a single temperature. This is clearly a huge simplification but I decided it was necessary.

Let’s start by defining T1(t) to be the absolute temperature of this spatially uniform Earth at time t, and let its heat capacity be C. Therefore,

$C \: T_1(t) = E$

where E is the change in energy required to warm the Earth from 0 K to temperature T1. Taking the time derivative of both sides,

$C \: \frac{dT_1}{dt} = \frac{dE}{dt}$

Now, divide through by A, the surface area of the Earth:

$c \: \frac{dT_1}{dt} = \frac{1}{A} \frac{dE}{dt}$

where c = C/A is the heat capacity per unit area. Note that the right side of the equation, a change in energy per unit time per unit area, has units of W/m2. We can express this as the difference of incoming and outgoing radiative fluxes, I(t) and O(t) respectively:

$c \: \frac{dT_1}{dt} = I(t) - O(t)$

By the Stefan-Boltzmann Law,

$c \: \frac{dT_1}{dt} = I(t) - \epsilon \sigma T_1(t)^4$

where ϵ is the emissivity of the Earth and σ is the Stefan-Boltzmann constant.

To consider the effect of a change in temperature, suppose that T1(t) = T0 + T(t), where T0 is an initial equilibrium temperature and T(t) is a temperature anomaly. Substituting into the equation,

$c \: \frac{d(T_0 + T(t))}{dt} = I(t) - \epsilon \sigma (T_0 + T(t))^4$

Noting that T0 is a constant, and also factoring the right side,

$c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + \tfrac{T(t)}{T_0})^4$

Since the absolute temperature of the Earth is around 280 K, and we are interested in perturbations of around 5 K, we can assume that T(t)/T0 ≪ 1. So we can linearize (1 + T(t)/T0)^4 using a Taylor expansion about T(t) = 0:

$c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0} + O[(\tfrac{T(t)}{T_0})^2])$

$\approx I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0})$

$= I(t) - \epsilon \sigma T_0^4 - 4 \epsilon \sigma T_0^3 T(t)$

Next, let O0 = ϵσT0^4 be the initial outgoing flux. So,

$c \: \frac{dT}{dt} = I(t) - O_0 - 4 \epsilon \sigma T_0^3 T(t)$

Let F(t) = I(t) – O0 be the radiative forcing at time t. Making this substitution as well as dividing by c, we have

$\frac{dT}{dt} = \frac{F(t) - 4 \epsilon \sigma T_0^3 T(t)}{c}$

Dividing each term by 4ϵσT0^3 and rearranging the numerator,

$\frac{dT}{dt} = - \frac{T(t) - \tfrac{1}{4 \epsilon \sigma T_0^3} F(t)}{\tfrac{c}{4 \epsilon \sigma T_0^3}}$

Finally, let S = 1/(4ϵσT0^3) and τ = cS. Our final equation is

$\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}$

While S depends on the initial temperature T0, all of the model runs for this project begin in the preindustrial period when global temperature is approximately constant. Therefore, we can treat S as a parameter independent of initial conditions. As I will show in the next post, the uncertainty in S based on climate system dynamics far overwhelms any error we might introduce by disregarding T0.
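To make the backbone concrete, here is a minimal sketch of how this ODE could be stepped forward in time, using forward Euler for simplicity (all numbers are illustrative, and not necessarily what the actual model uses):

```python
import numpy as np

def integrate(F, S, tau, dt=0.1, n_years=200, T_init=0.0):
    """Forward-Euler integration of dT/dt = -(T - S*F(t))/tau.

    F: callable returning net radiative forcing (W/m2) at time t (years)
    S: climate sensitivity (K m2/W)
    tau: relaxation timescale (years)
    """
    n = int(n_years / dt)
    T = np.empty(n)
    T[0] = T_init
    for i in range(1, n):
        T[i] = T[i-1] - dt * (T[i-1] - S * F(i * dt)) / tau
    return T

# Illustrative run: switch on a constant doubled-CO2 forcing at t = 0
# and watch T relax toward the equilibrium value S*F.
T = integrate(F=lambda t: 3.7, S=0.8, tau=5.0)
print(T[-1])   # approaches S*F = 0.8 * 3.7, i.e. about 3 K
```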

# A Simple Stochastic Climate Model: Introduction

This winter I took a course in computational physics, which has probably been my favourite undergraduate course to date. Essentially it was an advanced numerical methods course, but from a very practical point of view. We got a lot of practice using numerical techniques to solve realistic problems, rather than just analysing error estimates and proving conditions of convergence. As a math student I found this refreshing, and incredibly useful for my research career.

We all had to complete a term project of our choice, and I decided to build a small climate model. I was particularly interested in the stochastic techniques taught in the course, and given that modern GCMs and EMICs are almost entirely deterministic, it was possible that I could contribute something original to the field.

The basic premise of my model is this: All anthropogenic forcings are deterministic, and chosen by the user. Everything else is determined stochastically: parameters such as climate sensitivity are sampled from probability distributions, whereas natural forcings are randomly generated but follow the same general pattern that exists in observations. The idea is to run this model with the same anthropogenic input hundreds of times and build up a probability distribution of future temperature trajectories. The spread in possible scenarios is entirely due to uncertainty in the natural processes involved.
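In skeleton form, the whole premise fits in a few lines of Python (all helper names here are hypothetical stand-ins for the model’s actual components):

```python
import numpy as np

def run_ensemble(anthropogenic_forcing, n_runs=500):
    """Monte Carlo ensemble: the same human input every run, with the
    uncertain pieces re-sampled each time (helpers are hypothetical)."""
    trajectories = []
    for _ in range(n_runs):
        S = sample_climate_sensitivity()       # stochastic parameter
        natural = generate_natural_forcing()   # random, observation-like
        F = lambda t: anthropogenic_forcing(t) + natural(t)
        trajectories.append(integrate(F, S, tau=30.0))
    # One row per possible future; the spread across rows is the
    # uncertainty due to natural processes.
    return np.array(trajectories)
```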

This approach mimics the real world, because the only part of the climate system we have full control over is our own actions. Other influences on climate are out of our control, sometimes poorly understood, and often unpredictable. It is just begging to be modelled as a stochastic system. (Not that it is actually stochastic, of course; in fact, I understand that nothing is truly stochastic, even random number generators – unless you can find a counterexample using quantum mechanics? But that’s a discussion for another time.)

A word of caution: I built this model in about eight weeks. As such, it is highly simplified and leaves out a lot of processes. You should never ever use it for real climate projections. This project is purely an exercise in numerical methods, and an exploration of the possible role of stochastic techniques in climate modelling.

Over the coming weeks, I will write a series of posts that explains each component of my simple stochastic climate model in detail. I will show the results from some sample simulations, and discuss how one might apply these stochastic techniques to existing GCMs. I also plan to make the code available to anyone who’s interested – it’s written in Matlab, although I might translate it to a free language like Python, partly because I need an excuse to finally learn Python.

I am very excited to finally share this project with you all! Check back soon for the next installment.