Posts Tagged ‘climate models’

I haven’t forgotten about this project! Read the introduction and ODE derivation if you haven’t already.

Last time I derived the following ODE for temperature T at time t:

\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}

where S and τ are constants, and F(t) is the net radiative forcing at time t. Eventually I will discuss each of these terms in detail; this post will focus on S.

At equilibrium, when dT/dt = 0, the ODE necessitates T(t) = S F(t). A physical interpretation for S becomes apparent: it measures the equilibrium change in temperature per unit forcing, also known as climate sensitivity.

A great deal of research has been conducted with the aim of quantifying climate sensitivity, through paleoclimate analyses, modelling experiments, and instrumental data. Overall, these assessments show that climate sensitivity is on the order of 3 K per doubling of CO2 (divide by 5.35 ln 2 W/m2 to convert to warming per unit forcing).

The IPCC AR4 report (note that AR5 was not yet published at the time of my calculations) compared many different probability distribution functions (PDFs) of climate sensitivity, shown below. They follow the same general shape of a shifted distribution with a long tail to the right, and average 5-95% confidence intervals of around 1.5 to 7 K per doubling of CO2.

Box 10.2, Figure 1 of the IPCC AR4 WG1: Probability distribution functions of climate sensitivity (a), 5-95% confidence intervals (b).

These PDFs generally consist of discrete data points that are not publicly available. Consequently, sampling from any existing PDF would be difficult. Instead, I chose to create my own PDF of climate sensitivity, modelled as a log-normal distribution (e raised to the power of a normal distribution) with the same shape and bounds as the existing datasets.

The challenge was to find values for μ and σ, the mean and standard deviation of the corresponding normal distribution, such that for any z sampled from the log-normal distribution,

P(z < 1.5) = 0.05 \quad \textrm{and} \quad P(z < 7) = 0.95

i.e. the 5-95% confidence interval is 1.5 to 7 K per doubling of CO2. Since erf, the error function, cannot be evaluated in closed form, this two-parameter problem must be solved numerically. I built a simple particle swarm optimizer to find the solution, which consistently yielded results of μ = 1.1757, σ = 0.4683.
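For anyone who wants to check the numbers, here is a minimal Python sketch that solves the same two conditions with a standard root-finder (scipy's fsolve) instead of a particle swarm optimizer; the 1.5 K and 7 K bounds are the 5-95% interval quoted above.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erf

def lognormal_cdf(x, mu, sigma):
    # CDF of a log-normal distribution, written in terms of erf
    return 0.5 * (1.0 + erf((np.log(x) - mu) / (sigma * np.sqrt(2.0))))

def conditions(params):
    mu, sigma = params
    # 5% of samples below 1.5 K, 95% below 7 K (per doubling of CO2)
    return [lognormal_cdf(1.5, mu, sigma) - 0.05,
            lognormal_cdf(7.0, mu, sigma) - 0.95]

mu, sigma = fsolve(conditions, x0=[1.0, 0.5])
print(mu, sigma)   # ~1.1757, ~0.4683
```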

The upper tail of a log-normal distribution is unbounded, so I truncated the distribution at 10 K, consistent with existing PDFs (see figure above). At the beginning of each simulation, climate sensitivity in my model is sampled from this distribution and held fixed for the entire run. A histogram of 10^6 sampled points, shown below, has the desired characteristics.

Histogram of 10^6 points sampled from the log-normal distribution used for climate sensitivity in the model.


Note that in order to be used in the ODE, the sampled points must then be converted to units of K m2/W (warming per unit forcing) by dividing by 5.35 ln 2 W/m2, the forcing from doubled CO2.
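Here is a sketch of how the sampling and unit conversion might look in Python (the actual model is written in Matlab, and the resampling approach to truncation is my choice, not necessarily the model's):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.1757, 0.4683        # parameters of the underlying normal distribution
F_2xCO2 = 5.35 * np.log(2.0)      # forcing from doubled CO2, W/m2

def sample_climate_sensitivity(n):
    """Draw n values of climate sensitivity (K per doubling of CO2),
    log-normally distributed and truncated at 10 K by resampling."""
    s = np.exp(rng.normal(mu, sigma, size=n))
    too_high = s > 10.0
    while np.any(too_high):
        s[too_high] = np.exp(rng.normal(mu, sigma, size=too_high.sum()))
        too_high = s > 10.0
    return s

samples = sample_climate_sensitivity(10**6)   # e.g. for the histogram above
S = samples / F_2xCO2                         # K m2/W, as used in the ODE
```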

Read Full Post »

Last time I introduced the concept of a simple climate model which uses stochastic techniques to simulate uncertainty in our knowledge of the climate system. Here I will derive the backbone of this model, an ODE describing the response of global temperature to net radiative forcing. This derivation is based on unpublished work by Nathan Urban – many thanks!

In reality, the climate system should be modelled not as a single ODE, but as a coupled system of hundreds of PDEs in four dimensions. Such a task is about as arduous as numerical science can get, but dozens of research groups around the world have built GCMs (General Circulation Models, or Global Climate Models, depending on who you talk to) which come quite close to this ideal.

Each GCM has taken hundreds of person-years to develop, and I only had eight weeks. So for the purposes of this project, I treat the Earth as a spatially uniform body with a single temperature. This is clearly a huge simplification but I decided it was necessary.

Let’s start by defining T1(t) to be the absolute temperature of this spatially uniform Earth at time t, and let its heat capacity be C. Therefore,

C \: T_1(t) = E

where E is the change in energy required to warm the Earth from 0 K to temperature T1. Taking the time derivative of both sides,

C \: \frac{dT_1}{dt} = \frac{dE}{dt}

Now, divide through by A, the surface area of the Earth:

c \: \frac{dT_1}{dt} = \frac{1}{A} \frac{dE}{dt}

where c = C/A is the heat capacity per unit area. Note that the right side of the equation, a change in energy per unit time per unit area, has units of W/m2. We can express this as the difference of incoming and outgoing radiative fluxes, I(t) and O(t) respectively:

c \: \frac{dT_1}{dt} = I(t)- O(t)

By the Stefan-Boltzmann Law,

c \: \frac{dT_1}{dt} = I(t) - \epsilon \sigma T_1(t)^4

where ϵ is the emissivity of the Earth and σ is the Stefan-Boltzmann constant.

To consider the effect of a change in temperature, suppose that T1(t) = T0 + T(t), where T0 is an initial equilibrium temperature and T(t) is a temperature anomaly. Substituting into the equation,

c \: \frac{d(T_0 + T(t))}{dt} = I(t) - \epsilon \sigma (T_0 + T(t))^4

Noting that T0 is a constant, and also factoring the right side,

c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + \tfrac{T(t)}{T_0})^4

Since the absolute temperature of the Earth is around 280 K, and we are interested in perturbations of around 5 K, we can assume that T(t)/T0 ≪ 1. So we can linearize (1 + T(t)/T0)^4 using a Taylor expansion about T(t) = 0:

c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0} + O[(\tfrac{T(t)}{T_0})^2])

\approx I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0})

= I(t) - \epsilon \sigma T_0^4 - 4 \epsilon \sigma T_0^3 T(t)

Next, let O0 = ϵσT0^4 be the initial outgoing flux. So,

c \: \frac{dT}{dt} = I(t) - O_0 - 4 \epsilon \sigma T_0^3 T(t)

Let F(t) = I(t) - O0 be the radiative forcing at time t. Making this substitution as well as dividing by c, we have

\frac{dT}{dt} = \frac{F(t) - 4 \epsilon \sigma T_0^3 T(t)}{c}

Dividing each term by 4ϵσT0^3 and rearranging the numerator,

\frac{dT}{dt} = - \frac{T(t) - \tfrac{1}{4 \epsilon \sigma T_0^3} F(t)}{\tfrac{c}{4 \epsilon \sigma T_0^3}}

Finally, let S = 1/(4ϵσT0^3) and τ = cS. Our final equation is

\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}

While S depends on the initial temperature T0, all of the model runs for this project begin in the preindustrial period, when global temperature is approximately constant. Therefore, we can treat S as a parameter independent of initial conditions. As I will show in the next post, the uncertainty in S due to climate system dynamics far outweighs any error we might introduce by disregarding T0.
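To make the behaviour of this equation concrete, here is a minimal forward-Euler integration in Python. The step forcing, the timestep, and the values S = 0.8 K m2/W (roughly 3 K per doubling divided by 5.35 ln 2) and τ = 10 years are illustrative placeholders, not the model's actual parameters.

```python
import numpy as np

def integrate_temperature(F, S, tau, dt):
    """Forward-Euler integration of dT/dt = -(T - S*F(t)) / tau.
    F: forcing (W/m2) at each timestep; S: K m2/W; tau, dt: years."""
    T = np.zeros(len(F))
    for i in range(1, len(F)):
        T[i] = T[i-1] - dt * (T[i-1] - S * F[i-1]) / tau
    return T

dt = 0.1
t = np.arange(0.0, 100.0, dt)
F = np.full_like(t, 5.35 * np.log(2.0))         # step forcing: doubled CO2 at t = 0
T = integrate_temperature(F, S=0.8, tau=10.0)   # placeholder S and tau
print(T[-1], 0.8 * 5.35 * np.log(2.0))          # T relaxes to S*F, with e-folding time tau
```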

Read Full Post »

This winter I took a course in computational physics, which has probably been my favourite undergraduate course to date. Essentially it was an advanced numerical methods course, but from a very practical point of view. We got a lot of practice using numerical techniques to solve realistic problems, rather than just analysing error estimates and proving conditions of convergence. As a math student I found this refreshing, and incredibly useful for my research career.

We all had to complete a term project of our choice, and I decided to build a small climate model. I was particularly interested in the stochastic techniques taught in the course, and given that modern GCMs and EMICs are almost entirely deterministic, it was possible that I could contribute something original to the field.

The basic premise of my model is this: All anthropogenic forcings are deterministic, and chosen by the user. Everything else is determined stochastically: parameters such as climate sensitivity are sampled from probability distributions, whereas natural forcings are randomly generated but follow the same general pattern that exists in observations. The idea is to run this model with the same anthropogenic input hundreds of times and build up a probability distribution of future temperature trajectories. The spread in possible scenarios is entirely due to uncertainty in the natural processes involved.
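As a very rough sketch of that premise, assuming the ODE derived in the previous post: the anthropogenic forcing series is fixed, climate sensitivity is drawn from the log-normal distribution described earlier on this page, and natural forcing is replaced here by a placeholder white-noise term (the real model generates natural forcings that mimic observed patterns). The values of τ, the noise amplitude, and the ensemble size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_ensemble(F_anthro, n_runs=500, tau=10.0, dt=1.0):
    """Same anthropogenic forcing every run; climate sensitivity and natural
    forcing are stochastic. Returns an array of temperature trajectories."""
    n = len(F_anthro)
    runs = np.zeros((n_runs, n))
    for k in range(n_runs):
        S = np.exp(rng.normal(1.1757, 0.4683)) / (5.35 * np.log(2.0))  # K m2/W
        F_nat = rng.normal(0.0, 0.5, size=n)    # placeholder natural forcing, W/m2
        for i in range(1, n):
            F = F_anthro[i-1] + F_nat[i-1]
            runs[k, i] = runs[k, i-1] - dt * (runs[k, i-1] - S * F) / tau
    return runs

# e.g. a linear forcing ramp over 100 years, then percentile bands across runs:
runs = run_ensemble(np.linspace(0.0, 4.0, 100))
bands = np.percentile(runs, [5, 50, 95], axis=0)
```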

This approach mimics the real world, because the only part of the climate system we have full control over is our own actions. Other influences on climate are out of our control, sometimes poorly understood, and often unpredictable. It is just begging to be modelled as a stochastic system. (Not that it is actually stochastic, of course; in fact, I understand that nothing is truly stochastic, even random number generators – unless you can find a counterexample using quantum mechanics? But that’s a discussion for another time.)

A word of caution: I built this model in about eight weeks. As such, it is highly simplified and leaves out a lot of processes. You should never ever use it for real climate projections. This project is purely an exercise in numerical methods, and an exploration of the possible role of stochastic techniques in climate modelling.

Over the coming weeks, I will write a series of posts that explains each component of my simple stochastic climate model in detail. I will show the results from some sample simulations, and discuss how one might apply these stochastic techniques to existing GCMs. I also plan to make the code available to anyone who’s interested – it’s written in Matlab, although I might translate it to a free language like Python, partly because I need an excuse to finally learn Python.

I am very excited to finally share this project with you all! Check back soon for the next installment.

Read Full Post »

You may have already heard that carbon dioxide concentrations have surpassed 400 ppm. The most famous monitoring station, Mauna Loa Observatory in Hawaii, reached this value on May 9th. Due to the seasonal cycle, CO2 levels began to decline almost immediately thereafter, but next year they will easily blow past 400 ppm.

Of course, this milestone is largely arbitrary. There’s nothing inherently special about 400 ppm. But it’s a good reminder that while we were arguing about taxation, CO2 levels continued to quietly tick up and up.


In happier news, John Cook and others have just published the most exhaustive survey of the peer-reviewed climate literature to date. Read the paper here (open access), and a detailed but accessible summary here. Unsurprisingly, they found the same 97% consensus that has come up over and over again.

Cook et al. read the abstracts of nearly 12 000 papers published between 1991 and 2011 – every single hit from the ISI Web of Science with the keywords “global climate change” or “global warming”. Several different people categorized each abstract, and the authors were contacted whenever possible to categorize their own papers. Using several independent methods like this makes the results more reliable.

Around two-thirds of the studies, particularly the more recent ones, didn’t mention the cause of climate change. This is unsurprising, since human-caused warming has been common knowledge in the field for years. Similarly, seismology papers don’t usually mention that plate tectonics cause earthquakes, particularly in the abstracts where space is limited.

Among the papers which did express a position, 97.1% said climate change was human-caused. Again, unsurprising to anyone working in the field, but it’s news to many members of the public. The study has been widely covered in the mainstream media – everywhere from The Guardian to The Australian – and even President Obama’s Twitter feed.


Congratulations are also due to Andrew Weaver, my supervisor from last summer, who has just been elected to the British Columbia provincial legislature. He is not only the first-ever Green Party MLA in BC’s history, but also (as far as I know) the first-ever climate scientist to hold public office.

Governments the world over are sorely in need of officials who actually understand the problem of climate change. Nobody fits this description better than Andrew, and I think he is going to be great. The large margin by which he won also indicates that public support for climate action is perhaps higher than we thought.


Finally, my second publication came out this week in Climate of the Past. It describes an EMIC intercomparison project the UVic lab conducted for the next IPCC report, which I helped out with while I was there. The project was so large that we split the results into two papers (the second of which is in press in Journal of Climate). This paper covers the historical experiments – comparing model results from 850-2005 to observations and proxy reconstructions – as well as some idealized experiments designed to measure metrics such as climate sensitivity, transient climate response, and carbon cycle feedbacks.

Read Full Post »

Last week I was lucky enough to attend the Second Workshop on Coupling Technologies for Earth System Models, held at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, USA. I was excited just to visit NCAR, which is one of the top climate research facilities in the world. Not only is it packed full of interesting scientists and great museum displays, but it’s nestled in the Rocky Mountains and so the view from the conference room looks like this:

Photo: the view from the NCAR conference room, 21 February 2013.

Many of the visitors would spend large portions of the coffee breaks just staring out the window…

The conference was focused on couplers – the part of a climate model that ties all the other components (atmosphere, ocean, land, etc.) together. However, the presentations covered (as Rob Jacob put it) “everything that physical scientists don’t care about unless it stops working”. Since I consider myself a physical scientist, this included a lot of concepts I hadn’t thought about before:

  • Parallel processing: Since climate models are so big, it makes sense to multitask by splitting the work over many computer processors. You have to allocate the right number of processors to each component, though: if the atmosphere has too many processors, it will finish its timestep too quickly and sit there waiting until the ocean is done, and vice versa. This is called load balancing, and it gets very tricky as soon as the number of components exceeds two.
  • Scalability: The more processors you use, the faster the model runs, but the speed gain has diminishing returns. If you double the number of processors, you won’t quite double the speed, particularly if the number of processors exceeds 10^4 (a setup which is becoming increasingly affordable for large research groups). Historically, the coupler has not been a code bottleneck (a limiting factor for model speed), but as the number of processors gets very large, that is changing. We have to figure out the most efficient way to couple many small components together, so that climate model speed can continue to keep up with advances in computer hardware. (For a feel for the diminishing returns, see the sketch after this list.)
  • Standardization: Modelling groups across the world are communicating with each other more and more, and using each other’s code. Currently this requires a lot of modifications, because every climate model has a different structure. Everyone seems to agree that it would be great to have a standard interface that allowed you to plug any combination of components together, but of course everyone has a different idea of what that standard should be.
  • Fortran is still the best language for climate models, believe it or not, because it is the fastest language for the kinds of operations required. If a modern, accessible language like Python could compete based on speed, you can bet that new climate models like MPAS would use it.
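The diminishing returns mentioned in the scalability bullet can be illustrated with Amdahl's law (my framing, not something from the workshop): if a fraction p of the work parallelizes perfectly and the rest is serial, the speedup on N processors can never exceed 1/(1 − p), no matter how many processors you add.

```python
def speedup(n_proc, p):
    """Amdahl's law: speedup when a fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n_proc)

# With a hypothetical 1% serial fraction, ten times more processors
# beyond 10^4 buys almost nothing:
for n in [100, 1_000, 10_000, 100_000]:
    print(n, round(speedup(n, p=0.99), 1))   # 50.3, 91.0, 99.0, 99.9
```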

I was at the conference with Steve Easterbrook and his new M.Sc. student Daniel Levy, presenting our bubble diagrams of model architecture. (If you haven’t already, read my AGU poster schpiel first, or none of this will make sense!) As interesting and useful as these diagrams are, there were some flaws in our original analysis:

  1. We didn’t use preprocessed code, meaning that each “model” is actually the code base for many different model configurations. So our estimate of model complexity based on line count is biased towards models which are very configurable, but might not actually be very complex. We can fix this by choosing specific configurations of each model (for consistency, the configuration used in CMIP5 or the equivalent EMIC AR5 intercomparison project) and obtaining preprocessed code from the corresponding institutions.
  2. We sorted the code into components (e.g. atmosphere) and sub-components (e.g. atmospheric aerosols) based on folder structure, which might not reflect the hierarchy of routines formed at runtime. Some modelling groups keep their files very organized, but often code from different parts of the model was mixed together, and separating it out was very much a judgement call. To fix this, we can sort based on the dependency structure (a massive tree graph showing which routines call which): all the descendants of the atmosphere driver are part of the atmosphere component, and so on.
  3. We made our diagrams in Microsoft PowerPoint, which is quite limited, and didn’t allow us to size the bubbles so their area was perfectly proportional to line count. Instead, we just had to eyeball it. We can fix this by using Adobe Illustrator, which is much more advanced and has this capability.

So far, we’ve repeated the analysis for the UK Met Office Model, version HadGEM2-ES. I created the dependency structure by going manually through every file and making good use of grep, which took hours and hours (although it was a nice, menial way to avoid studying for my courses!). Daniel is going to write a Fortran parser to make the job easier next time around. In the meantime, our HadGEM2-ES diagram is absolutely gorgeous and wonderfully accurate:
HadGEM2-ES
I will post future diagrams as they become available. We think the main use of these diagrams will be as communication tools between scientists, so they are free to use with attribution.

Just a few more weeks of classes, then I can enjoy some full-time research. Now that I’ve had a taste of being a proper scientist, it’s hard to go back!

Read Full Post »

Lately I have been reading a lot about the Paleocene-Eocene Thermal Maximum, or PETM, which is my favourite paleoclimatic event (is it weird to have a favourite?). This episode of rapid global warming 55 million years ago is particularly relevant to our situation today, because it was clearly caused by greenhouse gases. Unfortunately, the rest of the story is far less clear.

Paleocene mammals

The PETM happened about 10 million years after the extinction that killed the dinosaurs. The Age of Mammals was well underway, although humans wouldn’t appear in any form for another few million years. It was several degrees warmer, to start with, than today’s conditions. Sea levels would have been higher, and there were probably no polar ice caps.

Then, over several thousand years, the world warmed by between 5 and 8°C. It seems to have happened in a few bursts, against a background of slower temperature increase. Even the deep ocean, usually a very stable thermal environment, warmed by at least 5°C. It took around a hundred thousand years for the climate system to recover.

Such rapid global warming hasn’t been seen since, although it’s possible (probable?) that human-caused warming will surpass this rate, if it hasn’t already. It is particularly troubling to realize that our species has never before experienced an event like the one we’re causing today. The climate has changed before, but humans generally weren’t there to see it.

The PETM is marked in the geological record by a sudden jump in the amount of “light” carbon in the climate system. Carbon comes in different isotopes, two of which are most important for climate analysis: carbon with 7 neutrons (13C), and carbon with 6 neutrons (12C). Different carbon cycle processes sequester these forms of carbon in different amounts. Biological processes like photosynthesis preferentially take 12C out of the air in the form of CO2, while geological processes like subduction of the Earth’s crust take anything that’s part of the rock. When the carbon comes back up, the ratios of 12C to 13C are preserved: emissions from the burning of fossil fuels, for example, are relatively “light” because they originated from the tissues of living organisms; emissions from volcanoes are more or less “normal” because they came from molten crust that was once the ocean floor.

In order to explain the isotopic signature of the PETM, you need to add to the climate system either a massive amount of carbon that’s somewhat enriched in light carbon, or a smaller amount of carbon that’s extremely enriched in light carbon, or (most likely) something in the middle. The carbon came in the form of CO2, or possibly CH4 that soon oxidized to form CO2. That, in turn, almost certainly caused the warming.
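The trade-off described above is just a two-member mixing calculation. Here is a sketch with purely illustrative numbers – the reservoir size and the isotopic signatures are hypothetical, chosen only to show the shape of the trade-off, not actual PETM values.

```python
def excursion(M0, d13C_0, M_add, d13C_add):
    """Change in d13C (per mil) of a carbon reservoir of mass M0 (PgC) after
    mixing in M_add (PgC) of carbon with signature d13C_add. Simple mass balance."""
    mixed = (M0 * d13C_0 + M_add * d13C_add) / (M0 + M_add)
    return mixed - d13C_0

# Hypothetical 40,000 PgC surface carbon reservoir at -1 per mil: a similar
# excursion can come from a lot of moderately light carbon or much less of
# very light (methane-like) carbon.
print(excursion(40_000, -1.0, 10_000, -22.0))   # ~ -4.2 per mil
print(excursion(40_000, -1.0, 3_000, -60.0))    # ~ -4.1 per mil
```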

There was a lot of warming, though, so there must have been a great deal of carbon. We don’t know exactly how much, because the warming power of CO2 depends on how much is already present in the atmosphere, and estimates for initial CO2 concentration during the PETM vary wildly. However, the carbon injection was probably something like 5 trillion tonnes. This is comparable to the amount of carbon we could emit today from burning all our fossil fuel reserves. That’s a heck of a lot of carbon, and what nobody can figure out is where it all came from.

Arguably the most popular hypothesis is methane hydrates. On continental shelves, methane gas (CH4) is frozen into the ocean floor. Microscopic cages of water contain a single molecule of methane each, but when the water melts the methane is released and bubbles up to the surface. Today there are about 10 trillion tonnes of carbon stored in methane hydrates. In the PETM the levels were lower, but nobody is sure by how much.

The characteristics of methane hydrates seem appealing as an explanation for the PETM. They are very enriched in 12C, meaning less of them would be needed to cause the isotopic shift. They discharge rapidly and build back up slowly, mirroring the sudden onset and slow recovery of the PETM. The main problem with the methane hydrate hypothesis is that there might not have been enough of them to account for the warming observed in the fossil record.

However, remember that in order to release their carbon, methane hydrates must first warm up enough to melt. So some other agent could have started the warming, which then triggered the methane release and the sudden bursts of warming. There is no geological evidence for any particular source – everything is speculative, except for the fact that something spat out all this CO2.

Magnified foraminifera

Don’t forget that where there is greenhouse warming, there is ocean acidification. The ocean is great at soaking up greenhouse gases, but this comes at a cost to organisms that build shells out of calcium carbonate (CaCO3, the same chemical that makes up chalk). CO2 in the water forms carbonic acid, which starts to dissolve their shells. Likely for this reason, the PETM caused a mass extinction of benthic foraminifera (foraminifera = microscopic organisms with CaCO3 shells; benthic = living on the ocean floor).

Other groups of animals seemed to do okay, though. There was a lot of rearranging of habitats – species would disappear in one area but flourish somewhere else – but no mass extinction like the one that killed the dinosaurs. The fossil record can be deceptive in this manner, though, because it only preserves a small number of species. By sheer probability, the most abundant and widespread organisms are most likely to appear in the fossil record. There could be many organisms that were less common, or lived in restricted areas, that went extinct without leaving any signs that they ever existed.

Climate modellers really like the PETM, because it’s a historical example of exactly the kind of situation we’re trying to understand using computers. If you add a few trillion tonnes of carbon to the atmosphere in a relatively short period of time, how much does the world warm and what happens to its inhabitants? The PETM ran this experiment for us in the real world, and can give us some idea of what to expect in the centuries to come. If only it had left more data behind for us to discover.

References:
Pagani et al., 2006
Dickens, 2011
McInerney and Wing, 2011

Read Full Post »

Today my very first scientific publication is appearing in Geophysical Research Letters. During my summer at UVic, I helped out with a model intercomparison project regarding the effect of climate change on Atlantic circulation, and was listed as a coauthor on the resulting paper. I suppose I am a proper scientist now, rather than just a scientist larva.

The Atlantic meridional overturning circulation (AMOC for short) is an integral part of the global ocean conveyor belt. In the North Atlantic, a massive amount of water near the surface, cooling down on its way to the poles, becomes dense enough to sink. From there it goes on a thousand-year journey around the world – inching its way along the bottom of the ocean, looping around Antarctica – before finally warming up enough to rise back to the surface. A whole multitude of currents depend on the AMOC, most famously the Gulf Stream, which keeps Europe pleasantly warm.

Some have hypothesized that climate change might shut down the AMOC: the extra heat and freshwater (from melting ice) coming into the North Atlantic could conceivably lower the density of surface water enough to stop it sinking. This happened as the world was coming out of the last ice age, in an event known as the Younger Dryas: a huge ice sheet over North America suddenly gave way, drained into the North Atlantic, and shut down the AMOC. Europe, cut off from the Gulf Stream and at the mercy of the ice-albedo feedback, experienced another thousand years of glacial conditions.

A shutdown today would not lead to another ice age, but it could cause some serious regional cooling over Europe, among other impacts that we don’t fully understand. Today, though, there’s a lot less ice to start with. Could the AMOC still shut down? If not, how much will it weaken due to climate change? So far, scientists have answered these two questions with “probably not” and “something like 25%” respectively. In this study, we analysed 30 climate models (25 complex CMIP5 models, and 5 smaller, less complex EMICs) and came up with basically the same answer. It’s important to note that none of the models include dynamic ice sheets (computational glacial dynamics is a headache and a half), which might affect our results.

Models ran the four standard RCP experiments from 2006-2100. Not every model completed every RCP, and some extended their simulations to 2300 or 3000. In total, there were over 30 000 model years of data. We measured the “strength” of the AMOC using the standard unit Sv (Sverdrups), where each Sv is 1 million cubic metres of water per second.

Only two models simulated an AMOC collapse, and only at the tail end of the most extreme scenario (RCP8.5, which quite frankly gives me a stomachache). Bern3D, an EMIC from Switzerland, showed a MOC strength of essentially zero by the year 3000; CNRM-CM5, a GCM from France, stabilized near zero by 2300. In general, the models showed only a moderate weakening of the AMOC by 2100, with best estimates ranging from a 22% drop for RCP2.6 to a 40% drop for RCP8.5 (with respect to preindustrial conditions).

Are these somewhat-reassuring results trustworthy? Or is the Atlantic circulation in today’s climate models intrinsically too stable? Our model intercomparison also addressed that question, using a neat little scalar metric known as Fov: the net amount of freshwater travelling from the AMOC to the South Atlantic.

The current thinking in physical oceanography is that the AMOC is more or less binary – it’s either “on” or “off”. When AMOC strength is below a certain level (let’s call it A), its only stable state is “off”, and the strength will converge to zero as the currents shut down. When AMOC strength is above some other level (let’s call it B), its only stable state is “on”, and if you were to artificially shut it off, it would bounce right back up to its original level. However, when AMOC strength is between A and B, both conditions can be stable, so whether it’s on or off depends on where it started. This phenomenon is known as hysteresis, and is found in many systems in nature.

This figure was not part of the paper. I made it just now in MS Paint.

Here’s the key part: when AMOC strength is less than A or greater than B, Fov is positive and the system is monostable. When AMOC strength is between A and B, Fov is negative and the system is bistable. The physical justification for Fov is its association with the salt advection feedback, the sign of which is opposite Fov: positive Fov means the salt advection feedback is negative (i.e. stabilizing the current state, so monostable); a negative Fov means the salt advection feedback is positive (i.e. reinforcing changes in either direction, so bistable).

Most observational estimates (largely ocean reanalyses) have Fov as slightly negative. If models’ AMOCs really were too stable, their Fov’s should be positive. In our intercomparison, we found both positives and negatives – the models were kind of all over the place with respect to Fov. So maybe some models are overly stable, but certainly not all of them, or even the majority.

As part of this project, I got to write a new section of code for the UVic model, which calculated Fov each timestep and included the annual mean in the model output. Software development on a large, established project with many contributors can be tricky, and the process involved a great deal of head-scratching, but it was a lot of fun. Programming is so satisfying.
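For the curious, here is a rough Python sketch of the usual Fov definition from the literature – the zonally integrated baroclinic velocity times the zonal-mean salinity at a section near 34°S, scaled by a reference salinity. This is my reconstruction for an idealized rectangular section, not the UVic code.

```python
import numpy as np

def fov(v, S, dz, width, S0=35.0):
    """Overturning freshwater transport (Sv) across a zonal section.
    v: meridional velocity (m/s), shape (nz, nx); S: salinity (psu), same shape;
    dz: layer thicknesses (m), shape (nz,); width: section width (m).
    Assumes a full rectangular section with no land mask."""
    vbar = v.mean(axis=1)                            # zonal-mean velocity profile
    vstar = vbar - np.sum(vbar * dz) / np.sum(dz)    # remove the barotropic (net) flow
    Sbar = S.mean(axis=1)                            # zonal-mean salinity profile
    transport = -(1.0 / S0) * np.sum(vstar * Sbar * dz) * width   # m3/s
    return transport / 1.0e6                         # 1 Sv = 10^6 m3/s
```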

Beyond that, my main contribution to the project was creating the figures and calculating the multi-model statistics, which got a bit unwieldy as the model count approached 30, but we made it work. I am now extremely well-versed in IDL graphics keywords, which I’m sure will come in handy again. Unfortunately I don’t think I can reproduce any figures here, as the paper’s not open-access.

I was pretty paranoid while coding and doing calculations, though – I kept worrying that I would make a mistake, never catch it, and have it dredged out by contrarians a decade later (“Kate-gate”, they would call it). As a climate scientist, I suppose that comes with the job these days. But I can live with it, because this stuff is just so darned interesting.

Read Full Post »

During my summer at UVic, two PhD students at the lab (Andrew MacDougall and Chris Avis) as well as my supervisor (Andrew Weaver) wrote a paper modelling the permafrost carbon feedback, which was recently published in Nature Geoscience. I read a draft version of this paper several months ago, and am very excited to finally share it here.

Studying the permafrost carbon feedback is at once exciting (because it has been left out of climate models for so long) and terrifying (because it has the potential to be a real game-changer). There is about twice as much carbon frozen into permafrost as there is floating around in the entire atmosphere. As high CO2 levels cause the world to warm, some of the permafrost will thaw and release this carbon as more CO2 – causing more warming, and so on. Previous climate model simulations involving permafrost have measured the CO2 released during thaw, but haven’t actually applied it to the atmosphere and allowed it to change the climate. This UVic study is the first to close that feedback loop (in climate model speak we call this “fully coupled”).

The permafrost part of the land component was already in place – it was developed for Chris’s PhD thesis, and implemented in a previous paper. It involves converting the existing single-layer soil model to a multi-layer model where some layers can be frozen year-round. Also, instead of the four RCP scenarios, the authors used DEPs (Diagnosed Emission Pathways): exactly the same as RCPs, except that CO2 emissions, rather than concentrations, are given to the model as input. This was necessary so that extra emissions from permafrost thaw would be taken into account by concentration values calculated at the time.

As a result, permafrost added an extra 44, 104, 185, and 279 ppm of CO2 to the atmosphere for DEP 2.6, 4.5, 6.0, and 8.5 respectively. However, the extra warming by 2100 was about the same for each DEP, with central estimates around 0.25 °C. Interestingly, the logarithmic effect of CO2 on climate (adding 10 ppm to the atmosphere causes more warming when the background concentration is 300 ppm than when it is 400 ppm) managed to cancel out the increasing amounts of permafrost thaw. By 2300, the central estimates of extra warming were more variable, and ranged from 0.13 to 1.69 °C when full uncertainty ranges were taken into account. Altering climate sensitivity (by means of an artificial feedback), in particular, had a large effect.
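The logarithmic effect is easy to check with the same 5.35 ln(C/C0) forcing expression used earlier in these posts; the 10 ppm example from the paragraph above works out roughly like this:

```python
import numpy as np

def co2_forcing(C, C0):
    """Radiative forcing (W/m2) for a change in CO2 concentration from C0 to C ppm."""
    return 5.35 * np.log(C / C0)

print(co2_forcing(310.0, 300.0))   # ~0.18 W/m2
print(co2_forcing(410.0, 400.0))   # ~0.13 W/m2: the same 10 ppm does less at a higher background
```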

As a result of the thawing permafrost, the land switched from a carbon sink (net CO2 absorber) to a carbon source (net CO2 emitter) decades earlier than it would have otherwise – before 2100 for every DEP. The ocean kept absorbing carbon, but in some scenarios the carbon source of the land outweighed the carbon sink of the ocean. That is, even without human emissions, the land was emitting more CO2 than the ocean could soak up. Concentrations kept climbing indefinitely, even if human emissions suddenly dropped to zero. This is the part of the paper that made me want to hide under my desk.

This scenario wasn’t too hard to reach, either – if climate sensitivity was greater than 3°C warming per doubling of CO2 (about a 50% chance, as 3°C is the median estimate by scientists today), and people followed DEP 8.5 to at least 2013 before stopping all emissions (a very intense scenario, but I wouldn’t underestimate our ability to dig up fossil fuels and burn them really fast), permafrost thaw ensured that CO2 concentrations kept rising on their own in a self-sustaining loop. The scenarios didn’t run past 2300, but I’m sure that if you left it long enough the ocean would eventually win and CO2 would start to fall. The ocean always wins in the end, but things can be pretty nasty until then.

As if that weren’t enough, the paper goes on to list a whole bunch of reasons why their values are likely underestimates. For example, they assumed that all emissions from permafrost were CO2, rather than the much stronger CH4 which is easily produced in oxygen-depleted soil; the UVic model is also known to underestimate Arctic amplification of climate change (how much faster the Arctic warms than the rest of the planet). Most of the uncertainties – and there are many – are in the direction we don’t want, suggesting that the problem will be worse than what we see in the model.

This paper went in my mental “oh shit” folder, because it made me realize that we are starting to lose control over the climate system. No matter what path we follow – even if we manage slightly negative emissions, i.e. artificially removing CO2 from the atmosphere – this model suggests we’ve got an extra 0.25°C in the pipeline due to permafrost. It doesn’t sound like much, but add that to the 0.8°C we’ve already seen, and take technological inertia into account (it’s simply not feasible to stop all emissions overnight), and we’re coming perilously close to the big nonlinearity (i.e. tipping point) that many argue is between 1.5 and 2°C. Take political inertia into account (most governments are nowhere near even creating a plan to reduce emissions), and we’ve long passed it.

Just because we’re probably going to miss the first tipping point, though, doesn’t mean we should throw up our hands and give up. 2°C is bad, but 5°C is awful, and 10°C is unthinkable. The situation can always get worse if we let it, and how irresponsible would it be if we did?

Read Full Post »

Near the end of my summer at the UVic Climate Lab, all the scientists seemed to go on vacation at the same time and us summer students were left to our own devices. I was instructed to teach Jeremy, Andrew Weaver’s other summer student, how to use the UVic climate model – he had been working with weather station data for most of the summer, but was interested in Earth system modelling too.

Jeremy caught on quickly to the basics of configuration and I/O, and after only a day or two, we wanted to do something more exciting than the standard test simulations. Remembering an old post I wrote, I dug up this paper (open access) by Damon Matthews and Ken Caldeira, which modelled geoengineering by reducing incoming solar radiation uniformly across the globe. We decided to replicate their method on the newest version of the UVic ESCM, using the four RCP scenarios in place of the old A2 scenario. We only took CO2 forcing into account, though: other greenhouse gases would have been easy enough to add in, but sulphate aerosols are spatially heterogeneous and would complicate the algorithm substantially.

Since we were interested in the carbon cycle response to geoengineering, we wanted to prescribe CO2 emissions, rather than concentrations. However, the RCP scenarios prescribe concentrations, so we had to run the model with each concentration trajectory and find the equivalent emissions timeseries. Since the UVic model includes a reasonably complete carbon cycle, it can “diagnose” emissions by calculating the change in atmospheric carbon, subtracting contributions from land and ocean CO2 fluxes, and assigning the residual to anthropogenic sources.
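In code terms, the diagnosis is a simple residual. A hedged sketch (the array names, units, and sign convention are mine, not the UVic model's):

```python
import numpy as np

def diagnose_emissions(atm_carbon, land_to_atm, ocean_to_atm, dt=1.0):
    """Diagnosed anthropogenic emissions (PgC/yr): the change in atmospheric
    carbon minus whatever the land and ocean fluxes account for.
    Fluxes are positive into the atmosphere; all inputs are annual series."""
    dC_dt = np.gradient(atm_carbon, dt)     # rate of change of atmospheric carbon
    return dC_dt - land_to_atm - ocean_to_atm
```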

After a few failed attempts to represent geoengineering without editing the model code (e.g., altering the volcanic forcing input file), we realized it was unavoidable. Model development is always a bit of a headache, but it makes you feel like a superhero when everything falls into place. The job was fairly small – just a few lines that culminated in equation 1 from the original paper – but it still took several hours to puzzle through the necessary variable names and header files! Essentially, every timestep the model calculates the forcing from CO2 and reduces incoming solar radiation to offset that, taking changing planetary albedo into account. When we were confident that the code was working correctly, we ran all four RCPs from 2006-2300 with geoengineering turned on. The results were interesting (see below for further discussion) but we had one burning question: what would happen if geoengineering were suddenly turned off?
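As a rough reconstruction of the idea (not necessarily the exact form of equation 1): globally averaged absorbed solar radiation is S0(1 − α)/4, so offsetting a CO2 forcing F requires reducing incoming solar by the fraction F / [S0(1 − α)/4]. The sketch below assumes a fixed albedo, whereas the model recalculates planetary albedo every timestep.

```python
import numpy as np

SOLAR_CONSTANT = 1361.0   # W/m2 (assumed value)

def solar_reduction_fraction(co2_ppm, co2_preind=280.0, albedo=0.3):
    """Fractional cut in incoming solar radiation needed to offset CO2 forcing.
    Uses a fixed planetary albedo for simplicity."""
    forcing = 5.35 * np.log(co2_ppm / co2_preind)            # W/m2
    absorbed_solar = SOLAR_CONSTANT * (1.0 - albedo) / 4.0   # global mean, W/m2
    return forcing / absorbed_solar

print(solar_reduction_fraction(560.0))   # doubled CO2: roughly a 1.6% reduction
```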

By this time, having completed several thousand years of model simulations, we realized that we were getting a bit carried away. But nobody else had models in the queue – again, they were all on vacation – so our simulations were running three times faster than normal. Using restart files (written every 100 years) as our starting point, we turned off geoengineering instantaneously for RCPs 6.0 and 8.5, after 100 years as well as 200 years.

Results

Similarly to previous experiments, our representation of geoengineering still led to sizable regional climate changes. Although average global temperatures fell down to preindustrial levels, the poles remained warmer than preindustrial while the tropics were cooler:

Also, nearly everywhere on the globe became drier than in preindustrial times. Subtropical areas were particularly hard-hit. I suspect that some of the drying over the Amazon and the Congo is due to deforestation since preindustrial times, though:

Jeremy also made some plots of key one-dimensional variables for RCP8.5, showing the results of no geoengineering (i.e. the regular RCP – yellow), geoengineering for the entire simulation (red), and geoengineering turned off in 2106 (green) or 2206 (blue):

It only took about 20 years for average global temperature to fall back to preindustrial levels. Changes in solar radiation definitely work quickly. Unfortunately, changes in the other direction work quickly too: shutting off geoengineering overnight led to rates of warming of up to 5°C per decade, as the climate system finally reacted to all the extra CO2. To put that in perspective, we’re currently warming at around 0.2°C per decade, which already far surpasses the rates of historical climate changes like the Ice Ages.

Sea level rise (due to thermal expansion only – the ice sheet component of the model isn’t yet fully implemented) is directly related to temperature, but changes extremely slowly. When geoengineering is turned off, the reversals in sea level trajectory look more like linear offsets from the regular RCP.

Sea ice area, in contrast, reacts quite quickly to changes in temperature. Note that this data gives annual averages, rather than annual minimums, so we can’t tell when the Arctic Ocean first becomes ice-free. Also, note that sea ice area is declining ever so slightly even with geoengineering – this is because the poles are still warming a little bit, while the tropics cool.

Things get really interesting when you look at the carbon cycle. Geoengineering actually reduced atmospheric CO2 concentrations compared to the regular RCP. This was expected, due to the dual nature of carbon cycle feedbacks. Geoengineering allows natural carbon sinks to enjoy all the benefits of high CO2 without the associated drawbacks of high temperatures, and these sinks become stronger as a result. From looking at the different sinks, we found that the sequestration was due almost entirely to the land, rather than the ocean:

In this graph, positive values mean that the land is a net carbon sink (absorbing CO2), while negative values mean it is a net carbon source (releasing CO2). Note the large negative spikes when geoengineering is turned off: the land, adjusting to the sudden warming, spits out much of the carbon that it had previously absorbed.

Within the land component, we found that the strengthening carbon sink was due almost entirely to soil carbon, rather than vegetation:

This graph shows total carbon content, rather than fluxes – think of it as the integral of the previous graph, but discounting vegetation carbon.

Finally, the lower atmospheric CO2 led to lower dissolved CO2 in the ocean, and alleviated ocean acidification very slightly. Again, this benefit quickly went away when geoengineering was turned off.

Conclusions

Is geoengineering worth it? I don’t know. I can certainly imagine scenarios in which it’s the lesser of two evils, and find it plausible (even probable) that we will reach such a scenario within my lifetime. But it’s not something to undertake lightly. As I’ve said before, desperate governments are likely to use geoengineering whether or not it’s safe, so we should do as much research as possible ahead of time to find the safest form of implementation.

The modelling of geoengineering is in its infancy, and I have a few ideas for improvement. In particular, I think it would be interesting to use a complex atmospheric chemistry component to allow for spatial variation in the forcing reduction through sulphate aerosols: increase the aerosol optical depth over one source country, for example, and let it disperse over time. I’d also like to try modelling different kinds of geoengineering – sulphate aerosols as well as mirrors in space and iron fertilization of the ocean.

Jeremy and I didn’t research anything that others haven’t, so this project isn’t original enough for publication, but it was a fun way to stretch our brains. It was also a good topic for a post, and hopefully others will learn something from our experiments.

Above all, leave over-eager summer students alone at your own risk. They just might get into something like this.

Read Full Post »

Arctic sea ice is in the midst of a record-breaking melt season. This is yet another symptom of human-caused climate change progressing much faster than scientists anticipated.

Every year, the frozen surface of the Arctic Ocean waxes and wanes, covering the largest area in February or March and the smallest in September. Over the past few decades, these September minima have been getting smaller and smaller. The lowest sea ice extent on record occurred in 2007, followed closely by 2011, 2008, 2010, and 2009. That is, the five lowest years on record all happened in the past five years. While year-to-year weather conditions, like summer storms, impact the variability of Arctic sea ice cover, the undeniable downward trend can only be explained by human-caused climate change.

The 2012 melt season started off hopefully, with April sea ice extent near the 1979-2000 average. Then things took a turn for the worse, and sea ice was at record or near-record low conditions for most of the summer. In early August, a storm spread out the remaining ice, exacerbating the melt. Currently, sea ice is significantly below the previous record for this time of year. See the light blue line in the figure below:

The 2012 extent is already the fifth-lowest value on record for any day of the year – and the worst part is, the ice will keep melting for about another month. At this rate, it’s looking pretty likely that we’ll break the 2007 record and hit an all-time low in September. Sea ice volume, not just extent, is in the same situation.

Computer models of the climate system have a difficult time reproducing this sudden melt. As recently as 2007, the absolute worst-case projections showed summer Arctic sea ice disappearing around 2100. Based on observations, scientists are now confident that will happen well before 2050, and possibly within a decade. Climate models, which many pundits like to dismiss as “alarmist,” actually underestimated the severity of the problem. Uncertainty cuts both ways.

The impacts of an ice-free Arctic Ocean will be wide-ranging and severe. Luckily, melting sea ice does not contribute to sea level rise (only landlocked ice does, such as the Greenland and Antarctic ice sheets), but many other problems remain. The Inuit peoples of the north, who depend on sea ice for hunting, will lose an essential source of food and culture. Geopolitical tensions regarding ownership of the newly-accessible Arctic waters are likely. Changes to the Arctic food web, from blooming phytoplankton to dwindling polar bears, will irreversibly alter the ecosystem. While scientists don’t know exactly what this new Arctic will look like, it is certain to involve a great deal of disruption and suffering.

Daily updates on Arctic sea ice conditions are available from the NSIDC website.

Read Full Post »
