What I am doing with my life

After a long hiatus – much longer than I like to think about or admit to – I am finally back. I just finished the last semester of my undergraduate degree, which was by far the busiest few months I’ve ever experienced.

This was largely due to my honours thesis, on which I spent probably three times more effort than was warranted. I built a (not very good, but still interesting) model of ocean circulation and implemented it in Python. It turns out that (surprise, surprise) it’s really hard to get a numerical solution to the Navier-Stokes equations to converge. I now have an enormous amount of respect for ocean models like MOM, POP, and NEMO, which are extremely realistic as well as extremely stable. I also feel like I know the physics governing ocean circulation inside out, which will definitely be useful going forward.

Convocation is not until early June, so I am spending the month of May back in Toronto working with Steve Easterbrook. We are finally finishing up our project on the software architecture of climate models, and writing it up into a paper which we hope to submit early this summer. It’s great to be back in Toronto, and to have a chance to revisit all of the interesting places I found the first time around.

In August I will be returning to Australia to begin a PhD in Climate Science at the University of New South Wales, with Katrin Meissner and Matthew England as my supervisors. I am so, so excited about this. It was a big decision to make but ultimately I’m confident it was the right one, and I can’t wait to see what adventures Australia will bring.

More on Phytoplankton

On the heels of my last post about iron fertilization of the ocean, I found another interesting paper on the topic. This one, written by Long Cao and Ken Caldeira in 2010, was much less hopeful.

Instead of a small-scale field test, Cao and Caldeira decided to model iron fertilization using the ocean GCM from Lawrence Livermore National Laboratory. To account for uncertainties, they chose to calculate an upper bound on iron fertilization rather than a most likely scenario. That is, they maxed out phytoplankton growth until something else became the limiting factor – in this case, phosphates. On every single cell of the sea surface, the model phytoplankton were programmed to grow until phosphate concentrations were zero.
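The “upper bound” logic is simple enough to sketch in a few lines of Python. This toy version, with invented phosphate values and a fixed carbon-to-phosphorus (Redfield) ratio as its only biology, is just meant to illustrate the idea of maxing out growth in every surface cell; it is not the Cao and Caldeira model.

```python
import numpy as np

# Toy upper bound on iron fertilization: in every surface cell, phytoplankton
# grow until phosphate is driven to zero (iron is assumed to never run out).
n_lat, n_lon = 90, 180
phosphate = np.random.uniform(0.0, 2.0, size=(n_lat, n_lon))  # mmol P m^-3, invented values

C_TO_P = 106.0      # Redfield ratio: roughly 106 carbon atoms fixed per phosphorus atom

carbon_fixed = C_TO_P * phosphate   # all available phosphate converted into biomass carbon
phosphate[:] = 0.0                  # phosphate is now the exhausted limiting nutrient

print(f"Total carbon fixed (arbitrary units): {carbon_fixed.sum():.0f}")
```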

A 2008-2100 simulation implementing this maxed-out fertilization was forced with CO2 emissions data from the A2 scenario; an otherwise identical A2 simulation, without the fertilization, acted as a control. Geoengineering modelling is strange that way, because there are multiple definitions of “control run”: a non-geoengineered climate that is allowed to warm unabated, as well as preindustrial conditions (the usual definition in climate modelling).

Without any geoengineering, atmospheric CO2 reached 965 ppm by 2100. With the maximum amount of iron fertilization possible, these levels only fell to 833 ppm. The mitigation of ocean acidification was also quite modest: the sea surface pH in 2100 was 7.74 without geoengineering, and 7.80 with. Given the potential side effects of iron fertilization, is such a small improvement worth the trouble?

Unfortunately, the ocean acidification doesn’t end there. Although the problem was lessened somewhat at the surface, deeper layers in the ocean actually became more acidic. There was less CO2 being gradually mixed in from the atmosphere, but another source of dissolved carbon appeared: as the phytoplankton died and sank, they decomposed a little bit and released enough CO2 to cause a net decrease in pH compared to the control run.

In the diagram below, compare the first row (A2 control run) to the second (A2 with iron fertilization). The more red the contours are, the more acidic that layer of the ocean is with respect to preindustrial conditions. The third row contains data from another simulation in which emissions were allowed to increase just enough to offset sequestration by phytoplankton, leading to the same CO2 concentrations as the control run. The general pattern – iron fertilization reduces some acidity at the surface, but increases it at depth – is clear.

[Figure: depth vs. latitude at 2100 (left); depth vs. time (right)]

The more I read about geoengineering, the more I realize how poor the associated cost-benefit ratios might be. The oft-repeated assertion is true: the easiest way to prevent further climate change is, by a long shot, to simply reduce our emissions.

Feeding the Phytoplankton

While many forms of geoengineering involve counteracting global warming with induced cooling, others move closer to the source of the problem and target the CO2 increase. By artificially boosting the strength of natural carbon sinks, it might be possible to suck CO2 emissions right out of the air. Currently around 30% of human emissions are absorbed by these sinks; if that fraction could be pushed above 100%, atmospheric CO2 concentrations would decline.

One of the most prominent proposals for carbon sink enhancement involves enlisting phytoplankton, photosynthetic organisms in the ocean which take the carbon out of carbon dioxide and use it to build their bodies. When nutrients are abundant, phytoplankton populations explode and create massive blue or green blooms visible from space. Very few animals enjoy eating these organisms, so they just float there for a while. Then they run out of nutrients, die, and sink to the bottom of the ocean, taking the carbon with them.

Phytoplankton blooms are a massive carbon sink, but they still can’t keep up with human emissions. This is because CO2 is not the limiting factor for their growth. In many parts of the ocean, the limiting factor is actually iron. So this geoengineering proposal, often known as “iron fertilization”, involves dumping iron compounds into the ocean and letting the phytoplankton go to work.
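The “limiting factor” idea is essentially Liebig’s law of the minimum: growth is set by whichever requirement is scarcest relative to need. A minimal sketch, with entirely invented numbers:

```python
# Liebig's law of the minimum: phytoplankton growth is set by the scarcest
# resource relative to what the organisms need. All numbers are invented.
available = {"light": 1.0, "co2": 1.0, "nitrate": 0.8, "phosphate": 0.6, "iron": 0.05}

limiting_factor = min(available, key=available.get)
growth_rate = available[limiting_factor]        # relative growth rate, between 0 and 1

print(f"Limiting factor: {limiting_factor}, relative growth rate: {growth_rate}")
# Adding iron ("iron fertilization") raises this minimum until something else,
# such as phosphate, becomes the new limiting factor.
```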

A recent study from Germany (see also the Nature news article) tested out this proposal on a small scale. The Southern Ocean, which surrounds Antarctica, was the location of their field tests, since it has a strong circumpolar current that kept the iron contained. After adding several tonnes of iron sulphate, the research ship tracked the phytoplankton as they bloomed, died, and sank.

Measurements showed that at least half of the phytoplankton sank below 1 km after they died, and “a substantial portion is likely to have reached the sea floor”. At this depth, which is below the mixed layer of the ocean, the water won’t be exposed to the atmosphere for centuries. The carbon from the phytoplankton’s bodies is safely stored away, without the danger of CO2 leakage that carbon capture and storage presents. Unlike in previous studies, the researchers were able to show that iron fertilization could be effective.

However, there are other potential side effects of large-scale iron fertilization. We don’t know what the impacts of so much iron might be on other marine life. Coating the sea surface with phytoplankton would block light from entering the mixed layer, decreasing photosynthesis in aquatic plants and possibly leading to oxygen depletion or “dead zones”. It’s also possible that toxic species of algae would get hold of the nutrients and create poisonous blooms. On the other hand, iron fertilization would lessen the negative impacts of ocean acidification from high levels of CO2, a problem that solar radiation-based forms of geoengineering don’t address at all.

Evidently, the safest way to fix the global warming problem is to stop burning fossil fuels. Most scientists agree that geoengineering should be a last resort, an emergency measure to pull out if the Greenland ice sheet is about to go, rather than an excuse for nations to continue burning coal. And some scientists, myself included, fully expect that geoengineering will be necessary one day, so we might as well figure out the safest approach.

How do climate models work?

Also published at Skeptical Science

This is a climate model:

T = [(1-α)S/(4εσ)]^(1/4)

(T is temperature, α is the albedo, S is the incoming solar radiation, ε is the emissivity, and σ is the Stefan-Boltzmann constant)

An extremely simplified climate model, that is. It’s one line long, and is at the heart of every computer model of global warming. Using basic thermodynamics, it calculates the temperature of the Earth based on incoming sunlight and the reflectivity of the surface. The model is zero-dimensional, treating the Earth as a single point in equilibrium. It doesn’t consider the greenhouse effect, ocean currents, nutrient cycles, volcanoes, or pollution.
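Written out in code, the whole model really is just a couple of lines. In this sketch the albedo, solar constant, and effective emissivity (which crudely stands in for the greenhouse effect) are typical textbook values, not numbers taken from any particular model.

```python
# Zero-dimensional energy balance model: T = [(1 - alpha) * S / (4 * epsilon * sigma)]^(1/4)

SIGMA = 5.670e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)

def equilibrium_temperature(albedo=0.3, solar=1361.0, emissivity=0.61):
    """Return the equilibrium surface temperature in kelvin.

    albedo     -- fraction of incoming sunlight reflected back to space
    solar      -- incoming solar radiation at the top of the atmosphere (W m^-2)
    emissivity -- effective emissivity, a crude stand-in for the greenhouse effect
    """
    return ((1 - albedo) * solar / (4 * emissivity * SIGMA)) ** 0.25

print(equilibrium_temperature())   # roughly 288 K, close to the observed global mean
```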

If you fix these deficiencies, the model becomes more and more complex. You have to derive many variables from physical laws, and use empirical data to approximate certain values. You have to repeat the calculations over and over for different parts of the Earth. Eventually the model is too complex to solve using pencil, paper and a pocket calculator. It’s necessary to program the equations into a computer, and that’s what climate scientists have been doing ever since computers were invented.

A pixellated Earth

Today’s most sophisticated climate models are called GCMs, which stands for General Circulation Model or Global Climate Model, depending on who you talk to. On average, they are about 500 000 lines of computer code long, and mainly written in Fortran, a scientific programming language. Despite the huge jump in complexity, GCMs have much in common with the one-line climate model above: they’re just a lot of basic physics equations put together.

Computers are great for doing a lot of calculations very quickly, but they have a disadvantage: computers are discrete, while the real world is continuous. To understand the term “discrete”, think about a digital photo. It’s composed of a finite number of pixels, which you can see if you zoom in far enough. The existence of these indivisible pixels, with clear boundaries between them, makes digital photos discrete. But the real world doesn’t work this way. If you look at the subject of your photo with your own eyes, it’s not pixellated, no matter how close you get – even if you look at it through a microscope. The real world is continuous (unless you’re working at the quantum level!).

Similarly, the surface of the world isn’t actually split up into three-dimensional cells (you can think of them as cubes, even though they’re usually wedge-shaped) where every climate variable – temperature, pressure, precipitation, clouds – is exactly the same everywhere in that cell. Unfortunately, that’s how scientists have to represent the world in climate models, because that’s the only way computers work. The same strategy is used for the fourth dimension, time, with discrete “timesteps” in the model, indicating how often calculations are repeated.

It would be fine if the cells could be really tiny – like a high-resolution digital photo that looks continuous even though it’s discrete – but doing calculations on cells that small would take so much computer power that the model would run slower than real time. As it is, the cubes are on the order of 100 km wide in most GCMs, and timesteps are on the order of hours to minutes, depending on the calculation. That might seem huge, but it’s about as good as you can get on today’s supercomputers. Remember that doubling the resolution of the model won’t just double the running time – instead, the running time will increase by a factor of about sixteen (one doubling of the work for each of the three spatial dimensions, plus the shorter timestep needed to keep the calculations stable).
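To make the pixellated-Earth idea concrete, here is a deliberately toy sketch of a gridded field being stepped forward in time. The grid sizes, the timestep, and the trivial “physics” update are placeholders, not anything a real GCM does.

```python
import numpy as np

# Toy "pixellated Earth": a 3-D grid of cells (latitude x longitude x vertical level),
# each holding a single value of one climate variable, stepped forward in time.
n_lat, n_lon, n_lev = 90, 180, 30                       # illustrative grid only
temperature = np.full((n_lat, n_lon, n_lev), 288.0)     # start every cell at 288 K

dt = 1800.0          # timestep in seconds (30 minutes)
n_steps = 48         # simulate one day

for step in range(n_steps):
    # A real GCM would solve fluid dynamics and radiation here; this placeholder
    # just nudges every cell toward its eastward neighbour (a crude diffusion,
    # wrapping around in longitude).
    temperature += 0.01 * (np.roll(temperature, 1, axis=1) - temperature)

# Doubling the resolution doubles the work in each of the three spatial dimensions
# and also requires a shorter timestep, so the cost grows by roughly 2**4 = 16.
```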

Despite the seemingly enormous computer power available to us today, GCMs have always been limited by it. In fact, early computers were developed, in large part, to facilitate atmospheric models for weather and climate prediction.

Cracking the code

A climate model is actually a collection of models – typically an atmosphere model, an ocean model, a land model, and a sea ice model. Some GCMs split up the sub-models (let’s call them components) a bit differently, but that’s the most common arrangement.

Each component represents a staggering amount of complex, specialized processes. Here are just a few examples from the Community Earth System Model, developed at the National Center for Atmospheric Research in Boulder, Colorado:

  • Atmosphere: sea salt suspended in the air, three-dimensional wind velocity, the wavelengths of incoming sunlight
  • Ocean: phytoplankton, the iron cycle, the movement of tides
  • Land: soil hydrology, forest fires, air conditioning in cities
  • Sea Ice: pollution trapped within the ice, melt ponds, the age of different parts of the ice

Each component is developed independently, and as a result, they are highly encapsulated (bundled separately in the source code). However, the real world is not encapsulated – the land and ocean and air are very interconnected. Some central code is necessary to tie everything together. This piece of code is called the coupler, and it has two main purposes:

  1. Pass data between the components. This can get complicated if the components don’t all use the same grid (system of splitting the Earth up into cells).
  2. Control the main loop, or “time stepping loop”, which tells the components to perform their calculations in a certain order, once per time step.
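A minimal sketch of what a coupler’s main loop might look like, assuming made-up component objects with a step method; real couplers also handle regridding, restarts, and much more:

```python
# Hypothetical, highly simplified coupler loop. Component names and methods are
# invented for illustration and do not correspond to any real GCM's code.

class Component:
    def __init__(self, name):
        self.name = name
        self.surface_fields = {}     # data this component exposes to the others

    def step(self, inputs):
        # A real component would run its physics here; this one just records
        # which fields it received, plus one field of its own.
        self.surface_fields = {"from_" + self.name: 0.0, **inputs}

atmosphere, ocean, land, sea_ice = (Component(n) for n in ("atm", "ocn", "lnd", "ice"))

n_steps = 24
for step in range(n_steps):
    # 1. Pass data between components (regridding would happen here if the
    #    components used different grids).
    atm_inputs = {**ocean.surface_fields, **land.surface_fields, **sea_ice.surface_fields}

    # 2. Control the time-stepping loop: call each component once per timestep,
    #    in a fixed order.
    atmosphere.step(atm_inputs)
    ocean.step(atmosphere.surface_fields)
    land.step(atmosphere.surface_fields)
    sea_ice.step(ocean.surface_fields)
```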

For example, take a look at the IPSL (Institut Pierre Simon Laplace) climate model architecture. In the diagram below, each bubble represents an encapsulated piece of code, and the number of lines in this code is roughly proportional to the bubble’s area. Arrows represent data transfer, and the colour of each arrow shows where the data originated:

We can see that IPSL’s major components are atmosphere, land, and ocean (which also contains sea ice). The atmosphere is the most complex model, and land is the least. While both the atmosphere and the ocean use the coupler for data transfer, the land model does not – it’s simpler just to connect it directly to the atmosphere, since it uses the same grid, and doesn’t have to share much data with any other component. Land-ocean interactions are limited to surface runoff and coastal erosion, which are passed through the atmosphere in this model.

You can see diagrams like this for seven different GCMs, as well as a comparison of their different approaches to software architecture, in this summary of my research.

Show time

When it’s time to run the model, you might expect that scientists initialize the components with data collected from the real world. Actually, it’s more convenient to “spin up” the model: start with a dark, stationary Earth, turn the Sun on, start the Earth spinning, and wait until the atmosphere and ocean settle down into equilibrium. The resulting data fits perfectly into the cells, and matches up well with observations: it stays within the bounds of the real climate, and could easily pass for real weather.
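To see what a spin-up means in practice, here is a hedged sketch using the simple zero-dimensional model from earlier: start the planet absurdly cold, apply the incoming sunlight, and integrate until the temperature stops changing. The heat capacity and convergence threshold are arbitrary illustrative numbers.

```python
SIGMA = 5.670e-8       # Stefan-Boltzmann constant (W m^-2 K^-4)
ALBEDO, SOLAR, EMISSIVITY = 0.3, 1361.0, 0.61
HEAT_CAPACITY = 4.0e8  # rough heat capacity of a 100 m ocean mixed layer (J m^-2 K^-1)
DT = 86400.0           # one-day timestep in seconds

T = 100.0              # "dark, stationary Earth": start absurdly cold
while True:
    absorbed = (1 - ALBEDO) * SOLAR / 4
    emitted = EMISSIVITY * SIGMA * T ** 4
    dT = (absorbed - emitted) * DT / HEAT_CAPACITY
    T += dT
    if abs(dT) < 1e-6:             # equilibrium: energy in balances energy out
        break

print(f"Spun-up temperature: {T:.1f} K")   # settles near the ~288 K equilibrium
```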

Scientists feed input files into the model, which contain the values of certain parameters, particularly agents that can cause climate change. These include the concentration of greenhouse gases, the intensity of sunlight, the amount of deforestation, and volcanoes that should erupt during the simulation. It’s also possible to give the model a different map to change the arrangement of continents. Through these input files, it’s possible to recreate the climate from just about any period of the Earth’s lifespan: the Jurassic Period, the last Ice Age, the present day…and even what the future might look like, depending on what we do (or don’t do) about global warming.
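The contents of such an input file might look something like the sketch below; the parameter names and values are invented for illustration and don’t follow any particular model’s format (real GCMs often use Fortran namelists or XML for this).

```python
# Hypothetical input parameters for a single simulation (illustrative only).
experiment_settings = {
    "co2_ppm": 280.0,              # preindustrial greenhouse gas concentration
    "solar_constant": 1361.0,      # W m^-2
    "volcanic_eruptions": [],      # list of (year, aerosol loading) events
    "land_use_map": "preindustrial",
    "start_year": 1850,
    "end_year": 1950,
}
```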

The highest resolution GCMs, on the fastest supercomputers, can simulate about 1 year for every day of real time. If you’re willing to sacrifice some complexity and go down to a lower resolution, you can speed things up considerably, and simulate millennia of climate change in a reasonable amount of time. For this reason, it’s useful to have a hierarchy of climate models with varying degrees of complexity.

As the model runs, every cell outputs the values of different variables (such as atmospheric pressure, ocean salinity, or forest cover) into a file, once per time step. The model can average these variables based on space and time, and calculate changes in the data. When the model is finished running, visualization software converts the rows and columns of numbers into more digestible maps and graphs. For example, this model output shows temperature change over the next century, depending on how many greenhouse gases we emit.
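The averaging step itself is straightforward. Here is a sketch of that kind of post-processing, assuming the output has already been read into a NumPy array; real model output usually arrives as netCDF files, and the data here is a placeholder.

```python
import numpy as np

# Hypothetical model output: surface temperature with dimensions (time, lat, lon),
# e.g. monthly output for 100 years. The values are random placeholders.
n_time, n_lat, n_lon = 1200, 90, 180
surface_temp = 288.0 + np.random.randn(n_time, n_lat, n_lon)

# Average over space to get a global-mean time series (a real analysis would
# weight each cell by its area, which shrinks toward the poles)...
global_mean = surface_temp.mean(axis=(1, 2))

# ...and over time, to compare the first and last decade of the run.
first_decade = surface_temp[:120].mean(axis=0)
last_decade = surface_temp[-120:].mean(axis=0)
warming_map = last_decade - first_decade     # ready to hand to a plotting library
```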

Predicting the past

So how do we know the models are working? Should we trust the predictions they make for the future? It’s not reasonable to wait for a hundred years to see if the predictions come true, so scientists have come up with a different test: tell the models to predict the past. For example, give the model the observed conditions of the year 1900, run it forward to 2000, and see if the climate it recreates matches up with observations from the real world.

This 20th-century run is one of many standard tests to verify that a GCM can accurately mimic the real world. It’s also common to recreate the last ice age, and compare the output to data from ice cores. While GCMs can travel even further back in time – for example, to recreate the climate that dinosaurs experienced – proxy data is so sparse and uncertain that you can’t really test these simulations. In fact, much of the scientific knowledge about pre-Ice Age climates actually comes from models!
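The comparison at the heart of such a test can be as simple as lining up the simulated and observed global-mean temperatures and measuring how far apart they are. In this sketch both time series are invented placeholders standing in for real observations and a real hindcast.

```python
import numpy as np

# Placeholder annual global-mean temperature anomalies for 1900-2000 (degrees C).
# In a real evaluation these would come from observations and from the model's
# 20th-century hindcast.
years = np.arange(1900, 2001)
observed = 0.007 * (years - 1900) + 0.1 * np.sin((years - 1900) / 8.0)
simulated = 0.0065 * (years - 1900) + 0.05 * np.random.randn(years.size)

bias = float(np.mean(simulated - observed))
rmse = float(np.sqrt(np.mean((simulated - observed) ** 2)))
print(f"Mean bias: {bias:+.3f} C, RMSE: {rmse:.3f} C")
```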

Climate models aren’t perfect, but they are doing remarkably well. They pass the tests of predicting the past, and go even further. For example, scientists don’t know what causes El Niño, a phenomenon in the Pacific Ocean that affects weather worldwide. There are some hypotheses on what oceanic conditions can lead to an El Niño event, but nobody knows what the actual trigger is. Consequently, there’s no way to program El Niños into a GCM. But they show up anyway – the models spontaneously generate their own El Niños, somehow using the basic principles of fluid dynamics to simulate a phenomenon that remains fundamentally mysterious to us.

In some areas, the models are having trouble. Certain wind currents are notoriously difficult to simulate, and calculating regional climates requires an unaffordably high resolution. Phenomena that scientists can’t yet quantify, like the processes by which glaciers melt, or the self-reinforcing cycles of thawing permafrost, are also poorly represented. However, not knowing everything about the climate doesn’t mean scientists know nothing. Incomplete knowledge does not imply nonexistent knowledge – you don’t need to understand calculus to be able to say with confidence that 9 x 3 = 27.

Also, history has shown us that when climate models make mistakes, they tend to be too stable, and underestimate the potential for abrupt changes. Take the Arctic sea ice: just a few years ago, GCMs were predicting it would completely melt around 2100. Now, the estimate has been revised to 2030, as the ice melts faster than anyone anticipated.

Answering the big questions

At the end of the day, GCMs are the best prediction tools we have. If they all agree on an outcome, it would be silly to bet against them. However, the big questions, like “Is human activity warming the planet?”, don’t even require a model. The only things you need to answer those questions are a few fundamental physics and chemistry equations that we’ve known for over a century.

You could take climate models right out of the picture, and the answer wouldn’t change. Scientists would still be telling us that the Earth is warming, humans are causing it, and the consequences will likely be severe – unless we take action to stop it.

“It’s Just a Natural Cycle”

My second rebuttal for Skeptical Science. Thanks to all the folks who helped to review it! Further suggestions are welcome, as always. -Kate

“What if global warming is just a natural cycle?” This is perhaps one of the arguments most commonly raised by the average person, rather than by someone who makes a career out of denying climate change. Cyclical variations in climate are well known to the public; we all studied the ice ages in school. However, climate isn’t inherently cyclical.

A common misunderstanding of the climate system pictures it as a pendulum: the planet will warm up to “cancel out” a previous period of cooling, driven by some internal drive toward equilibrium. This view of the climate is incorrect. Internal variability moves energy between the ocean and the atmosphere, causing short-term warming and cooling of the surface in events such as El Niño and La Niña, and longer-term changes when similar cycles operate on decadal scales. However, internal forces do not cause climate change. Appreciable changes in climate are the result of changes in the energy balance of the Earth, which require “external” forcings, such as changes in solar output, albedo, and atmospheric greenhouse gases. These forcings can be cyclical, as they are in the ice ages, but they can also come in entirely different shapes.

For this reason, “it’s just a natural cycle” is a bit of a cop-out argument. The Earth doesn’t warm up because it feels like it. It warms up because something forces it to. Scientists keep track of natural forcings, but the observed warming of the planet over the second half of the 20th century can only be explained by adding in anthropogenic radiative forcings, namely increases in greenhouse gases such as carbon dioxide.
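For reference, the standard simplified expression for the radiative forcing from a CO2 change is ΔF ≈ 5.35 ln(C/C₀) W/m², and multiplying by a climate sensitivity parameter gives a rough estimate of the eventual warming. The sensitivity value in this sketch is a commonly quoted round number, not a precise result.

```python
import math

def co2_forcing(c_now, c_preindustrial=280.0):
    """Radiative forcing (W m^-2) from a CO2 change, using the standard
    simplified logarithmic expression."""
    return 5.35 * math.log(c_now / c_preindustrial)

forcing = co2_forcing(560.0)        # a doubling of CO2: about 3.7 W m^-2
climate_sensitivity = 0.8           # K per (W m^-2), an illustrative round value
print(f"Doubled CO2: {forcing:.1f} W m^-2, ~{forcing * climate_sensitivity:.1f} K of warming")
```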

Of course, it’s always possible that some natural cycle exists, unknown to scientists and their instruments, that is currently causing the planet to warm. There’s always a chance that we could be totally wrong. This omnipresent fact of science is called irreducible uncertainty, because it can never be entirely eliminated. However, it’s very unlikely that such a cycle exists.

Firstly, the hypothetical natural cycle would have to explain the observed “fingerprints” of greenhouse gas-induced warming. Even if, for the sake of argument, we were to discount the direct measurements showing an increased greenhouse effect, other lines of evidence point to anthropogenic causes. For example, the troposphere (the lowest part of the atmosphere) is warming, but the levels above, from the stratosphere up, are cooling, as less radiation is escaping out to space. This rules out cycles related to the Sun, as solar influences would warm the entire atmosphere in a uniform fashion. The only explanation that makes sense is greenhouse gases.

What about an internal cycle, perhaps from volcanoes or the ocean, that releases massive amounts of greenhouse gases? This wouldn’t make sense either, not only because scientists keep track of volcanic and oceanic emissions of CO2 and know that they are small compared to anthropogenic emissions, but also because CO2 from fossil fuels has its own fingerprints. Its isotopic signature is depleted in the carbon-13 isotope, which explains why the atmospheric ratio of carbon-13 to carbon-12 has been going down as anthropogenic carbon dioxide goes up. Additionally, atmospheric oxygen (O2) is decreasing at the same rate that CO2 is increasing, because oxygen is consumed when fossil fuels combust.

A natural cycle that fits all these fingerprints is nearly unfathomable. However, that’s not all the cycle would have to explain. It would also have to tell us why anthropogenic greenhouse gases are not having an effect. Either a century of basic physics and chemistry studying the radiative properties of greenhouse gases would have to be proven wrong, or the natural cycle would have to be unbelievably complex to prevent such dramatic anthropogenic emissions from warming the planet.

It is indeed possible that multidecadal climate variability, especially cycles originating in the Atlantic, could be contributing to recent warming, particularly in the Arctic. However, the amplitude of the cycles simply can’t explain the observed temperature change. Internal variability has always been superimposed on top of global surface temperature trends, but the magnitude – as well as the fingerprints – of current warming clearly indicates that anthropogenic greenhouse gases are the dominant factor.

Despite all these lines of evidence, many known climatic cycles are often trumpeted to be the real cause, on the Internet and in the media. Many of these cycles have been debunked on Skeptical Science, and all of them either aren’t in the warming phases, don’t fit the fingerprints, or both.

For example, we are warming far too fast to be coming out of the last ice age, and the Milankovitch cycles that drive glaciation show that we should be, in fact, very slowly going into a new ice age (but anthropogenic warming is virtually certain to offset that influence).

The “1500-year cycle” that S. Fred Singer attributes warming to is, in fact, a change in distribution of thermal energy between the poles, not a net increase in global temperature, which is what we observe now.

The Little Ice Age following the Medieval Warm Period ended due to a slight increase in solar output (changes in both thermohaline circulation and volcanic activity also contributed), but that increase has since reversed, and global temperature and solar activity are now going in opposite directions. This also explains why the 11-year solar cycle could not be causing global warming.

ENSO (the El Niño Southern Oscillation) and the PDO (Pacific Decadal Oscillation) help to explain short-term variations, but have no long-term trend, warming or otherwise. Additionally, these cycles simply move thermal energy between the ocean and the atmosphere, and do not change the energy balance of the Earth.

As we can see, “it’s just a natural cycle” isn’t just a cop-out argument – it’s something that scientists have considered, studied, and ruled out long before you and I even knew what global warming was.