Archive for the ‘Science Lessons’ Category

Here in the northern mid-latitudes (much of Canada and the US, Europe, and the northern half of Asia) our weather is governed by the jet stream. This high-altitude wind current, flowing rapidly from west to east, separates cold Arctic air (to the north) from warmer temperate air (to the south). So on a given day, if you’re north of the jet stream, the weather will probably be cold; if you’re to the south, it will probably be warm; and if the jet stream is passing over you, you’re likely to get rain or snow.

The jet stream isn’t straight, though; it’s rather wavy in the north-south direction, with peaks and troughs. So it’s entirely possible for Calgary to experience a cold spell (sitting in a trough of the jet stream) while Winnipeg, almost directly to the east, has a heat wave (sitting in a peak). The farther north and south these peaks and troughs extend, the more extreme these temperature anomalies tend to be.

Sometimes a large peak or trough will hang around for weeks on end, held in place by certain air pressure patterns. This phenomenon is known as “blocking”, and is often associated with extreme weather. For example, the 2010 heat wave in Russia coincided with a large, stationary, long-lived peak in the polar jet stream. Wildfires, heat stroke, and crop failure ensued. Not a pretty picture.

As climate change adds more energy to the atmosphere, it would be naive to expect all the wind currents to stay exactly the same. Predicting the changes is a complicated business, but a recent study by Jennifer Francis and Stephen Vavrus made headway on the polar jet stream. Using North American and North Atlantic atmospheric reanalyses (model runs continually corrected with observations, rather than left to evolve freely) from 1979-2010, they found that Arctic amplification – the faster rate at which the Arctic warms compared to the rest of the world – makes the jet stream slower and wavier. As a result, blocking events become more likely.

Arctic amplification occurs because of the ice-albedo effect: there is more snow and ice available in the Arctic to melt, which decreases the albedo of the region and lets it absorb more sunlight. (Much of Antarctica doesn’t show faster-than-average warming, because the strong circumpolar winds and ocean currents surrounding the continent isolate it and give it a great deal of thermal inertia.) This amplification is particularly strong in autumn and winter.

Now, remember that pressure decreases with height, and that how quickly it decreases depends on temperature: warm air is less dense, so pressure falls off with height more slowly. Warming a region will therefore increase the height at which pressure falls to 500 hPa. (That is, it will raise the 500 hPa “ceiling”.) Below that, the 1000 hPa ceiling doesn’t rise very much, because surface pressure doesn’t usually go much above 1000 hPa anyway. So, in total, the layer of atmosphere between 1000 and 500 hPa becomes thicker as a result of warming.
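To put a rough number on that, here is the standard hypsometric relation – a textbook sketch of my own, not an equation from the study. The thickness of the 1000-500 hPa layer is proportional to its average temperature:

$$\Delta z = z_{500} - z_{1000} = \frac{R_d \, \bar{T}}{g} \ln\!\left(\frac{1000\ \mathrm{hPa}}{500\ \mathrm{hPa}}\right) \approx 20\ \mathrm{m\,K^{-1}} \times \bar{T},$$

where $R_d \approx 287\ \mathrm{J\,kg^{-1}\,K^{-1}}$ is the gas constant for dry air and $g \approx 9.8\ \mathrm{m\,s^{-2}}$. Every degree of column-average warming adds roughly 20 metres of thickness.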

Since the Arctic is warming faster than the mid-latitudes to the south, the temperature difference between these two regions is shrinking, and so is the difference in their 1000-500 hPa thickness. Run this through a lot of complicated physics equations and two main effects fall out:

  1. Winds in the east-west direction (including the jet stream) travel more slowly.
  2. Peaks of the jet stream are pulled farther north, making the current wavier.
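The first effect can be sketched with the standard thermal wind relation (again, a rough addition of my own, not an equation from the paper): the difference in westerly wind between the 1000 and 500 hPa levels is proportional to the north-south gradient of the layer thickness,

$$u_{500} - u_{1000} \approx -\frac{g}{f}\,\frac{\partial (\Delta z)}{\partial y},$$

where $f$ is the Coriolis parameter and $y$ points north. Normally the thickness drops sharply toward the pole, which is what drives the strong westerly jet aloft; when Arctic amplification weakens that poleward drop, the upper-level westerlies weaken with it.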

These two effects also reinforce each other: slow jet streams tend to be wavier, and wavy jet streams tend to travel more slowly. The correlation between relative 1000-500 hPa thickness and these two effects is not statistically significant in spring, but it is in the other three seasons. Finally, melting sea ice and declining snow cover on land are well correlated with relative 1000-500 hPa thickness, which makes sense because these changes are the drivers of Arctic amplification.

Consequently, there is now data to back up the hypothesis that climate change is causing more extreme fall and winter weather in the mid-latitudes, and in both directions: unusual cold as well as unusual heat. Saying that global warming can cause regional cold spells is not a nefarious move by climate scientists in an attempt to make every possible outcome support their theory, as some paranoid pundits have claimed. Rather, it is another step in our understanding of a complex, non-linear system with high regional variability.

Many recent events, such as record snowfalls in the US during the winters of 2009-10 and 2010-11, are consistent with this mechanism – they coincided with blocking patterns in the jet stream at times when Arctic amplification was particularly strong. They may or may not have happened anyway, had climate change not been in the picture. However, if this hypothesis endures, we can expect more extreme weather from all sides – hotter, colder, wetter, drier – as climate change continues. Don’t throw away your snow shovels just yet.

Read Full Post »

Lately I have been reading a lot about the Paleocene-Eocene Thermal Maximum, or PETM, which is my favourite paleoclimatic event (is it weird to have a favourite?). This episode of rapid global warming 55 million years ago is particularly relevant to our situation today, because it was clearly caused by greenhouse gases. Unfortunately, the rest of the story is far less clear.

Paleocene mammals

The PETM happened about 10 million years after the extinction that killed the dinosaurs. The Age of Mammals was well underway, although humans wouldn’t appear in any form for another fifty million years or so. The world was already several degrees warmer than today’s conditions, sea levels would have been higher, and there were probably no polar ice caps.

Then, over several thousand years, the world warmed by between 5 and 8°C. It seems to have happened in a few bursts, against a background of slower temperature increase. Even the deep ocean, usually a very stable thermal environment, warmed by at least 5°C. It took around a hundred thousand years for the climate system to recover.

Such rapid global warming hasn’t been seen since, although it’s possible (probable?) that human-caused warming will surpass this rate, if it hasn’t already. It is particularly troubling to realize that our species has never before experienced an event like the one we’re causing today. The climate has changed before, but humans generally weren’t there to see it.

The PETM is marked in the geological record by a sudden jump in the amount of “light” carbon in the climate system. Carbon comes in different isotopes, two of which are most important for climate analysis: carbon with 7 neutrons (13C), and carbon with 6 neutrons (12C). Different carbon cycle processes sequester these forms of carbon in different amounts. Biological processes like photosynthesis preferentially take 12C out of the air in the form of CO2, while geological processes like subduction of the Earth’s crust take anything that’s part of the rock. When the carbon comes back up, the ratios of 12C to 13C are preserved: emissions from the burning of fossil fuels, for example, are relatively “light” because they originated from the tissues of living organisms; emissions from volcanoes are more or less “normal” because they came from molten crust that was once the ocean floor.
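For reference, this “lightness” is usually measured with the δ13C notation – a standard definition, added here for completeness rather than taken from any particular paper. It is the 13C/12C ratio of a sample relative to a reference standard, expressed in parts per thousand (‰):

$$\delta^{13}\mathrm{C} = \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1\right) \times 1000$$

Carbon that is enriched in 12C has a more negative δ13C, so the PETM shows up in sediment cores as a sudden negative excursion of a few per mil.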

In order to explain the isotopic signature of the PETM, you need to add to the climate system either a massive amount of carbon that’s somewhat enriched in light carbon, or a smaller amount of carbon that’s extremely enriched in light carbon, or (most likely) something in the middle. The carbon came in the form of CO2, or possibly CH4 that soon oxidized to form CO2. That, in turn, almost certainly caused the warming.

There was a lot of warming, though, so there must have been a great deal of carbon. We don’t know exactly how much, because the warming power of CO2 depends on how much is already present in the atmosphere, and estimates for initial CO2 concentration during the PETM vary wildly. However, the carbon injection was probably something like 5 trillion tonnes. This is comparable to the amount of carbon we could emit today from burning all our fossil fuel reserves. That’s a heck of a lot of carbon, and what nobody can figure out is where it all came from.
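Here is a back-of-the-envelope version of that trade-off between the amount of carbon and its isotopic signature – a toy two-reservoir mixing calculation, with rough illustrative numbers of my own rather than figures from the referenced papers:

```python
# Toy isotopic mass balance: how much carbon must be added to produce a given
# d13C excursion, for sources with different isotopic signatures?
# All numbers are rough and for illustration only.

M_initial = 50_000     # GtC: approximate size of the ocean-atmosphere carbon pool
d_initial = 0.0        # permil: background d13C of that pool (roughly)
excursion = -3.0       # permil: approximate size of the PETM carbon isotope excursion

sources = {
    "methane hydrates": -60.0,   # permil: extremely enriched in light carbon
    "organic carbon":   -25.0,   # permil: moderately enriched
    "volcanic CO2":      -5.0,   # permil: close to "normal"
}

for name, d_source in sources.items():
    # Mixing: excursion = M_added * (d_source - d_initial) / (M_initial + M_added)
    M_added = excursion * M_initial / (d_source - d_initial - excursion)
    print(f"{name:>16}: roughly {M_added:,.0f} GtC needed")
```

The lighter the source, the less of it you need – which is why a mid-range answer lands in the same ballpark as the few trillion tonnes mentioned above, and why “normal” volcanic carbon alone struggles to explain the excursion.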

Arguably the most popular hypothesis is methane hydrates. On continental shelves, methane gas (CH4) is frozen into the ocean floor: microscopic cages of ice each trap a single molecule of methane, and when the ice melts the methane is released and bubbles up to the surface. Today there are about 10 trillion tonnes of carbon stored in methane hydrates. At the time of the PETM the levels were lower, but nobody is sure by how much.

The characteristics of methane hydrates make them an appealing explanation for the PETM. They are highly enriched in 12C, so a relatively small amount would be needed to cause the isotopic shift. They discharge rapidly and build back up slowly, mirroring the sudden onset and slow recovery of the PETM. The main problem with the methane hydrate hypothesis is that there might not have been enough of them to account for the warming observed in the fossil record.

However, remember that in order to release their carbon, methane hydrates must first warm up enough to melt. So some other agent could have started the warming, which then triggered the methane release and the sudden bursts of warming. There is no geological evidence for any particular source – everything is speculative, except for the fact that something spat out all this CO2.

Magnified foraminifera

Don’t forget that where there is greenhouse warming, there is ocean acidification. The ocean is great at soaking up CO2, but this comes at a cost to organisms that build shells out of calcium carbonate (CaCO3, the same chemical that makes up chalk). CO2 in the water forms carbonic acid, which starts to dissolve their shells. Likely for this reason, the PETM caused a mass extinction of benthic foraminifera (foraminifera = microscopic organisms with CaCO3 shells; benthic = living on the ocean floor).
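The chemistry behind that sentence, sketched here for reference (textbook reactions, not something specific to these papers): dissolved CO2 forms carbonic acid, which releases hydrogen ions, and the extra hydrogen ions attack carbonate shells:

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}$$

$$\mathrm{CaCO_3 + H^+ \rightleftharpoons Ca^{2+} + HCO_3^-}$$

More CO2 means more H+, a lower pH, and water that is more corrosive to calcium carbonate.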

Apart from the benthic foraminifera, other groups of organisms seemed to do okay. There was a lot of rearranging of habitats – species would disappear in one area but flourish somewhere else – but no mass extinction like the one that killed the dinosaurs. The fossil record can be deceptive in this manner, though, because it only preserves a small number of species. By sheer probability, the most abundant and widespread organisms are the most likely to appear in the fossil record. There could be many organisms that were less common, or that lived in restricted areas, which went extinct without leaving any sign that they ever existed.

Climate modellers really like the PETM, because it’s a historical example of exactly the kind of situation we’re trying to understand using computers. If you add a few trillion tonnes of carbon to the atmosphere in a relatively short period of time, how much does the world warm and what happens to its inhabitants? The PETM ran this experiment for us in the real world, and can give us some idea of what to expect in the centuries to come. If only it had left more data behind for us to discover.

References:
Pagani et al., 2006
Dickens, 2011
McInerney and Wing, 2011

Read Full Post »

During my summer at UVic, two PhD students at the lab (Andrew MacDougall and Chris Avis) as well as my supervisor (Andrew Weaver) wrote a paper modelling the permafrost carbon feedback, which was recently published in Nature Geoscience. I read a draft version of this paper several months ago, and am very excited to finally share it here.

Studying the permafrost carbon feedback is at once exciting (because it has been left out of climate models for so long) and terrifying (because it has the potential to be a real game-changer). There is about twice as much carbon frozen into permafrost as there is floating around in the entire atmosphere. As high CO2 levels cause the world to warm, some of the permafrost will thaw and release this carbon as more CO2 – causing more warming, and so on. Previous climate model simulations involving permafrost have measured the CO2 released during thaw, but haven’t actually applied it to the atmosphere and allowed it to change the climate. This UVic study is the first to close that feedback loop (in climate model speak we call this “fully coupled”).

The permafrost part of the land component was already in place – it was developed for Chris’s PhD thesis, and implemented in a previous paper. It involved converting the existing single-layer soil model to a multi-layer model in which some layers can remain frozen year-round. Also, instead of the four RCP scenarios, the authors used DEPs (Diagnosed Emission Pathways): exactly the same as the RCPs, except that CO2 emissions, rather than concentrations, are given to the model as input. This was necessary so that the extra emissions from permafrost thaw could feed back on the CO2 concentrations the model calculates as it runs.

In these simulations, permafrost added an extra 44, 104, 185, and 279 ppm of CO2 to the atmosphere for DEP 2.6, 4.5, 6.0, and 8.5 respectively. However, the extra warming by 2100 was about the same for each DEP, with central estimates around 0.25 °C. Interestingly, the logarithmic effect of CO2 on climate (adding 10 ppm to the atmosphere causes more warming when the background concentration is 300 ppm than when it is 400 ppm) managed to cancel out the increasing amounts of permafrost thaw. By 2300 the extra warming was more variable, ranging from 0.13 to 1.69 °C when full uncertainty ranges were taken into account. Altering climate sensitivity (by means of an artificial feedback), in particular, had a large effect.
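The logarithmic effect mentioned above can be illustrated with the commonly used simplified forcing expression ΔF ≈ 5.35 ln(C/C0) W/m² – a quick sketch of my own, not a calculation from the paper:

```python
import math

def co2_forcing(c_new, c_old):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al.-style expression)."""
    return 5.35 * math.log(c_new / c_old)

# The same 10 ppm of extra CO2 produces less forcing on a higher background:
print(co2_forcing(310, 300))   # ~0.18 W/m^2
print(co2_forcing(410, 400))   # ~0.13 W/m^2
```

So as the background concentration climbs from DEP 2.6 to DEP 8.5, each additional ppm of permafrost CO2 buys less and less warming – which is roughly why the extra warming by 2100 came out similar across the scenarios.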

As a result of the thawing permafrost, the land switched from a carbon sink (net CO2 absorber) to a carbon source (net CO2 emitter) decades earlier than it would have otherwise – before 2100 for every DEP. The ocean kept absorbing carbon, but in some scenarios the carbon source of the land outweighed the carbon sink of the ocean. That is, even without human emissions, the land was emitting more CO2 than the ocean could soak up. Concentrations kept climbing indefinitely, even if human emissions suddenly dropped to zero. This is the part of the paper that made me want to hide under my desk.

This scenario wasn’t too hard to reach, either – if climate sensitivity was greater than 3°C warming per doubling of CO2 (about a 50% chance, as 3°C is the median estimate by scientists today), and people followed DEP 8.5 to at least 2013 before stopping all emissions (a very intense scenario, but I wouldn’t underestimate our ability to dig up fossil fuels and burn them really fast), permafrost thaw ensured that CO2 concentrations kept rising on their own in a self-sustaining loop. The scenarios didn’t run past 2300, but I’m sure that if you left it long enough the ocean would eventually win and CO2 would start to fall. The ocean always wins in the end, but things can be pretty nasty until then.

As if that weren’t enough, the paper goes on to list a whole bunch of reasons why their values are likely underestimates. For example, they assumed that all emissions from permafrost were CO2, rather than the much stronger greenhouse gas CH4, which is easily produced in oxygen-depleted soil; the UVic model is also known to underestimate Arctic amplification of climate change (how much faster the Arctic warms than the rest of the planet). Most of the uncertainties – and there are many – are in the direction we don’t want, suggesting that the problem will be worse than what we see in the model.

This paper went in my mental “oh shit” folder, because it made me realize that we are starting to lose control over the climate system. No matter what path we follow – even if we manage slightly negative emissions, i.e. artificially removing CO2 from the atmosphere – this model suggests we’ve got an extra 0.25°C in the pipeline due to permafrost. It doesn’t sound like much, but add that to the 0.8°C we’ve already seen, and take technological inertia into account (it’s simply not feasible to stop all emissions overnight), and we’re coming perilously close to the big nonlinearity (i.e. tipping point) that many argue is between 1.5 and 2°C. Take political inertia into account (most governments are nowhere near even creating a plan to reduce emissions), and we’ve long passed it.

Just because we’re probably going to miss the first tipping point, though, doesn’t mean we should throw up our hands and give up. 2°C is bad, but 5°C is awful, and 10°C is unthinkable. The situation can always get worse if we let it, and how irresponsible would it be if we did?

Read Full Post »

On the heels of my last post about iron fertilization of the ocean, I found another interesting paper on the topic. This one, written by Long Cao and Ken Caldeira in 2010, was much less hopeful.

Instead of a small-scale field test, Cao and Caldeira decided to model iron fertilization using the ocean GCM from Lawrence Livermore National Laboratory. To account for uncertainties, they chose to calculate an upper bound on iron fertilization rather than a most likely scenario. That is, they maxed out phytoplankton growth until something else became the limiting factor – in this case, phosphates. On every single cell of the sea surface, the model phytoplankton were programmed to grow until phosphate concentrations were zero.

A 2008-2100 simulation implementing this method was forced with CO2 emissions from the A2 scenario. An otherwise identical A2 simulation, without the iron fertilization, acted as a control. Geoengineering modelling is strange that way, because there are multiple definitions of “control run”: a non-geoengineered climate that is allowed to warm unabated, as well as preindustrial conditions (the usual definition in climate modelling).

Without any geoengineering, atmospheric CO2 reached 965 ppm by 2100. With the maximum amount of iron fertilization possible, these levels only fell to 833 ppm. The mitigation of ocean acidification was also quite modest: the sea surface pH in 2100 was 7.74 without geoengineering, and 7.80 with. Given the potential side effects of iron fertilization, is such a small improvement worth the trouble?
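To put those pH numbers in perspective (a quick back-of-the-envelope calculation of my own, not from the paper): pH is a logarithmic scale, so even a 0.06 improvement corresponds to only a modest change in hydrogen-ion concentration.

```python
# pH = -log10([H+]), so the hydrogen-ion concentration is 10**(-pH).
h_no_geoengineering = 10 ** -7.74   # sea surface pH in 2100, control run
h_fertilized        = 10 ** -7.80   # sea surface pH in 2100, maximum iron fertilization

# ~1.15: the control run has roughly 15% more hydrogen ions than the fertilized run
print(h_no_geoengineering / h_fertilized)
```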

Unfortunately, the ocean acidification doesn’t end there. Although the problem was lessened somewhat at the surface, deeper layers in the ocean actually became more acidic. There was less CO2 being gradually mixed in from the atmosphere, but another source of dissolved carbon appeared: as the phytoplankton died and sank, they decomposed a little bit and released enough CO2 to cause a net decrease in pH compared to the control run.

In the diagram below, compare the first row (A2 control run) to the second (A2 with iron fertilization). The redder the contours, the more acidic that layer of the ocean is with respect to preindustrial conditions. The third row contains data from another simulation in which emissions were allowed to increase just enough to offset sequestration by phytoplankton, leading to the same CO2 concentrations as the control run. The general pattern – iron fertilization reduces some acidity at the surface, but increases it at depth – is clear.

depth vs. latitude at 2100 (left); depth vs. time (right)

The more I read about geoengineering, the more I realize how poor the associated cost-benefit ratios might be. The oft-repeated assertion is true: the easiest way to prevent further climate change is, by a long shot, to simply reduce our emissions.

Read Full Post »

While many forms of geoengineering involve counteracting global warming with induced cooling, others move closer to the source of the problem and target the CO2 increase. By artificially boosting the strength of natural carbon sinks, it might be possible to suck CO2 emissions right out of the air. Currently around 30% of human emissions are absorbed by these sinks; if we could make this metric greater than 100%, atmospheric CO2 concentrations would decline.

One of the most prominent proposals for carbon sink enhancement involves enlisting phytoplankton, photosynthetic organisms in the ocean which take the carbon out of carbon dioxide and use it to build their bodies. When nutrients are abundant, phytoplankton populations explode and create massive blue or green blooms visible from space. Very few animals enjoy eating these organisms, so they just float there for a while. Then they run out of nutrients, die, and sink to the bottom of the ocean, taking the carbon with them.

Phytoplankton blooms are a massive carbon sink, but they still can’t keep up with human emissions. This is because CO2 is not the limiting factor for their growth. In many parts of the ocean, the limiting factor is actually iron. So this geoengineering proposal, often known as “iron fertilization”, involves dumping iron compounds into the ocean and letting the phytoplankton go to work.

A recent study from Germany (see also the Nature news article) tested out this proposal on a small scale. The Southern Ocean, which surrounds Antarctica, was the location of their field tests, since it has a strong circumpolar current that kept the iron contained. After adding several tonnes of iron sulphate, the research ship tracked the phytoplankton as they bloomed, died, and sank.

Measurements showed that at least half of the phytoplankton sank below 1 km after they died, and “a substantial portion is likely to have reached the sea floor”. At this depth, which is below the mixed layer of the ocean, the water won’t be exposed to the atmosphere for centuries. The carbon from the phytoplankton’s bodies is safely stored away, without the danger of CO2 leakage that carbon capture and storage presents. Unlike in previous studies, the researchers were able to show that iron fertilization could be effective.

However, there are other potential side effects of large-scale iron fertilization. We don’t know what the impacts of so much iron might be on other marine life. Coating the sea surface with phytoplankton would block light from entering the mixed layer, decreasing photosynthesis in aquatic plants and possibly leading to oxygen depletion or “dead zones”. It’s also possible that toxic species of algae would get a hold of the nutrients and create poisonous blooms. On the other hand, the negative impacts of ocean acidification from high levels of CO2 would be lessened, a problem which is not addressed by solar radiation-based forms of geoengineering.

Evidently, the safest way to fix the global warming problem is to stop burning fossil fuels. Most scientists agree that geoengineering should be a last resort, an emergency measure to pull out if the Greenland ice sheet is about to go, rather than an excuse for nations to continue burning coal. And some scientists, myself included, fully expect that geoengineering will be necessary one day, so we might as well figure out the safest approach.

Read Full Post »

Because of our emissions of greenhouse gases like carbon dioxide, a little extra energy gets trapped in our atmosphere every day. Over time, this energy builds up. It manifests itself in the form of higher temperatures, stronger storms, larger droughts, and melting ice. Global warming, then, isn’t about temperatures as much as it is about energy.

The extra energy, and its consequences, don’t get distributed evenly around the world. Weather systems, which move heat and moisture around the planet, aren’t very fair: they tend to bully some places more than others. These days, it’s almost as if the weather picks geographical targets each season to bombard with extremes, then moves on to somewhere else. This season, the main target seems to be North America.

The warmest 12 months on record for the United States recently wrapped up with a continent-wide heat wave and drought. Thousands of temperature records were broken, placing millions of citizens in danger. By the end of June, 56% of the country was experiencing at least “moderate” drought levels – the largest drought since 1956. Wildfires took over Colorado, and extreme wind storms on the East Coast knocked out power lines and communication systems for a week. Conditions have been similar throughout much of Canada, although its climate and weather reporting systems are less accessible.

“This is what global warming looks like,” said Professor Jonathan Overpeck from the University of Arizona, a sentiment that was echoed across the scientific community in the following weeks. By the end of the century, these conditions will be the new normal.

Does that mean that these particular events were caused by climate change? There’s no way of knowing. It could have just been a coincidence, but the extra energy global warming adds to our planet certainly made them more likely. Even without climate change, temperature records get broken all the time.

However, in an unchanging climate, there would be roughly the same amount of record highs as record lows. In a country like the United States, where temperature records are well catalogued and publicly available, it’s easy to see that this isn’t the case. From 2000-2009, there were twice as many record highs as record lows, and so far this year, there have been ten times as many:

The signal of climate change on extreme weather is slowly, but surely, emerging. For those who found this summer uncomfortable, the message from the skies is clear: Get used to it. This is only the beginning.

Read Full Post »

Later in my career as a climate modeller, I expect to spend a lot of time studying geoengineering. Given the near-total absence of policy responses to prevent climate change, I think it’s very likely that governments will soon start thinking seriously about ways to artificially cool the planet. Who will they come to for advice? The climate modellers.

Some scientists are pre-emptively recognizing this need for knowledge, and beginning to run simulations of geoengineering. In fact, there’s an entire model intercomparison project dedicated to this area of study. There’s only a small handful of publications so far, but the results are incredibly interesting. Here I summarize two recent papers that model solar radiation management: the practice of offsetting global warming by partially blocking sunlight, whether by seeding clouds, adding sulfate aerosols to the stratosphere, or placing giant mirrors in space. As an added bonus, both of these papers are open access.

A group of scientists from Europe ran the same experiment on four of the world’s most complex climate models. The simulation involved instantaneously quadrupling CO2 from preindustrial levels, but offsetting it with a reduction in the solar constant, such that the net forcing was close to zero.
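To get a feel for the size of that solar reduction, here is a rough back-of-the-envelope estimate of my own (the papers specify their own experimental designs): the forcing from quadrupled CO2 is roughly 5.35·ln(4) ≈ 7.4 W/m², while reducing the solar constant by ΔS removes about (ΔS/4)(1 − albedo) of absorbed sunlight, since incoming sunlight is spread over the whole sphere and partly reflected.

```python
import math

F_4xCO2 = 5.35 * math.log(4)   # ~7.4 W/m^2: simplified forcing from quadrupled CO2
albedo = 0.3                   # approximate planetary albedo
S0 = 1361.0                    # present-day solar constant, W/m^2

# Choose dS so that (dS / 4) * (1 - albedo) balances the CO2 forcing:
dS = 4 * F_4xCO2 / (1 - albedo)
print(dS, 100 * dS / S0)       # ~42 W/m^2, i.e. roughly 3% of the solar constant
```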

The global mean temperature remained at preindustrial levels. “Great,” you might think, “we’re home free!” However, climate is far more than just one globally averaged metric. Even though the average temperature stayed the same, there were still regional changes, with cooling in the tropics and warming at both poles (particularly in their respective winters):

There were regional changes in precipitation, too, but they didn’t all cancel out like with temperature. Global mean precipitation decreased, due to cloud feedbacks which are influenced by sunlight but not greenhouse gases. There were significant changes in the monsoons of south Asia, but the models disagreed as to exactly what those changes would be.

This intercomparison showed that even with geoengineering, we’re still going to get a different climate. We won’t have to worry about some of the big-ticket items like sea level rise, but droughts and forest dieback will remain a major threat. Countries will still struggle to feed their people, and species will still face extinction.

On the other side of the Atlantic, Damon Matthews and Ken Caldeira took a different approach. (By the way, what is it about Damon Matthews? All the awesome papers out of Canada seem to have his name on them.) Using the UVic ESCM, they performed a more realistic experiment in which emissions varied with time. They offset emissions from the A2 scenario with a gradually decreasing solar constant. They found that the climate responds quickly to geoengineering, and their temperature and precipitation results were very similar to the European paper.

They also examined some interesting feedbacks in the carbon cycle. Carbon sinks (ecosystems which absorb CO2, like oceans and forests) respond to climate change in two different ways. First, they respond directly to increases in atmospheric CO2 – i.e., the fertilization effect. These feedbacks (lumped together in a term we call beta) are negative, because they tend to increase carbon uptake. Second, they respond to the CO2-induced warming, with processes like forest dieback and increased respiration. These feedbacks (a term called gamma) are positive, because they decrease uptake. Currently we have both beta and gamma, and they’re partially cancelling each other out. However, with geoengineering, the heat-induced gamma goes away, and beta is entirely unmasked. As a result, carbon sinks became more effective in this experiment, and sucked extra CO2 out of the atmosphere.
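In the usual carbon-cycle feedback notation (the standard decomposition used in this literature, written from memory rather than copied from the paper), the change in carbon stored by land or ocean sinks is split into a CO2 part and a temperature part:

$$\Delta C \approx \beta\,\Delta \mathrm{CO_2} + \gamma\,\Delta T$$

Here β (carbon gained per ppm of extra CO2) is positive and γ (carbon gained per degree of warming) is negative; in terms of their effect on atmospheric CO2, that makes the first a negative feedback and the second a positive one, as described above. Geoengineering holds ΔT near zero, so the γ term largely disappears and the β term acts alone – which is why the sinks strengthened in this experiment.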

The really interesting part of the Matthews and Caldeira paper was when they stopped the geoengineering. This scenario is rather plausible – wars, recessions, or public disapproval could force the world to abandon the project. So, in the experiment, they brought the solar constant back to current levels overnight.

The results were pretty ugly. Global climate rapidly shifted back to the conditions it would have experienced without geoengineering. In other words, all the warming that we cancelled out came back at once. Global average temperature changed at a rate of up to 4°C per decade, or 20 times faster than at present. Given that biological, physical, and social systems worldwide are struggling to keep up with today’s warming, this rate of change would be devastating. To make things worse, gamma came back in full force, and carbon sinks spit out the extra CO2 they had soaked up. Atmospheric concentrations went up further, leading to more warming.

Essentially, if governments want to do geoengineering properly, they have to make a pact to do so forever, no matter what the side effects are or what else happens in the world. Given how much legislation is overturned every time a country has a change in government, such a promise would be almost impossible to uphold. Matthews and Caldeira consider this reality, and come to a sobering conclusion:

In the case of inconsistent or erratic deployment (either because of shifting public opinions or unilateral action by individual nations), there would be the potential for large and rapid temperature oscillations between cold and warm climate states.

Yikes. If that doesn’t scare you, what does?

Read Full Post »

Cross-posted from NextGen Journal

Ask most people to picture a scientist at work, and they’ll probably imagine someone in a lab coat and safety goggles, surrounded by test tubes and Bunsen burners. If they’re fans of The Big Bang Theory, maybe they’ll picture complicated equations being scribbled on whiteboards. Others might think of the Large Hadron Collider, or people wading through a swamp taking water samples.

All of these images are pretty accurate – real scientists, in one field or another, do these things as part of their job. But a large and growing approach to science, which is present in nearly every field, replaces the lab bench or swamp with a computer. Mathematical modelling, which essentially means programming the complicated equations from the whiteboard into a computer and solving them many times, is the science of today.

Computer models are used for all sorts of research questions. Epidemiologists build models of an avian flu outbreak, to see how the virus might spread through the population. Paleontologists build biomechanical models of different dinosaurs, to figure out how fast they could run or how high they could stretch their necks. I’m a research student in climate science, where we build models of the entire planet, to study the possible effects of global warming.

All of these models simulate systems which aren’t available in the real world. Avian flu hasn’t taken hold yet, and no sane scientist would deliberately start an outbreak just so they could study it! Dinosaurs are extinct, and playing around with their fossilized bones to see how they might move would be heavy and expensive. Finally, there’s only one Earth, and it’s currently in use. So models don’t replace lab and field work – rather, they add to it. Mathematical models let us perform controlled experiments that would otherwise be impossible.

If you’re interested in scientific modelling, spend your college years learning a lot of math, particularly calculus, differential equations, and numerical methods. The actual application of the modelling, like paleontology or climatology, is less important for now – you can pick that up later, or read about it on your own time. It might seem counter-intuitive to neglect the very system you’re planning to spend your life studying, but it’s far easier this way. A few weeks ago I was writing some computer code for our lab’s climate model, and I needed to calculate a double integral of baroclinic velocity in the Atlantic Ocean. I didn’t know what baroclinic velocity was, but it only took a few minutes to dig up a paper that defined it. My work would have been a lot harder if, instead, I hadn’t known what a double integral was.
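As a toy illustration of that kind of task (made-up field and grid, nothing to do with the actual model code): a double integral over a two-dimensional field is just two nested one-dimensional integrals, which the trapezoid rule handles nicely.

```python
import numpy as np

def trapz(values, coords, axis):
    """Trapezoid-rule integral of `values` along `axis`, with coordinates `coords`."""
    v = np.moveaxis(values, axis, -1)
    dx = np.diff(coords)
    return np.sum(0.5 * (v[..., 1:] + v[..., :-1]) * dx, axis=-1)

# Toy example: integrate a made-up 2-D field f(latitude, depth).
y = np.linspace(-30.0, 60.0, 91)        # latitude (degrees)
z = np.linspace(0.0, 4000.0, 101)       # depth (metres)
Y, Z = np.meshgrid(y, z, indexing="ij")
f = np.exp(-Z / 1000.0) * np.cos(np.radians(Y))

inner = trapz(f, z, axis=1)             # integrate over depth at each latitude
total = trapz(inner, y, axis=0)         # then integrate over latitude
print(total)
```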

It’s also important to become comfortable with computer programming. You might think it’s just the domain of software developers at Google or Apple, but it’s also the main tool of scientists all over the world. Two or three courses in computer science, where you’ll learn a multi-purpose language like C or Java, are all you need. Any other languages you need in the future will take you days, rather than months, to master. If you own a Mac or run Linux on a PC, spend a few hours learning some basic UNIX commands – it’ll save you a lot of time down the road. (Also, if the science plan falls through, computer science is one of the only majors which will almost definitely get you a high-paying job straight out of college.)

Computer models might seem mysterious, or even untrustworthy, when the news anchor mentions them in passing. In fact, they’re no less scientific than the equations that Sheldon Cooper scrawls on his whiteboard. They’re just packaged together in a different form.

Read Full Post »

Let’s all put on our science-fiction hats and imagine that humans get wiped off the face of the Earth tomorrow. Perhaps a mysterious superbug kills us all overnight, or maybe we organize a mass migration to live on the moon. In a matter of a day, we’re gone without a trace.

If your first response to this scenario is “What would happen to the climate now that fossil fuel burning has stopped?” then you may be afflicted with Climate Science. (I find myself reacting like this all the time now. I can’t watch The Lord of the Rings without imagining how one would model the climate of Middle Earth.)

A handful of researchers, particularly in Canada, recently became so interested in this question that they started modelling it. Their motive was more than just morbid fascination – in fact, the global temperature change that occurs in such a scenario is a very useful metric. It represents the amount of warming that we’ve already guaranteed, and a lower bound for the amount of warming we can expect.

Initial results were hopeful. Damon Matthews and Andrew Weaver ran the experiment on the UVic ESCM and published the results. In their simulations, global average temperature stabilized almost immediately after CO2 emissions dropped to zero, and stayed approximately constant for centuries. The climate didn’t recover from the changes we inflicted, but at least it didn’t get any worse. The “zero-emissions commitment” was more or less nothing. See the dark blue line in the graph below:

However, this experiment didn’t take anthropogenic impacts other than CO2 into account. In particular, the impacts of sulfate aerosols and additional (non-CO2) greenhouse gases currently cancel out, so it was assumed that they would keep cancelling and could therefore be ignored.

But is this a safe assumption? Sulfate aerosols have a very short atmospheric lifetime – as soon as it rains, they wash right out. Non-CO2 greenhouse gases last much longer (although, in most cases, not as long as CO2). Consequently, you would expect a transition period in which the cooling influence of aerosols had disappeared but the warming influence of additional greenhouse gases was still present. The two forcings would no longer cancel, and the net effect would be one of warming.

Damon Matthews recently repeated his experiment, this time with Kirsten Zickfeld, and took aerosols and additional greenhouse gases into account. The long-term picture was still the same – global temperature remaining at present-day levels for centuries – but the short-term response was different. For about the first decade after human influences disappeared, the temperature rose very quickly (as aerosols were eliminated from the atmosphere) but then dropped back down (as additional greenhouse gases were eliminated). This transition period wouldn’t be fun, but at least it would be short. See the light blue line in the graph below:

We’re still making an implicit assumption, though. By looking at the graphs of constant global average temperature and saying “Look, the problem doesn’t get any worse!”, we’re assuming that regional temperatures are also constant for every area on the planet. In fact, half of the world could be warming rapidly and the other half could be cooling rapidly, a bad scenario indeed. From a single global metric, you just can’t tell.

A team of researchers led by Nathan Gillett recently modelled regional changes to a sudden cessation of CO2 emissions (other gases were ignored). They used a more complex climate model from Environment Canada, which is better for regional projections than the UVic ESCM.

The results were disturbing: even though the average global temperature stayed basically constant after CO2 emissions (following the A2 scenario) disappeared in 2100, regional temperatures continued to change. Most of the world cooled slightly, but Antarctica and the surrounding ocean warmed significantly. By the year 3000, the coasts of Antarctica were 9°C above preindustrial temperatures. This might easily be enough for the West Antarctic Ice Sheet to collapse.

Why didn’t this continued warming happen in the Arctic? Remember that the Arctic is an ocean surrounded by land, and temperatures over land change relatively quickly in response to a radiative forcing. Furthermore, the Arctic Ocean is small enough that it’s heavily influenced by temperatures on the land around it. In this simulation, the Arctic sea ice actually recovered.

On the other hand, Antarctica is land surrounded by a large ocean that mixes heat particularly well. As a result, it has an extraordinarily high heat capacity, and takes a very long time to fully respond to changes in radiative forcing. So, even by the year 3000, it was still reacting to the radiative forcing of the 21st century. The warming ocean surrounded the land and caused it to warm as well.

As a result of the cooling Arctic and warming Antarctic, the Intertropical Convergence Zone (the band near the equator where the trade winds converge and much of the tropics’ rain falls) shifted southward in the simulation. Consequently, precipitation over North Africa continued to decrease – a situation that was already bad by 2100. Counterintuitively, even though global warming had ceased, some of the impacts of warming continued to worsen.

These experiments, assuming an overnight apocalypse, are purely hypothetical. By definition, we’ll never be able to test their accuracy in the real world. However, as a lower bound for the expected impacts of our actions, the results are sobering.

Read Full Post »

One of the most dangerous effects of climate change is its impact on extreme events. The extra energy that’s present on a warmer world doesn’t distribute itself uniformly – it can come out in large bursts, manifesting itself as heat waves, floods, droughts, hurricanes, and tornadoes, to name a few. Consequently, warming the world by an average of 2 degrees is a lot more complicated than adding 2 to every weather station reading around the world.

Scientists have a difficult time studying the impacts of climate change on extreme events, because all these events could happen anyway – how can you tell if Hurricane Something is a direct result of warming, or just a fluke? Indeed, for events involving precipitation, like hurricanes or droughts, it’s not possible to answer this question. However, research is advancing to the point where we can begin to attribute individual heat waves to climate change with fairly high levels of confidence. For example, the recent extended heat wave in Texas, which was particularly devastating for farmers, probably wouldn’t have happened if it weren’t for global warming.

Extreme heat is arguably the easiest event for scientists to model. Temperature is one-dimensional and more or less follows a normal distribution for a given region. As climate change continues, temperatures increase (shifting the bell curve to the right) and become more variable (flattening the bell curve). The end result, as shown in part (c) of the figure below, is a significant increase in extremely hot weather:

Now, imagine that you get a bunch of weather station data from all across the world in 1951-1980, back before the climate had really started to warm. For every single record, find the temperature anomaly (difference from the average value in that place and on that day of the year). Plot the results, and you will get a normal distribution centred at 0. So values in the middle of the bell curve – i.e., temperatures close to the average – are the most likely, and temperatures on the far tails of the bell curve – i.e. much warmer or much colder than the average – are far less likely.

As any statistics student knows, 99.7% of the Earth’s surface should then have temperature anomalies within three standard deviations of the mean at any given time (a standard deviation is just a measure of spread, so this interval is wider when the bell curve is flatter). So if we still had the same climate we did between 1951 and 1980, temperatures more than three standard deviations above the mean would cover only about 0.15% of the Earth’s surface.

However, in the past few years, temperatures three standard deviations above average have covered more like 10% of the Earth’s surface. Even some individual heat waves – like the ones in Texas and Russia over the past few years – have covered so much of the Earth’s surface on their own that they blow the 0.15% statistic right out of the water. Under the “old” climate, they almost certainly wouldn’t have happened. You can only explain them by shifting the bell curve to the right and flattening it. For this reason, we can say that these heat waves were caused by global warming.
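Here is a minimal sketch of that argument with a shifted, flattened bell curve (the shift and widening below are illustrative values of my own, not figures from the underlying study):

```python
from scipy.stats import norm

# Old climate: standardized temperature anomalies ~ N(0, 1).
print(norm.sf(3))                          # ~0.0013: a bit over 0.1% of the surface beyond +3 sigma

# New climate: shift the mean up and flatten (widen) the bell curve.
shift, widen = 1.0, 1.5                    # illustrative values only
print(norm.sf(3, loc=shift, scale=widen))  # ~0.09: roughly 10% of the surface beyond the old +3 sigma
```

A shift of about one standard deviation plus some flattening is enough to turn a fraction-of-a-percent tail into something that covers a tenth of the planet.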

Here’s a graph of the bell curves we’re talking about, in this case for the months of June, July, and August. The red, yellow and green lines are the old climate; the blue and purple lines are the new climate. Look at the area under the curve to the right of x = 3: it’s almost nothing beneath the old climate, but quite significant beneath the new climate.

It’s very exciting that, using basic statistical methods, we can now attribute specific heat waves to climate change. On the other hand, it’s very depressing, because it goes to show that such events will become far more likely as the climate continues to change and the bell curve shifts inexorably to the right.


Read Full Post »

