Two Great TED Talks

Both are about climate modelling, and both are definitely worth 10-20 minutes of your time.

The first is from Gavin Schmidt, NASA climate modeller and RealClimate author extraordinaire:

The second is from Steve Easterbrook, my current supervisor at the University of Toronto (this one is actually TEDxUofT, which is independent from TED):


Modularity

I’ve now taken a look at the code and structure of four different climate models: Model E, CESM, UVic ESCM, and the Met Office Unified Model (which contains all the Hadley models). I’m noticing all sorts of similarities and differences, many of which I didn’t expect.

For example, I didn’t anticipate any overlap in climate model components. I thought that every modelling group would build their own ocean, their own atmosphere, and so on, from scratch. In fact, what I think of as a “model” – a self-contained, independent piece of software – applies to components more accurately than it does to an Earth system model. The latter is more accurately described as a collection of models, each representing one piece of the climate system. Each modelling group has a different collection of models, but not every one of these models is unique to their lab.

Ocean models are a particularly good example. The Modular Ocean Model (MOM) is built by GFDL, but it’s also used in NASA’s Model E and the UVic Earth System Climate Model. Another popular ocean model is the Nucleus for European Modelling of the Ocean (NEMO – what a great acronym), which is used by the newer Hadley climate models, as well as the IPSL model from France (which is sitting on my desktop as my next project!).

Aside: Speaking of clever acronyms, I don’t know what the folks at NCAR were thinking when they created the Single Column Atmosphere Model. Really, how did they not see their mistake? And why haven’t Marc Morano et al latched onto this acronym and spread it all over the web by now?

In most cases, an Earth system model has a unique architecture to fit all the component models together – a different coupling process. However, with the rise of standard interfaces like the Earth System Modeling Framework, even couplers can be reused between modelling groups. For example, the Hadley Centre and IPSL both use the OASIS coupler.

There are benefits and drawbacks to the rising overlap and “modularity” of Earth system models. One could argue that it makes the models less independent. If they all agree closely, how much of that agreement is due to their physical grounding in reality, and how much is due to the fact that they all use a lot of the same code? However, modularity is clearly a more efficient process for model development. It allows larger communities of scientists from each sub-discipline of Earth system modelling to form, and – in the case of MOM and NEMO – make two or three really good ocean models, instead of a dozen mediocre ones. Concentrating our effort, and reducing unnecessary duplication of code, makes modularity an attractive strategy, if an imperfect one.

The least modular of all the Earth system models I’ve looked at is Model E. The documentation mentions different components for the atmosphere, sea ice, and so on, but these components aren’t separated into subdirectories, and the lines between them are blurry. Nearly all the Fortran files sit in the same directory, “model”, and some of them deal with two or more components. For example, how would you categorize a file that calculates surface-atmosphere fluxes? Even where Model E uses code from other institutions, such as the MOM ocean model, it’s usually adapted and integrated into GISS’s own files, rather than kept in a separate directory.

The most modular Earth system model is probably the Met Office Unified Model. They don’t appear to have adapted NEMO, CICE (the sea ice model from Los Alamos), or OASIS at all – in fact, they’re not even present in the code repository they gave us. I was a bit confused when I discovered that their “ocean” directory, left over from the years when they wrote their own ocean code, was now completely empty! I hadn’t expected encapsulation to go so far that a component model could live entirely outside the structural code.

An interesting example of the challenges of modularity appears in sea ice. Do you create a separate, independent sea ice component, like CESM did? Do you consider it part of the ocean, like NEMO? Or do you lump lake ice in with sea ice and then allow the component to float between the surface and the ocean, like Model E?

The real world isn’t modular. There are no clear boundaries between components on the physical Earth. But then, there’s only one physical Earth, whereas there are many virtual Earths in the form of climate models, and limited resources for developing the code in each component. In this spectrum of interconnection and encapsulation, is one end or the other our best bet? Or is there a healthy balance somewhere in the middle?

Working Away

The shape of my summer research is slowly becoming clearer. Basically, I’ll be writing a document comparing the architecture of different climate models. This, of course, involves getting access to the source code. Building on Steve’s list, here are my experiences:

NCAR, Community Earth System Model (CESM): Password-protected, but you can get access within an hour. After a quick registration, you’ll receive an automated email with a username and password. This login information gives you access to their Subversion repository. Registration links and further information are available here, under “Acquiring the CESM1.0 Release Code”.

University of Victoria, Earth System Climate Model (ESCM): Links to the source code can be found on this page, but they’re password-protected. You can request an account by sending an email – follow the link for more information.

Geophysical Fluid Dynamics Laboratory (GFDL), CM 2.1: Slightly more complicated. Create an account for their GForge repository, which is an automated process. Then, request access to the MOM4P1 project – apparently CM 2.1 is included within that. The server supposedly grants project requests automatically, but the only emails I’ve received from it concern some kind of GFDL mailing list, and don’t mention the project request at all. I will wait and see.
Update (July 20): It looks like I got access to the project right after I requested it – I just never received an email!

Max Planck Institute (MPI), COSMOS: Code access involves signing a licence agreement, faxing it to Germany, and waiting for it to be approved and signed by MPI. The agreement is not very restrictive, though – it deals mainly with version control, documenting changes to the code, etc.

UK Met Office, Hadley Centre Coupled Model version 3 (HadCM3): Our lab already has a copy of the code for HadCM3, so I’m not really sure what the process is to get access, but apparently it involved a lot of government paperwork.

Institut Pierre Simon Laplace (IPSL), CM5: This one tripped me up for a while, largely because the user guide is difficult to find, and written in French. Google Translate helped me out there, but it also attempted to “translate” their command line samples! Make sure that you have ksh installed, too – it’s quick to fix, but I didn’t realize it right away. Some of the components for IPSLCM5 are open access, but others are password-protected. Follow the user guide’s instructions for who to email to request access.

Model E: This was the easiest of all. From the GISS website, you can access all the source code without any registration. They offer a frozen AR4 version, as well as nightly snapshots of the work in progress for AR5 (frozen AR5 version soon to come). There is also a wealth of documentation on this site, such as an installation guide and a description of the model.

I’ve taken a look at the structural code for Model E, which is mostly contained in the file MODELE.f. The code is very clear and well commented, and the online documentation helped me out too. After drawing a lot of complicated diagrams with arrows and lists, I feel that I have a decent understanding of the Model E architecture.

Reading code can become monotonous, though, and every now and then I crave a little computer trouble to keep things interesting. For that reason, I’m continuing to chip away at building and running two models, Model E and CESM. See my previous post for how this process started.

<TECHNICAL COMPUTER STUFF> (Feel free to skip ahead…)

I was still having trouble viewing the Model E output (only one file worked in Panoply; the rest produced an empty map), so I emailed some of the lab’s contacts at NASA. They suggested I install CDAT, a process which nearly broke Ubuntu (haven’t we all been there?). Basically, because it’s an older program, it thought the newest version of Python was 2.5 – which it promptly installed and set as the default in /usr/bin. Since I had Python 2.6 installed, and the two versions are apparently not at all backwards-compatible, every program that depended on Python (i.e. almost everything on Ubuntu) stopped working. Our IT contact managed to set 2.6 back as the default, but I’m not about to try my hand at CDAT again…
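If anyone else ends up in the same situation, the fix is probably just re-pointing the /usr/bin/python symlink back at the system version – a sketch, assuming Python 2.6 is still sitting in /usr/bin:

sudo ln -sf /usr/bin/python2.6 /usr/bin/python
python -V    # should now report 2.6 again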

I have moved forward very slightly on CESM. I’ve managed to build the model, but upon calling test.<machine name>.run, I get rather an odd error:

./Tools/ccsm_getenv: line 9: syntax error near unexpected token '('
./Tools/ccsm_getenv: line 9: 'foreach i (env_case.xml env_run.xml env_conf.xml env_build.xml env_mach_pes.xml)'

Now, I’m pretty new at shell scripting, but I can’t see the syntax error there. (Shell scripts are interpreted rather than compiled, so errors only show up when a line is actually run – and foreach is C shell syntax, which makes me suspect the script is being read by the wrong shell.)

A post by Michael Tobis, who had a similar error, suggested that the issue had to do with qsub. Unfortunately, that meant I had to actually use qsub – I had previously given up trying to configure Torque to run on a single machine rather than many. I gave the installation another go, and now I can get scripts into the queue, but they never start running – their status stays as “Q” even if I leave the computer alone for an hour. Since the machine has a dual-core processor, I can’t see why it couldn’t run both a server and a node at once, but it doesn’t seem to be working for me.
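For future reference, here is a sketch of the kind of single-machine Torque setup that’s usually recommended – the paths assume the Ubuntu torque packages, and the np value is just the number of cores. When jobs sit in the queue forever, the usual suspects seem to be a scheduler daemon that isn’t running, or an empty nodes file:

# tell the server that this machine is also a compute node
echo "localhost np=2" | sudo tee /var/spool/torque/server_priv/nodes

# make sure all three daemons are running: server, scheduler, and node daemon
sudo pbs_server
sudo pbs_sched
sudo pbs_mom

# create a default queue and turn scheduling on
sudo qmgr -c "create queue batch queue_type=execution"
sudo qmgr -c "set queue batch enabled=true"
sudo qmgr -c "set queue batch started=true"
sudo qmgr -c "set server default_queue=batch"
sudo qmgr -c "set server scheduling=true"

# check that the node shows up as free, and that jobs leave the "Q" state
pbsnodes -a
qstat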

</TECHNICAL COMPUTER STUFF>

Before I started this job, climate models seemed analogous to Antarctica – a distant, mysterious, complex system that I wanted to visit, but didn’t know how to get to. In fact, they’re far more accessible than Antarctica. More on the scale of a complicated bus trip across town, perhaps?

They are not perfect pieces of software, and they’re not very user friendly. However, all the struggles of installation pay off when you finally get some output, and open it up, and see realistic data representing the very same planet you’re sitting on! Even just reading the code for different models shows you many different ways to look at the same system – for example, is sea ice a realm of its own, or is it a subset of the ocean? In the real world the lines are blurry, but computation requires us to make clear divisions.

The code can be unintelligible (lndmaxjovrdmdni) or familiar (“The Stefan-Boltzmann constant! Finally I recognize something!”) or even entertaining (a seemingly random identification string, dozens of characters long, followed by the comment if you edit this you will get what you deserve). When you get tied up in the code, though, it’s easy to miss the bigger picture: the incredible fact that we can use the sterile, binary practice of computation to represent a system as messy and mysterious as the whole planet. Isn’t that something worth sitting and marveling over?

Climate Models on Ubuntu

Part 1: Model E

I felt a bit over my head attempting to port CESM, so I asked a grad student, who had done his Master’s on climate modelling, for help. He looked at the documentation, scratched his head, and suggested I start with NASA’s Model E instead, because it was easier to install. And was it ever! We had it up and running within an hour or so. It was probably so much easier because Model E comes with gfortran support, while CESM only has scripts written for commercial compilers like Intel or PGI.

Strangely, when using Model E, no matter what dates the rundeck sets for the simulation start and end, the subsequently generated I file always has December 1, 1949 as the start date and December 2, 1949 as the end date. We edited the I files after they were created, which seemed to fix the problem, but it was still kind of weird.

I set up Model E to run a ten-year simulation with fixed atmospheric concentration (really, I just picked a rundeck at random) over the weekend. It took it about 3 days to complete, so just over 7 hours per year of simulation time…not bad for a 32-bit desktop!

However, I’m having some weird problems with the output – after configuring the model to output files in NetCDF format and opening them in Panoply, only the file with all the sea ice variables worked. All the others either gave a blank map (array full of N/A’s) or threw errors when Panoply tried to read them. Perhaps the model isn’t enjoying having the I file edited?

Part 2: CESM

After exploring Model E, I felt like trying my hand at CESM again. Steve managed to port it onto his Macbook last year, and took detailed notes. Editing the scripts didn’t seem so ominous this time!

The CESM code can be downloaded using Subversion (instructions here) after a quick registration. Using the Ubuntu Software Center, I downloaded some necessary packages: libnetcdf-dev, mpich2, and torque-scheduler. I already had gfortran, which is sort of essential.
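For reference, the command-line equivalent of all that is something like the following – the Subversion URL is a placeholder, since the real one comes with the registration email:

# packages (the Software Center installs the same things)
sudo apt-get install gfortran libnetcdf-dev mpich2 torque-scheduler

# check out the release code with the username and password from registration
# (substitute the actual repository URL and version given on the CESM website)
svn co https://<cesm-svn-server>/model_versions/cesm1_0 cesm1_0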

I used the “Porting via user defined machine files” method to configure the model for my machine, using the Hadley scripts as a starting point. The variables in config_machines.xml are explained in Appendices D through H of the user’s guide (links in chapter 7). Mostly, you’re just pointing to folders where you want to store data and files. Here are a few exceptions:

  • DOUT_L_HTAR: I stuck with "TRUE", as that was the default.
  • CCSM_CPRNC: this tool already exists in the CESM source code, in /models/atm/cam/tools/cprnc.
  • BATCHQUERY and BATCHSUBMIT: the Hadley entry had “qstat” and “qsub”, respectively, so I Googled these terms to find out which batch submission software they referred to (Torque, which is freely available in the torque-scheduler package) and downloaded it so I could keep the commands the same!
  • GMAKE_J: this determines how many parallel jobs the build runs at once (essentially make’s -j flag). I wasn’t sure how many processors this machine had, so I just put “1”.
  • MAX_TASKS_PER_NODE: I chose "8", which the user’s guide had mentioned as an example.
  • MPISERIAL_SUPPORT: the default is “FALSE”.

The only file that I really needed to edit was Macros.<machine name>. The env_machopts.<machine name> file ended up being empty for me. I spent a while confused by the modules declarations, which turned out to refer to the Environment Modules software. Once I realized that, for this software to be helpful, I would have to write five or six modulefiles in a language I didn’t know, I decided that it probably wasn’t worth the effort, and took these declarations out. I left mkbatch.<machine name> alone, except for the first line which sets the machine, and then turned my attention to Macros.

“Getting this to work will be an iterative process”, the user’s guide says, and it certainly was (and still is). It’s never a good sign when the installation guide reminds you to be patient! Here is the sequence of each iteration (collected into a single sketch after the list):

  1. Edit the Macros file as best I can.
  2. Open up the terminal, cd to cesm1_0/scripts, and create a new case as follows: ./create_newcase -case test -res f19_g16 -compset X -mach <machine name>
  3. If this works, cd to test, and run configure: ./configure -case
  4. If all is well, try to build the case: ./test.<machine name>.build
  5. See where it fails and read the build log file it refers to for ideas as to what went wrong. Search on Google for what certain errors mean. Do some other work for a while, to let the ideas simmer.
  6. Set up for the next case: run ./test.<machine name>.clean_build, then cd .. and rm -rf test. This clears out old files so you can safely build a new case with the same name.
  7. See step 1.
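Collected into one place, a single iteration looks roughly like this (run from the cesm1_0/scripts directory; <machine name> is whatever you called your machine in the config files):

cd cesm1_0/scripts
./create_newcase -case test -res f19_g16 -compset X -mach <machine name>
cd test
./configure -case
./test.<machine name>.build     # when this fails, read the build log it points to

# after editing the Macros file, clear everything out and start again
./test.<machine name>.clean_build
cd ..
rm -rf test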

I wasn’t really sure what the program paths were, as I couldn’t find a nicely contained folder for each one (like Windows has in “Program Files”), but I soon stumbled upon a nice little trick: look up the package on the Ubuntu Packages website, and click on “list of files” under the Download section. That shows you where on the filesystem the package puts its files.
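If the package is already installed, the same list is available from the command line – for example:

dpkg -L libnetcdf-dev                   # every file the package installed
dpkg -L libnetcdf-dev | grep include    # e.g. find the NetCDF include directory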

I also discovered that setting FC and CC to gfortran and gcc, respectively, in the Macros file will throw errors. Instead, leave the variables as mpif90 and mpicc, which are wrappers around the GNU compilers. For example, when I type mpif90 in the terminal, the result is gfortran: no input files, just as if I had typed gfortran. For some reason, though, the errors go away.
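In other words, the compiler variables in Macros.<machine name> should point at the MPI wrapper scripts rather than at the compilers directly – something like this (a sketch; the exact spelling in your Macros file may differ):

FC := mpif90
CC := mpicc

With MPICH2, you can also check which compiler a wrapper actually calls by running mpif90 -show.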

As soon as I made it past building the mct and pio libraries, the build logs for each component (e.g. atm, ice) started saying gmake: command not found. This is one of the pitfalls of building CESM on Ubuntu: GNU make is installed simply as make, while the CESM scripts expect the gmake name that many other Unix systems provide. So I needed to find and edit all the scripts that called gmake, or generated other scripts that called it, and so on. “There must be a way to automate this,” I thought, and from this article I found out how. In the terminal, cd to the CESM source code folder, and type the following:

grep -lr -e 'gmake' * | xargs sed -i 's/gmake/make/g'

You should only have to do this once. It’s case-sensitive, so it will leave the XML variable GMAKE_J alone.

Then I turned my attention to compiler flags, which Steve chronicled quite well in his notes (see link above). I made most of the same changes that he did, except I didn’t need to change -DLINUX to -DDarwin. However, I needed some more compiler flags still. In the terminal, man gfortran brings up a list of all the options for gfortran, which was helpful.

The ccsm build log had hundreds of undefined reference errors as soon as it started to compile Fortran. The way I understand it, the Fortran files reference each other’s routines, but gfortran appends underscores to the symbol names it generates, so the linker can’t match them against code that was compiled without the underscores. You can suppress this behaviour with the flag -fno-underscoring.
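In practice, that means appending the flag to the Fortran flags in Macros.<machine name> – something like this (FFLAGS is an assumption on my part; use whichever variable your Macros file passes to gfortran):

FFLAGS += -fno-underscoring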

Now I am stuck on a new error. It looks like the ccsm script is almost reaching the end, as it’s using ld, the GNU linker (invoked through gcc), to tie all the files together. Then the build log says:

/usr/bin/ld: seq_domain_mct.o(.debug_info+0x1c32): unresolvable R_386_32 relocation against symbol 'mpi_fortran_argv_null'
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status

I’m having trouble finding articles on the internet about similar errors, and the gcc and ld manpages are so long that trying every compiler flag isn’t really an option. Any ideas?

Update: Fixed it! In scripts/ccsm_utils/Build/Makefile, I changed LD := $(F90) to LD := gcc -shared. The build was finally successful! Now off to try and run it…

The good thing is that, since I re-started this project a few days ago, I haven’t spent very long stuck on any one error. I’m constantly having problems, but I move through them pretty quickly! In the meantime, I’m learning a lot about the model and how everything fits together during installation. I’ve also come a long way with Linux programming in general. Considering that when I first installed Ubuntu a few months ago I sheepishly called my friend to ask where to find the command line, I’m quite proud of my progress!

I hope this article will help future Ubuntu users install CESM, as it seems to have a few quirks that even Mac OS X doesn’t experience (e.g. make vs. gmake). For the rest of you, apologies if I have bored you to tears!

Tornadoes and Climate Change

Cross-posted from NextGen Journal

It has been a bad season for tornadoes in the United States. In fact, this April shattered the previous record for the most tornadoes in a single month. Even though the count isn’t finalized yet, nobody doubts that it will come out on top:

In a warming world, questions like these are common, and quite reasonable: Is this a sign of climate change? Will we experience more, or stronger, tornadoes as the planet warms further?

In fact, these are very difficult questions to answer. First of all, attributing a specific weather event, or even a series of weather events, to a change in the climate is extremely difficult. Scientists can do statistical analysis to estimate the probability of the event with and without the extra energy available in a warming world, but this kind of study takes years. Even so, nobody can say for certain whether an event wasn’t just a fluke. The recent tornadoes very well might have been caused by climate change, but they also might have happened anyway.

Will tornadoes become more common in the future, as global warming progresses? Tornado formation is complicated, and forecasting them requires an awful lot of calculations. Many processes in the climate system are this way, so scientists simulate them using computer models, which can do detailed calculations at an increasingly impressive speed.

However, individual tornadoes are relatively small compared to other kinds of storms, such as hurricanes or regular rainstorms. They are, in fact, smaller than a single grid cell in the highest-resolution climate models around today. Therefore, it’s just not possible to simulate individual tornadoes directly in these models.

However, we can project the conditions necessary for tornadoes to form. These conditions don’t guarantee a tornado, but they make one more likely. Two main factors exist: high wind shear and high convective available potential energy (CAPE). Climate change is making the atmosphere warmer and increasing specific humidity (but not relative humidity); both of these contribute to CAPE, so that factor will make conditions favourable to tornadoes more likely. However, climate change warms the poles faster than the equator, which will decrease the temperature difference between them, thereby lowering wind shear and making tornadoes less likely (Diffenbaugh et al., 2008). Which factor will win out? Is there another factor involved that climate change could affect? Will we get more tornadoes in some areas and fewer in others? Will we get weaker tornadoes or stronger tornadoes? It’s very difficult to tell.

In 2007, NASA scientists used a climate model to project changes in severe storms, including tornadoes. (Remember, even though an individual tornado can’t be represented in a model, the conditions likely to cause a tornado can.) They predicted that the future will bring fewer storms overall, but that the ones that do form will be stronger. A plausible answer to the question, although not a very comforting one.

With uncertain knowledge, how should we approach this issue? Should we focus on the comforting possibility that the devastation in the United States might have nothing to do with our species’ actions? Or should we acknowledge that we might bear responsibility? Dr. Kevin Trenberth, a top climate scientist at the National Center for Atmospheric Research (NCAR), thinks that ignoring this possibility until it’s proven is a bad idea. “It’s irresponsible not to mention climate change,” he writes.

An Unmeasured Forcing

“It is remarkable and untenable that the second largest forcing that drives global climate change remains unmeasured,” writes Dr. James Hansen, the head of NASA’s climate change research team, and arguably the world’s top climatologist.

The word “forcing” refers to a factor, such as changes in the Sun’s output or in atmospheric composition, that exerts a warming or cooling influence on the Earth’s climate. The climate doesn’t magically change for no reason – it is always driven by something. Scientists measure these forcings in Watts per square metre – imagine a Christmas tree lightbulb over every square metre of the Earth’s surface, and you have 1 W/m2 of positive forcing.

Currently, the largest forcing on the Earth’s climate is that of increasing greenhouse gases from burning fossil fuels. These exert a positive, or warming, forcing – hence the term “global warming”. However, a portion of this positive forcing is being cancelled out by the second-largest forcing, which is also anthropogenic. Many forms of air pollution, collectively known as aerosols, exert a negative (cooling) forcing on the Earth’s climate. They do this in two ways: the direct albedo effect (scattering solar radiation back to space before it reaches the surface), and the indirect albedo effect (providing the nuclei on which cloud droplets form, so that clouds scatter even more radiation). A large positive forcing and a medium negative forcing add up to a moderate increase in global temperatures.

Unfortunately, a catch-22 exists with aerosols. As many aerosols are directly harmful to human health, the world is beginning to regulate them through legislation such as the American Clean Air Act. As this pollution decreases, its detrimental health effects will lessen, but so will its ability to partially cancel out global warming.

The problem is that we don’t know how much warming the aerosols are cancelling – that is, we don’t know the magnitude of the forcing. So, if all air pollution ceased tomorrow, the world could experience a small jump in net forcing, or a large jump. Global warming would suddenly become much worse, but we don’t know just how much.

The forcing from greenhouse gases is known with a high degree of accuracy – it’s just under 3 W/m2. However, all we know about aerosol forcing is that it’s somewhere around -1 or -2 W/m2 – an estimate is the best we can do. The reason for this dichotomy lies in the ease of measurement. Greenhouse gases last a long time (on the order of centuries) in the atmosphere, and mix through the air, moving towards a uniform concentration. An air sample from a remote area of the world, such as Antarctica or parts of Hawaii, will be uncontaminated by nearby cars and factories, and will give an accurate value of the global atmospheric carbon dioxide concentration (the same can be done for other greenhouse gases, such as methane). From these measurements, molecular physics can tell us how large the forcing is. Direct records of carbon dioxide concentrations have been kept since the late 1950s:
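To put rough numbers on that uncertainty, using the estimates above:

net anthropogenic forcing ≈ greenhouse gas forcing + aerosol forcing
if aerosol forcing = -1 W/m2:  net ≈ 3 - 1 = 2 W/m2
if aerosol forcing = -2 W/m2:  net ≈ 3 - 2 = 1 W/m2

In other words, the uncertainty in aerosols alone translates into roughly a factor of two in the net forcing we’re applying to the planet.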

However, aerosols only stay in the troposphere for a few days, as precipitation washes them out of the air. For this reason, they don’t have time to disperse evenly, and measurements are not so simple. The only way to gain accurate global measurements of their concentrations is with a satellite. NASA recently launched the Glory satellite for just this purpose. Unfortunately, it failed to reach orbit (an inherent risk of satellite launches), and given the current political climate in the United States, it seems overly optimistic to hope for funding for a replacement any time soon. On the bright side, James Hansen estimates that, if the project were carried out by the private sector, without money-draining government review panels, it could be achieved on a budget of around $100 million.

An accurate value for aerosol forcing can only be achieved with accurate measurements of aerosol concentrations. Knowing this forcing would be immensely helpful for climate researchers, as it affects not only the amount of warming we can expect, but also how long the warming will take to play out before the planet reaches thermal equilibrium. Armed with better knowledge of these details, policymakers could plan more effectively for the future, regarding both mitigation of and adaptation to climate change. Finally measuring the impact of aerosols, instead of just estimating it, could give our understanding of the climate system the biggest bang for its buck.

In Other News…

The Arctic is getting so warm in winter that James Hansen had to add a new colour to the standard legend – pink, which is even warmer than dark red:

The official NASA maps – the ones you can generate yourself – didn’t add this new colour, though. They simply extended the range of dark red on the legend to whatever the maximum anomaly is – in some cases, as much as 11.1 C:

The legend goes up in small, smooth steps: a range of 0.3 C, 0.5 C, 1 C, 2 C. Then, suddenly, 6 or 7 C.

I’m sure this is a result of algorithms that haven’t been updated to accommodate such extreme anomalies. However, since very few people examine the legend beyond recognizing that red is warm and blue is cold, the current legend seems sort of misleading. Am I the only one who feels this way?

Technology as Communication

The relationship between technology and climate change is complex and multi-faceted. It was technology, in the form of fossil fuel combustion, that got us into this problem. Many uninformed politicians hold out hope that technology will miraculously save us in the future, so we can continue burning fossil fuels at our current rate. However, if we keep going along with such an attitude, risky geoengineering technologies may be required to keep the warming at a tolerable level.

However, we should never throw our hands in the air and give up, because we can always prevent the warming from getting worse. 2 C warming would be bad, but 3 or 4 C would be much worse, and 5 or 6 C would be devastating. We already possess many low-carbon, or even zero-carbon, forms of energy that could begin to replace the fossil fuel economy. The only thing missing is political will, and the only reason it’s missing, in my opinion, is that not enough people understand the magnitude and urgency of the problem.

Here is where technology comes in again – for purposes of communication. We live in an age of information and global interconnection, so ideas can travel at an unprecedented rate. It’s one thing for scientists to write an article about climate change and distribute it online, but there are many other, more engaging, forms of communication that harness today’s software and graphic technologies. Let’s look at a few recent examples.

Data clearly shows that the world is warming, but spreadsheets of temperature measurements are a little dry for public consumption. Graphs are better, but still cater to people with very specific kinds of intelligence. Since not everyone likes math, the climate team at NASA compressed all of their data into a 26-second video that shows changes in surface temperature anomalies (deviations from the average) from 1880 to 2010. The sudden warming over the past few decades even catches me by surprise.

Take a look – red is warm and blue is cool:

A more interactive visual expression of data comes from Penn State University. In this Flash application, you can play around with the amount of warming, latitude range, and type of crop, and see how yields change both with and without adaptation (changing farming practices to suit the warmer climate). Try it out here. A similar approach, where the user has control over the data selection, has been adopted by NOAA’s Climate Services website. Scroll down to “Climate Dashboard”, and you can compare temperature, carbon dioxide levels, energy from the sun, sea level, and Arctic sea ice on any timescale from 1880 to the present.

Even static images can be effective expressions of data. Take a look at this infographic, which examines the social dimensions of climate change. It does a great job of showing the problem we face: public understanding depends on media coverage, which doesn’t accurately reflect the scientific consensus. Click for a larger version:

Global Warming - the debate

Finally, a new computer game called Fate of the World allows you to try your hand at solving climate change. It adopts the same data and projections used by scientists to demonstrate to users what we can expect in the coming century, and how that changes based on our actions. Changing our lightbulbs and riding our bikes isn’t going to be enough, and, as PC Gamer discovered, even pulling out all the stops – nuclear power, a smart grid, cap-and-trade – doesn’t get us home free. You can buy the game for about $10 here (PC only, a Mac version is coming in April). I haven’t tried this game, but it looks pretty interesting – sort of like Civilization. Here is the trailer:

Take a look at these non-traditional forms of communication. Pass them along, and make your own if you’re so inclined. We need all the help we can get.

What’s the Warmest Year – and Does it Matter?

Cross-posted from NextGenJournal

Climate change is a worrying phenomenon, but watching it unfold can be fascinating. The beginning of a new year brings completed analysis of what last year’s conditions were like. Perhaps the most eagerly awaited annual statistic is global temperature.

This year was no different – partway through 2010, scientists could tell that it had a good chance of being the warmest year on record. It turned out to be more or less tied for first, as top temperature analysis centres recently announced:

Why the small discrepancy in the order of 1998, 2005, and 2010? The answer mainly comes down to the Arctic. Weather stations in the Arctic region are few and far between, as it’s difficult to maintain a permanent station on ice floes that move around, and are melting away. Scientists, then, have two choices in their analyses: extrapolate Arctic temperature anomalies from the stations they do have, or just leave the missing areas out, which effectively assumes that they’re warming at the global average rate. The first choice might lead to results that are off in either direction…but the second choice almost certainly underestimates warming, as it’s clear that climate change is affecting the Arctic much more and much faster than the global average. Currently, NASA is the only centre that extrapolates over the Arctic. A more detailed explanation is available here.

But how useful is an annual measurement of global temperature? Not very, as it turns out. Short-term climate variability, most prominently El Nino and La Nina, impacts annual temperatures significantly. Furthermore, since this oscillation peaks in the winter, the thermal influence of an El Nino or La Nina event can fall entirely within one calendar year, or be split between two. The result is a graph that’s rather spiky:

A far more useful analysis involves plotting a 12-month running mean. Instead of measuring only from January to December, measurements are also compiled from February to January, March to February, and so on. This results in twelve times more data points, and prevents El Nino and La Nina events from being exaggerated:

This graph is better, but still not that useful. The natural spikiness of the El Nino cycle can, in the short term, obscure the underlying trend. Since the El Nino cycle takes between 3 and 7 years to complete, a 60-month (5-year) running mean allows the resulting ups and downs to cancel each other out. Another influence on short-term temperature is the sunspot cycle, which is about 11 years long; a 132-month running mean smooths out that influence too. Both 60- and 132-month running means are shown below:
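For anyone who wants to reproduce these curves, the recipe is simple – for each month i, average the previous N months of the global temperature anomaly record, where N is 12, 60, or 132:

running mean at month i = [ T(i-N+1) + T(i-N+2) + … + T(i) ] / N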

A statistic every month that shows the average global temperature over the last 5 or 11 years may not be as exciting as an annual measurement regarding the previous year. But that’s the reality of climate change. It doesn’t make every month or even every year warmer than the last, and a short-term trend line means virtually nothing. In the climate system, trends are always obscured by noise, and the nature of human psychology means we pay far more attention to noise. Nonetheless, the long-term warming trend since around 1975 is irrefutable when one is presented with the data. A gradual, persistent change might not make the greatest headline, but that doesn’t mean it’s worth ignoring.

Storms of my Grandchildren

I hope everyone had a fun and relaxing Christmas. Here’s a book I’ve been meaning to review for a while.

The worst part of the recent book by NASA climatologist James Hansen is, undoubtedly, the subtitle. The truth about the coming climate catastrophe and our last chance to save humanity – really? That doesn’t sound like the understated, subdued style of Dr. Hansen. In my opinion, it simply alienates the very audience we’re trying to reach: moderate, concerned non-scientists.

The inside of the book is much better. While he couldn’t resist slipping in a good deal of hard science (and, in my opinion, these were the best parts), the real focus was on climate policy, and the relationship between science and policy. Hansen struggled with the prospect of becoming involved in policy discussions, but soon realized that he didn’t want his grandchildren, years from now, to look back at his work and say, “Opa understood what was happening, but he did not make it clear.”

Hansen is very good at distinguishing between his scientific work and his opinions on policy, and makes no secret of which he would rather spend time on. “I prefer to just do science,” he writes in the introduction. “It’s more pleasant, especially when you are having some success in your investigations. If I must serve as a witness, I intend to testify and then get back to the laboratory, where I am comfortable. That is what I intend to do when this book is finished.”

Hansen’s policy opinions centre on a fee-and-dividend system: a variant of a carbon tax where the revenue is divided evenly among citizens and returned to them. His argument for a carbon tax, rather than cap-and-trade, is compelling, and certainly convinced me. He also advocates the expansion of nuclear power (particularly “fourth-generation” fast nuclear reactors), a moratorium on new coal-fired power plants, and drastically improved efficiency measures.

These recommendations are robust, backed up with lots of empirical data to argue why they would be our best bet to minimize climate change and secure a stable future for generations to come. Hansen is always careful to say when he is speaking as a scientist and when he is speaking as a citizen, and provides a fascinating discussion of the connection between these two roles. As Bill Blakemore from ABC television wrote in correspondence with Hansen, “All communication is biased. What makes the difference between a propagandist on one side and a professional journalist or scientist on the other is not that the journalist or scientist ‘set their biases aside’ but that they are open about them and constantly putting them to the test, ready to change them.”

Despite all this, I love it when Hansen puts on his scientist hat. The discussions of climate science in this book, particularly paleoclimate, were gripping. He explains our current knowledge of the climatic circumstances surrounding the Permian-Triassic extinction and the Paleocene-Eocene Thermal Maximum (usually referred to as the PETM). He explains why neither of these events is a suitable analogue for current climate change, as the radiative forcing today is being introduced faster than anything we can see in the paleoclimatic record.

Be prepared for some pretty terrifying facts about our planet’s “methane hydrate gun”, and how it wasn’t even fully loaded when it went off in the PETM. Also discussed is the dependence of climate sensitivity on forcing: the graph of these two variables is more or less a parabola, as climate sensitivity increases both in Snowball Earth conditions and in runaway greenhouse conditions. An extensive discussion of the runaway greenhouse is provided – a scenario in which the forcing occurs so quickly that negative feedbacks don’t have a chance to act before the positive water vapour feedback gets out of control, the oceans boil, and the planet becomes too hot for liquid water to exist. Hansen argues that, if we’re irresponsible about fossil fuels, it is quite possible for current climate change to eventually reach this stage. For those who have less practice separating the scientific part of their brain from the emotional part, I suggest you skip this chapter.

I would recommend this book to everyone interested in climate change. James Hansen is such an important player in climate science, and has arguably contributed more to our knowledge of climate change than just about anyone. Whether it’s for the science, for the policy discussions, or for his try at science fiction in the last chapter, it’s well worth the cover price.

Thoughts from others who have read this book are welcome in the comments, as always.