Working Away

The shape of my summer research is slowly becoming clearer. Basically, I’ll be writing a document comparing the architecture of different climate models. This, of course, involves getting access to the source code. Building on Steve’s list, here are my experiences:

NCAR, Community Earth System Model (CESM): Password-protected, but you can get access within an hour. After a quick registration, you’ll receive an automated email with a username and password. This login information gives you access to their Subversion repository. Registration links and further information are available here, under “Acquiring the CESM1.0 Release Code”.

University of Victoria, Earth System Climate Model (ESCM): Links to the source code can be found on this page, but they’re password-protected. You can request an account by sending an email – follow the link for more information.

Geophysical Fluid Dynamics Laboratory (GFDL), CM 2.1: Slightly more complicated. Create an account for their Gforge repository, which is an automated process. Then, request access to the MOM4P1 project – apparently CM 2.1 is included within that. The server supposedly grants project requests automatically – but the only emails I’ve received from it concern some kind of GFDL mailing list, and don’t mention the project request. I will wait and see.
Update (July 20): It looks like I got access to the project right after I requested it – I just never received an email!

Max Planck Institute (MPI), COSMOS: Code access involves signing a licence agreement, faxing it to Germany, and waiting for it to be approved and signed by MPI. The agreement is not very restrictive, though – it deals mainly with version control, documenting changes to the code, etc.

UK Met Office, Hadley Centre Coupled Model version 3 (HadCM3): Our lab already has a copy of the code for HadCM3, so I’m not really sure what the process is to get access, but apparently it involved a lot of government paperwork.

Institut Pierre Simon Laplace (IPSL), CM5: This one tripped me up for a while, largely because the user guide is difficult to find, and written in French. Google Translate helped me out there, but it also attempted to “translate” their command line samples! Make sure that you have ksh installed, too – it’s quick to fix, but I didn’t realize it right away. Some of the components for IPSLCM5 are open access, but others are password-protected. Follow the user guide’s instructions for who to email to request access.

Model E: This was the easiest of all. From the GISS website, you can access all the source code without any registration. They offer a frozen AR4 version, as well as nightly snapshots of the work-in-progress AR5 version (a frozen AR5 version is soon to come). There is also a wealth of documentation on this site, such as an installation guide and a description of the model.

I’ve taken a look at the structural code for Model E, which is mostly contained in the file MODELE.f. The code is very clear and well commented, and the online documentation helped me out too. After drawing a lot of complicated diagrams with arrows and lists, I feel that I have a decent understanding of the Model E architecture.

Reading code can become monotonous, though, and every now and then a little computer trouble keeps things interesting. For that reason, I’m continuing to chip away at building and running two models, Model E and CESM. See my previous post for how this process started.

<TECHNICAL COMPUTER STUFF> (Feel free to skip ahead…)

I was still having trouble viewing the Model E output (only one file worked on Panoply, the rest created an empty map) so I emailed some of the lab’s contacts at NASA. They suggested I install CDAT, a process which nearly broke Ubuntu (haven’t we all been there?). Basically, because it’s an older program, it assumed the newest version of Python was 2.5 – which it promptly installed and set as the default in /usr/bin. Since I had Python 2.6 installed, and the versions are far from backwards-compatible, every program that depended on Python (i.e. almost everything on Ubuntu) stopped working. Our IT contact managed to set 2.6 back as the default, but I’m not about to try my hand at CDAT again…
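In case anyone else breaks their Python the same way: on Debian-based systems, one way to repair the default is the update-alternatives mechanism. This is my reconstruction, not the actual commands our IT contact ran, and the version numbers and paths are illustrative:

```shell
# Register each interpreter as a candidate for /usr/bin/python, with the
# wanted one at higher priority, then choose the default. These commands
# need root, so they're shown as comments:
#   sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.6 20
#   sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.5 10
#   sudo update-alternatives --config python
# Afterwards, confirm where the symlink points:
ls -l /usr/bin/python* 2>/dev/null || true
```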

I have moved forward very slightly on CESM. I’ve managed to build the model, but upon calling test.<machine name>.run, I get rather an odd error:

./Tools/ccsm_getenv: line 9: syntax error near unexpected token '('
./Tools/ccsm_getenv: line 9: 'foreach i (env_case.xml env_run.xml env_conf.xml env_build.xml env_mach_pes.xml)'

Now, I’m pretty new at shell scripting, but I can’t see the syntax error there – and wouldn’t syntax errors appear at compile-time, rather than run-time?
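For what it’s worth, the error looks less like a syntax mistake and more like a shell mismatch: foreach is csh syntax, and if a csh script gets fed to bash, bash chokes on the first parenthesis at run-time (shell scripts are interpreted, so there is no compile-time). A miniature reproduction – this demo file is mine, not part of CESM:

```shell
# 'foreach' only exists in csh/tcsh; bash reports a syntax error on the
# parenthesis, just like the ccsm_getenv message above.
cat > /tmp/demo_getenv <<'EOF'
foreach i (one two three)
  echo $i
end
EOF
bash /tmp/demo_getenv 2>&1 | head -1                  # syntax error near unexpected token '('
command -v csh >/dev/null && csh /tmp/demo_getenv || true   # prints the list, if csh is installed
```

If that’s the culprit, running the script explicitly with csh, or submitting it through a batch system that honours its #!/bin/csh line, should sidestep it.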

A post by Michael Tobis, who had a similar error, suggested that the issue had to do with qsub. Unfortunately, that meant I had to actually use qsub – I had previously given up trying to configure Torque to run on a single machine rather than many. I gave the installation another go, and now I can get scripts into the queue, but they never start running – their status stays as “Q” even if I leave the computer alone for an hour. Since the machine has a dual-core processor, I can’t see why it couldn’t run both a server and a node at once, but it doesn’t seem to be working for me.
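I haven’t cracked this on my own machine, but as far as I can tell from the Torque documentation, jobs frozen at “Q” usually mean either that no scheduler daemon is running or that no compute node is defined. A sketch of a minimal single-machine setup – the hostname, paths, and np count are illustrative, and the commands need root, so they’re left as comments:

```shell
# Tell the server that this machine is also a (dual-core) compute node:
#   echo "$(hostname) np=2" | sudo tee /var/spool/torque/server_priv/nodes
# Start all three daemons -- server, node daemon, and scheduler:
#   sudo pbs_server; sudo pbs_mom; sudo pbs_sched
# Turn scheduling on, or queued jobs are never dispatched:
#   sudo qmgr -c "set server scheduling = true"
# Sanity check: the node should report state = free, not down:
#   pbsnodes -a
```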

</TECHNICAL COMPUTER STUFF>

Before I started this job, climate models seemed analogous to Antarctica – a distant, mysterious, complex system that I wanted to visit, but didn’t know how to get to. In fact, they’re far more accessible than Antarctica. More on the scale of a complicated bus trip across town, perhaps?

They are not perfect pieces of software, and they’re not very user friendly. However, all the struggles of installation pay off when you finally get some output, and open it up, and see realistic data representing the very same planet you’re sitting on! Even just reading the code for different models shows you many different ways to look at the same system – for example, is sea ice a realm of its own, or is it a subset of the ocean? In the real world the lines are blurry, but computation requires us to make clear divisions.

The code can be unintelligible (lndmaxjovrdmdni) or familiar (“The Stefan-Boltzmann constant! Finally I recognize something!”) or even entertaining (a seemingly random identification string, dozens of characters long, followed by the comment if you edit this you will get what you deserve). When you get tied up in the code, though, it’s easy to miss the bigger picture: the incredible fact that we can use the sterile, binary practice of computation to represent a system as messy and mysterious as the whole planet. Isn’t that something worth sitting and marveling over?

The Dangers of Being a Scientist

In which occupations would you expect to be threatened with murder?

Soldiers, at the front lines of combat zones, are an obvious example. Police officers would often qualify, too. Even high-ranking government officials put their safety at risk – just look at the number of American presidents that have been assassinated. Gang leaders and drug dealers, if they can be called “occupations”, would be high on the list.

What about scientists?

They don’t spend their days suppressing violent criminals. Although they’ll occasionally speak to the media, they could hardly be called public or political figures. Their job is to learn about the world, whether they sit in a lab and crunch numbers or travel to the Antarctic and drill ice cores. Not exactly the kind of life where threats to personal safety seem likely.

Nevertheless, top climate scientists around the world have been receiving death threats for over a year now. This violent hate campaign recently reached Australia, where, as journalist Rosslyn Beeby writes, “Several universities…have been forced to upgrade security to protect scientists.”

Their names have been deleted from staff directories. One scientist’s office cannot be found without photo identification and an official escort; another has a “panic button”, installed on the advice of police.

Some researchers have installed advanced home security systems, and made their home addresses and phone numbers unlisted. They have deleted their accounts on social media sites. All because some people feel so threatened by the idea of human-caused climate change that they’d rather attack the scientists who study the problem than accept its reality and work to fix it.

In the United States, such threats to climate scientists are commonplace, but the hate speech is protected by American free-speech laws, so there isn’t much the police can do. The situation isn’t as widespread in the UK, although several scientists have been heavily targeted in the wake of the “Climategate” campaign.

Nobody has been hurt, at least not yet. However, many researchers receive regular emails threatening murder, bodily harm, sexual assault, property damage, or attacks on family members. One anonymous scientist had a dead animal dumped on his doorstep and now travels with bodyguards. A young Australian woman who gave a speech at a library about carbon footprints had the words “Climate Turd” written in feces on her car.

Several American scientists say that the threats pick up whenever right-wing talk show hosts attack their reputations. It’s common for Glenn Beck or Rush Limbaugh to single out climate scientists as socialist frauds, or some variation of the sort. Knowing that the more extreme members of their audiences will absorb these baseless attacks and promptly fire off threatening emails to the scientists involved is unsettling, to say the least.

We probably shouldn’t be surprised that some people who deny the reality of climate change are also denying the reality of these violent threats. In Australia, the Liberal spokesperson for science, Sophie Mirabella, stated that “the apparently false allegation of death threats have diminished the individuals involved and reflect poorly on the scientific community”. In some ironic twist of logic, the victims of hate crimes are now receiving even more public battering of their reputations, simply because they reported these crimes. There’s no way to win.

We can only hope that these threats will subside with time, and that nobody will get hurt in the process. We can only hope that governments and police agencies will take the threats seriously and pursue investigations. However, once climate change becomes so obvious that even extremists can’t deny it, we will all face a greater danger: the impacts of climate change itself. We can only hope that these hate crimes don’t frighten scientists into staying silent – because their knowledge and their voices might be our only chance.

References:

1) Beeby, Rosslyn. “Climate of fear: scientists face death threats.” The Canberra Times, 4 June 2011.
2) Beeby, Rosslyn. “Change of attitude needed as debate overheats.” The Canberra Times, 14 June 2011.
3) Hickman, Leo. “US climate scientists receive hate mail barrage in wake of UEA scandal.” The Guardian, 5 July 2010.

Climate Models on Ubuntu

Part 1: Model E

I felt a bit over my head attempting to port CESM, so I asked a grad student, who had done his Master’s on climate modelling, for help. He looked at the documentation, scratched his head, and suggested I start with NASA’s Model E instead, because it was easier to install. And was it ever! We had it up and running within an hour or so. It was probably so much easier because Model E comes with gfortran support, while CESM only has scripts written for commercial compilers like Intel or PGI.

Strangely, when using Model E, no matter what dates the rundeck sets for the simulation start and end, the subsequently generated I file always has December 1, 1949 as the start date and December 2, 1949 as the end date. We edited the I files after they were created, which seemed to fix the problem, but it was still kind of weird.

I set up Model E to run a ten-year simulation with fixed atmospheric concentrations (really, I just picked a rundeck at random) over the weekend. It took about 3 days to complete, so just over 7 hours per year of simulation time…not bad for a 32-bit desktop!

However, I’m having some weird problems with the output – after configuring the model to output files in NetCDF format and opening them in Panoply, only the file with all the sea ice variables worked. All the others either gave a blank map (array full of N/A’s) or threw errors when Panoply tried to read them. Perhaps the model isn’t enjoying having the I file edited?

Part 2: CESM

After exploring Model E, I felt like trying my hand at CESM again. Steve managed to port it onto his Macbook last year, and took detailed notes. Editing the scripts didn’t seem so ominous this time!

The CESM code can be downloaded using Subversion (instructions here) after a quick registration. Using the Ubuntu Software Center, I downloaded some necessary packages: libnetcdf-dev, mpich2, and torque-scheduler. I already had gfortran, which is sort of essential.

I used the Porting via user defined machine files method to configure the model for my machine, using the Hadley scripts as a starting point. Variables for config_machines.xml are explained in Appendices D through H of the user’s guide (links in chapter 7). Mostly, you’re just pointing to folders where you want to store data and files. Here are a few exceptions:

  • DOUT_L_HTAR: I stuck with "TRUE", as that was the default.
  • CCSM_CPRNC: this tool already exists in the CESM source code, in /models/atm/cam/tools/cprnc.
  • BATCHQUERY and BATCHSUBMIT: the Hadley entry had “qstat” and “qsub”, respectively, so I Googled these terms to find out which batch submission software they referred to (Torque, which is freely available in the torque-scheduler package) and downloaded it so I could keep the commands the same!
  • GMAKE_J: this sets how many parallel make jobs run at once (roughly, how many processors to commit to compilation), and I wasn’t sure how many this machine had, so I just put “1”.
  • MAX_TASKS_PER_NODE: I chose "8", which the user’s guide had mentioned as an example.
  • MPISERIAL_SUPPORT: the default is “FALSE”.

The only file that I really needed to edit was Macros.<machine name>. The env_machopts.<machine name> file ended up being empty for me. I spent a while confused by the module declarations, which turned out to refer to the Environment Modules software. Once I realized that, for this software to be helpful, I would have to write five or six modulefiles in a language I didn’t know, I decided it probably wasn’t worth the effort, and took the declarations out. I left mkbatch.<machine name> alone, except for the first line which sets the machine, and then turned my attention to Macros.

“Getting this to work will be an iterative process”, the user’s guide says, and it certainly was (and still is). It’s never a good sign when the installation guide reminds you to be patient! Here is the sequence of each iteration:

  1. Edit the Macros file as best I can.
  2. Open up the terminal, cd to cesm1_0/scripts, and create a new case as follows: ./create_newcase -case test -res f19_g16 -compset X -mach <machine name>
  3. If this works, cd to test, and run configure: ./configure -case
  4. If all is well, try to build the case: ./test.<machine name>.build
  5. See where it fails and read the build log file it refers to for ideas as to what went wrong. Search on Google for what certain errors mean. Do some other work for a while, to let the ideas simmer.
  6. Set up for the next case: ./test.<machine name>.clean_build , cd .., and rm -rf test. This clears out old files so you can safely build a new case with the same name.
  7. See step 1.

I wasn’t really sure what the program paths were, as I couldn’t find a nicely contained folder for each one (like Windows has in “Program Files”), but I soon stumbled upon a nice little trick: look up the package on Ubuntu Package Manager, and click on “list of files” under the Download section. That should tell you what path the program used as its root.
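If the package is already installed, there’s an offline version of the same trick: dpkg itself can print the file list, and the common prefix of the paths gives away the install root. (coreutils below is just a stand-in for whichever package you’re hunting.)

```shell
# Print every file an installed package placed on the system; the shared
# prefix of the paths is its install root. Guarded in case this isn't a
# Debian-based system:
command -v dpkg >/dev/null && dpkg -L coreutils | head -5 || true
```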

I also discovered that setting FC and CC to gfortran and gcc, respectively, in the Macros file will throw errors. Instead, leave the variables as mpif90 and mpicc, which are linked to the GNU compilers. For example, when I type mpif90 in the terminal, the result is gfortran: no input files, just as if I had typed gfortran. For some reason, though, the errors go away.
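I suspect the reason the errors vanish is that the MPI wrapper quietly adds the include paths and libraries that a bare gfortran call lacks. With MPICH you can ask the wrapper to show exactly what it would run (-show is MPICH’s flag; Open MPI’s equivalent is -showme):

```shell
# Reveal the real compiler command behind the MPI wrappers, without
# compiling anything (guarded, in case no MPI is installed):
command -v mpif90 >/dev/null && mpif90 -show || true
command -v mpicc  >/dev/null && mpicc -show  || true
```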

As soon as I made it past building the mct and pio libraries, the build logs for each component (e.g. atm, ice) started saying gmake: command not found. This is one of the pitfalls of Ubuntu: it ships GNU make only under the name make, while many other Unix systems also provide it as gmake. So I needed to find and edit all the scripts that called gmake, or generated other scripts that called it, and so on. “There must be a way to automate this,” I thought, and from this article I found out how. In the terminal, cd to the CESM source code folder, and type the following:

grep -lr -e 'gmake' * | xargs sed -i 's/gmake/make/g'

You should only have to do this once. It’s case sensitive, so it will leave the xml variable GMAKE_J alone.
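Looking back, a less invasive alternative (untested on CESM itself, but the idea is simple) would be to leave the scripts alone and instead give Ubuntu a gmake command, since Ubuntu’s make is GNU make under a different name:

```shell
# Create a user-local 'gmake' symlink to GNU make and put it on the PATH:
if command -v make >/dev/null; then
  mkdir -p "$HOME/bin"
  ln -sf "$(command -v make)" "$HOME/bin/gmake"
  export PATH="$HOME/bin:$PATH"
  gmake --version | head -1
fi
```

You’d want that PATH line in ~/.bashrc as well, so batch jobs pick it up too.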

Then I turned my attention to compiler flags, which Steve chronicled quite well in his notes (see link above). I made most of the same changes that he did, except I didn’t need to change -DLINUX to -DDarwin. However, I needed some more compiler flags still. In the terminal, man gfortran brings up a list of all the options for gfortran, which was helpful.

The ccsm build log had hundreds of undefined reference errors as soon as it started to compile Fortran. As I understand it, many of the Fortran files reference each other’s routines, but gfortran appends an underscore to each external symbol name, so object files and libraries compiled under a different convention can’t resolve each other’s references at link time. You can suppress the mangling using the flag -fno-underscoring.
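That mangling is easy to see directly with nm; here’s a toy demonstration (the file name and subroutine are made up, and the block skips itself if gfortran isn’t installed):

```shell
# Show gfortran's trailing-underscore symbol mangling, and how
# -fno-underscoring suppresses it:
if command -v gfortran >/dev/null; then
  printf 'subroutine my_sub\nend subroutine my_sub\n' > /tmp/mangle.f90
  gfortran -c /tmp/mangle.f90 -o /tmp/mangle.o
  nm /tmp/mangle.o | grep my_sub         # symbol appears as my_sub_
  gfortran -c -fno-underscoring /tmp/mangle.f90 -o /tmp/mangle2.o
  nm /tmp/mangle2.o | grep my_sub        # now plain my_sub
fi
```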

Now I am stuck on a new error. It looks like the ccsm script is almost reaching the end, as it’s using ld, the GNU linker that gcc invokes, to tie all the files together. Then the build log says:

/usr/bin/ld: seq_domain_mct.o(.debug_info+0x1c32): unresolvable R_386_32 relocation against symbol 'mpi_fortran_argv_null'
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status

I’m having trouble finding articles on the internet about similar errors, and the gcc and ld manpages are so long that trying every compiler flag isn’t really an option. Any ideas?

Update: Fixed it! In scripts/ccsm_utils/Build/Makefile, I changed LD := $(F90) to LD := gcc -shared. The build was finally successful! Now off to try and run it…

The good thing is that, since I re-started this project a few days ago, I haven’t spent very long stuck on any one error. I’m constantly having problems, but I move through them pretty quickly! In the meantime, I’m learning a lot about the model and how it fits everything together during installation. I’ve also come a long way with Linux programming in general. Considering that when I first installed Ubuntu a few months ago I sheepishly called my friend to ask where to find the command line, I’m quite proud of my progress!

I hope this article will help future Ubuntu users install CESM, as it seems to have a few quirks that even Mac OS X doesn’t experience (e.g. make vs. gmake). For the rest of you, apologies if I have bored you to tears!

Models and Books

Working as a summer student continues to be rewarding. I get to spend all day reading interesting things and playing with scientific software. What a great deal!

Over the weekend, I ran the “Global Warming_01” simulation from EdGCM, which is an old climate model from NASA with a graphical user interface. Strangely, they don’t support Linux, as their target audience is educators – I doubt there are very many high school teachers running open-source operating systems! So I ran the Windows version on my laptop, and it took about 36 hours. It all felt very authentic.

Unfortunately, as their Windows 7 support is fairly new, there were some bugs in the output. It refused to give me any maps at all! The terminal popped up for a few seconds, but it didn’t output any files. All I could get were zonal averages (and then only from January to March) and time series. Also, for some reason, none of the time series graphs had units on the y-axis. Anyway, here are some I found interesting:

CO2 concentrations increase linearly from 1958 to 2000, and then exponentially until 2100, with a doubling of CO2 (with respect to 1958) around 2062. (This data was output as a spreadsheet, and I got Excel to generate the graph, so it looks nicer than the others.)

Global cloud cover held steady until around 2070, when it decreased. I can’t figure out why this would be, as the water vapour content of the air should be increasing with warming – wouldn’t there be more clouds forming, not fewer?

Global precipitation increased, as I expected. This is an instance where I wish the maps would have worked, because it would be neat to look at how precipitation amount varied by location. I’ve been pretty interested in subtropical drought recently.

Albedo decreased about 1% – a nice example of the ice-albedo feedback (I presume) in action.

I also ran a simulation of the Last Glacial Maximum, from 21 thousand years ago. This run was much quicker than the first, as (since it was modeling a stable climate) it only simulated a decade, rather than 150 years. It took a few hours, and the same bugs in output were apparent. Time series graphs are less useful when studying stable conditions, but I found the albedo graph interesting:

Up a few percent from modern values, as expected.

It’s fairly expensive to purchase a licence for EdGCM, but they offer a free 30-day trial that I would recommend. I expect that it would run better on a Mac, as that’s what they do most of the software development and testing on.

Now that I’ve played around with EdGCM, I’m working on porting CESM to a Linux machine. There’s been trial and error at every step, but everything went pretty smoothly until I reached the “build” phase, which requires the user to edit some of the scripts to suit the local machine (step 3 of the user guide). I’m still pretty new to Linux, so I’m having trouble working out the correct program paths, environment variables, modules, and so on. Oh well, the more difficult it is to get working, the more exciting it is when success finally comes!

I am also doing lots of background reading, as my project for the summer will probably be “some sort of written something-or-other” about climate models. Steve has a great collection of books about climate change, and keeps handing me interesting things to read. I’m really enjoying The Warming Papers, edited by David Archer and Ray Pierrehumbert. The book is a collection of landmark papers in climate science, with commentary from the editors. It’s pretty neat to read all the great works – from Fourier to Broecker to Hansen – in one place. Next on my list is A Vast Machine by Paul Edwards, which I’m very excited about.

A quick question, unrelated to my work – why do thunderstorms tend to happen at night? Perhaps it’s just a fluke, but we’ve had a lot of them recently, none of which have been in the daytime. Thoughts?

Quality, Transparency, and Rigour

The Intergovernmental Panel on Climate Change (IPCC) reports are likely the most cited documents on the subject of global warming. The organization, established by the United Nations, doesn’t do any original research – it simply summarizes the massive amount of scientific literature on the topic. Their reports, written and reviewed by volunteer scientists, and published approximately every six years, are a “one-stop shop” for credible information about climate change. When you have a question about climate science, it’s far easier to find the relevant section of the IPCC than it is to wade through thousands of results on Google Scholar.

The main problem with the IPCC, in my opinion, is that their reports are out of date as soon as they’re published, and then everyone has to wait another six years or so for the next version, which is subsequently out of date, and so on. Additionally, because there are so many authors, reviewers, and stakeholders involved in the IPCC, the reports come to reflect the lowest-common-denominator scientific understanding, rather than the median opinion of experts. In particular, government officials oversee the writing and reviewing of the Summary for Policymakers, to make sure that it’s relevant and clear. However, some governments are beginning to abuse their power in this process. The late Stephen Schneider, in his 2009 book Science as a Contact Sport, recounts his experiences with government representatives who absolutely refuse to allow certain conclusions to be published in the IPCC, regardless of their scientific backing.

The result is that the IPCC reports frequently underestimate the severity of climate change. For example, in the most recent report, the worst-case estimate of sea level rise by the end of this century was 0.59 m. Since then, scientists have revised this estimate to 1.9 m, but it won’t show up in the report until the next edition comes out around 2014.

Another example concerns Arctic sea ice: the worst-case scenario from the IPCC was an ice-free Arctic in the summer beginning around 2100. These estimates have come down so rapidly that there’s an outside chance the summer sea ice could be gone before the next IPCC report has a chance to correct it (presentation by Dr. David Barber, media coverage available here). It will more likely disappear around 2035, but that’s still a drastic change from what the IPCC said.

Despite this conservative stance, there are still some who think the IPCC is alarmist (this is usually paired with something about a New World Order and/or socialists using a carbon tax to take over the world). Naturally, the IPCC has become a favourite target of climate change deniers, who wish to obscure the reality of human-caused global warming. Last year, they claimed to have found all kinds of errors in the latest report, somehow proving that global warming wasn’t happening. In fact, most of these so-called “errors” were nothing of the sort, and the worse of the two real mistakes in the report involved a typo regarding which year certain glaciers were expected to disappear. Not bad, for a three-thousand-page document, but it created quite the media firestorm. Apparently scientists are expected to have 100% accuracy at all times, or else they are frauds.

Just a few weeks ago, the IPCC made some changes to their policies in response to these events. Their press release about the new policies featured the phrase “Boost Quality, Transparency and Rigour” in the title.

No, no, no. That’s not what the IPCC needs. These are very admirable goals, but they’re doing just fine as it is. Actions to “further minimize any possibility of errors in future reports” should not be their top priority. Further extending the review process will only further delay the publication of each report (making them even more out of date) and further enhance their lowest-common-denominator position. When you have an error rate on the order of 0.67 errors/1000 pages, should you spend your energy getting that all the way down to zero (a virtually impossible task) or on the real issues that need to be addressed?

I think the IPCC should adopt a continually-updating online version of their report. This would solve their chronic problem of being out of date, as well as help the organization adapt to the increasing role of the Internet in our world. Any future errors the deniers liked to yell about would be fixed immediately. Governments would be forming policies based on the best available evidence from today, not a decade ago. Everything would still be in one place, and version control would allow transparency to remain high.

The IPCC should also make it more clear when their estimates are too conservative. When a single sentence that didn’t even make it into the summary is shown to overestimate the problem, the climate science community ties itself up in knots trying to correct its tattered image. But prominent conclusions that underestimate the problem go unacknowledged for decades. If it were the other way around, can you imagine the field day deniers would have?

Luckily, the changes made to IPCC policy are not all aimed at appeasing the bullies. A long-overdue communications plan is in development: a rapid response team and Senior Communications Manager will develop formal strategies for public education and outreach. Hopefully, this will counteract the false claims and defamation the IPCC has been subject to since its creation.

Another new plan is to create an Executive Committee, composed of the Chair, Vice Chairs, Working Group Co-Chairs, and advisory members. This will “strengthen coordination and management of the IPCC” and allow for actions to be taken between reports, such as communication and responding to possible errors. A more structured administration will probably be helpful, given that the only people in the organization currently getting paid for their work are the office staff (even the Chair doesn’t make a cent). Coordinating overworked scientists who volunteer for a scientific undertaking that demands 100% accuracy can’t be an easy task.

Will the IPCC continue to be the best available source of credible information on climate change? Will its structure of endless review remain feasible in a world dominated by instant news? Should we continue to grant our governments control over the contents of scientific reports concerning an issue that they desperately want to avoid? Should we continue to play to the wants and needs of bullies? Or should we let scientists speak for themselves?

Tornadoes and Climate Change

Cross-posted from NextGen Journal

It has been a bad season for tornadoes in the United States. In fact, this April shattered the previous record for the most tornadoes ever. Even though the count isn’t finalized yet, nobody doubts that it will come out on top:

In a warming world, many questions are common, and quite reasonable. Is this a sign of climate change? Will we experience more, or stronger, tornadoes as the planet warms further?

In fact, these are very difficult questions to answer. First of all, attributing a specific weather event, or even a series of weather events, to a change in the climate is extremely difficult. Scientists can do statistical analysis to estimate the probability of the event with and without the extra energy available in a warming world, but this kind of study takes years. Even so, nobody can say for certain whether an event wasn’t just a fluke. The recent tornadoes very well might have been caused by climate change, but they also might have happened anyway.

Will tornadoes become more common in the future, as global warming progresses? Tornado formation is complicated, and forecasting them requires an awful lot of calculations. Many processes in the climate system are this way, so scientists simulate them using computer models, which can do detailed calculations at an increasingly impressive speed.

However, individual tornadoes are relatively small compared to other kinds of storms, such as hurricanes or regular rainstorms. They are, in fact, smaller than a single grid cell in the highest-resolution climate models around today. Therefore, it’s just not possible to directly project them using mathematical models.

However, we can project the conditions necessary for tornadoes to form. They don’t always lead to a tornado, but they make one more likely. Two main factors exist: high wind shear and high convective available potential energy (CAPE). Climate change is making the atmosphere warmer, and increasing specific humidity (but not relative humidity): both of these contribute to CAPE, so that factor will increase the likelihood of conditions favourable to tornadoes. However, climate change warms the poles faster than the equator, which will decrease the temperature difference between them, subsequently lowering wind shear. That will make tornadoes less likely (Diffenbaugh et al., 2008). Which factor will win out? Is there another factor involved that climate change could impact? Will we get more tornadoes in some areas and fewer in others? Will we get weaker tornadoes or stronger tornadoes? It’s very difficult to tell.

In 2007, NASA scientists used a climate model to project changes in severe storms, including tornadoes. (Remember, even though an individual tornado can't be represented in a model, the conditions likely to cause a tornado can.) They predicted that the future will bring fewer storms overall, but that the ones that do form will be stronger. A plausible answer to the question, although not a very comforting one.

With uncertain knowledge, how should we approach this issue? Should we focus on the comforting possibility that the devastation in the United States might have nothing to do with our species’ actions? Or should we acknowledge that we might bear responsibility? Dr. Kevin Trenberth, a top climate scientist at the National Center for Atmospheric Research (NCAR), thinks that ignoring this possibility until it’s proven is a bad idea. “It’s irresponsible not to mention climate change,” he writes.

Beautiful Things

This is what the last few days have taught me: even if the code for climate models can seem dense and confusing, the output is absolutely amazing.

Late yesterday I discovered a page of plots and animations from the Canadian Centre for Climate Modelling and Analysis. The most recent coupled global model represented on that page is CGCM3, so I looked at those animations. I noticed something very interesting: the North Atlantic, independent of the emissions scenario, was projected to cool slightly, while the world around it warmed up. Here is an example, from the A1B scenario. Don't worry if the animation is already at the end; it will loop:

It turns out that this slight cooling is due to the North Atlantic circulation slowing down, as is very likely to happen when large additions of freshwater change the salinity and density of the ocean (IPCC AR4 WG1, FAQ 10.2). This freshwater could come from either increased precipitation due to climate change, or meltwater from the Arctic ending up in the North Atlantic. Of course, we hear about this all the time – the unlikely prospect of the Gulf Stream completely shutting down and Europe going into an ice age, as displayed in The Day After Tomorrow – but, until now, I hadn't realized that even a slight slowing of the circulation could cool the North Atlantic while leaving Europe unaffected.
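The salinity–density link behind this slowdown can be illustrated with a toy linear equation of state for seawater: density decreases with temperature and increases with salinity, so freshening the surface makes the water less dense and less able to sink. The coefficients below are typical textbook values, not taken from any particular model.

```python
# Toy linear equation of state:
#   rho = RHO0 * (1 - ALPHA*(T - T0) + BETA*(S - S0))
# Coefficients are representative values for illustration only.

RHO0 = 1027.0     # reference density, kg/m^3
ALPHA = 2.0e-4    # thermal expansion coefficient, 1/K
BETA = 7.6e-4     # haline contraction coefficient, 1/(g/kg)
T0, S0 = 10.0, 35.0  # reference temperature (C) and salinity (g/kg)

def density(temp, salinity):
    """Approximate seawater density (kg/m^3) near the surface."""
    return RHO0 * (1 - ALPHA * (temp - T0) + BETA * (salinity - S0))

# Freshening cold surface water by 1 g/kg lowers its density:
before = density(5.0, 35.0)
after = density(5.0, 34.0)
print(before - after)  # density drop of roughly 0.78 kg/m^3
```

Even a fraction of a kilogram per cubic metre matters, because the sinking that drives the overturning circulation depends on surface water being denser than the water below it.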

Then, in chapter 8 of the IPCC, I read something that surprised me: climate models generate their own El Niños and La Niñas. Scientists don't understand quite what triggers the circulation patterns leading to these phenomena, so how can they be in the models? It turns out that the modellers don't have to parameterize the ENSO cycles at all: the models reproduce global circulation from first principles so well that ENSO arises by itself, even though we don't know why. How cool is that? (Thanks to Jim Prall and Things Break for their help with this puzzle.)

Jim Prall also pointed me to an HD animation of output from the UK-Japan Climate Collaboration. I can’t seem to embed the QuickTime movie (WordPress strips out some of the necessary HTML tags) so you will have to click on the link to watch it. It’s pretty long – almost 17 minutes – as it represents an entire year of the world’s climate system, in one-hour time steps. It shows 1978-79, starting from observational data, but from there it simulates its own circulation.

I am struck by the beauty of this output – the swirling cyclonic precipitation, the steady prevailing westerlies and trade winds, the subtropical high pressure belt clear from the relative absence of cloud cover in these regions. You can see storms sprinkling across the Amazon Basin, monsoons pounding South Asia, and sea ice at both poles advancing and retreating with the seasons. Scientists didn’t explicitly tell their models to do any of this. It all appeared from first principles.

Take 17 minutes out of your day to watch it – it’s an amazing stress reliever, sort of like meditation. Or maybe that’s just me…

One more quick observation: most of you are probably familiar with the naming conventions of IPCC reports. The First Assessment Report was FAR, the second was SAR, and so on, until the acronyms started to repeat themselves, so the Fourth Assessment Report was AR4. They'll have to follow this alternate convention until the Eighth Assessment Report, which will be EAR. Maybe they'll stick with AR8, but that would be substantially less entertaining.