My Dishpan Climate Model

About two years ago, I discovered the concept of “dishpan climate models” through Iain Stewart’s Climate Wars documentary. The experiment is pretty simple: a large bowl filled with water (representing one hemisphere of the Earth) with a block of ice in the middle (a polar region) rotates on a turntable with a Bunsen burner (the Sun) heating it from one side. By injecting some dye into the water, you can see regular currents from heat transport and the Coriolis effect. Spencer Weart dug up some fascinating results from the days when dishpan climate models were the only sort available: researchers were able to simulate the Hadley circulation, Rossby waves, and the Gulf Stream.

I wanted to try this out for myself. Iain Stewart had made it look easy enough, and he got some really neat currents flowing. So one Saturday afternoon a friend and I got to work in my kitchen.

We started by figuring out how to rotate the bowl. My family doesn’t own a record player, so we couldn’t use that as a turntable. We tried to rig something up out of an old toy helicopter motor, but it wasn’t strong enough. Eventually we settled for a Lazy Susan which we spun by hand. It wasn’t a constant rotation, but it would have to do.

Then Antarctica, which consisted of a handful of ice cubes, kept floating away from the centre of the bowl. Soon the ice cubes melted and there were none left in the freezer. We filled a Ziploc bag with frozen corn, which wasn’t quite as buoyant, and used that for Antarctica instead.

Unsurprisingly, there was no Bunsen burner in my kitchen cupboard, so the Sun was represented by a paraffin candle that sort of smelled like cinnamon.

The only serious problem remaining was the dye. Every kind of dye we tried – food colouring, milk, food colouring mixed with milk – would completely homogenize with the water after just a few rotations, so all the currents were invisible.

The only liquid in my kitchen that wouldn’t mix with water was vegetable oil, so we dyed some of it blue and poured it in. This was a really really bad idea. The oil seemed to be attracted to the plastic bag keeping Antarctica together, so it all washed up onto the continent like some kind of awful petroleum spill in the Antarctic Ocean.

At that point, our climate model looked like this:

I would like to try this again some day, perhaps when I have access to a better laboratory than my kitchen. Any ideas for improvement (besides the obvious)? In particular, what kind of dye does work, and how does Antarctica stay together without being encased in plastic?

A Vast Machine

I read Paul Edwards’ A Vast Machine this summer while working with Steve Easterbrook. It was highly relevant to my research, but I would recommend it to anyone interested in climate change or mathematical modelling. Think The Discovery of Global Warming, but more specialized.

Much of the public seems to perceive observational data as superior to scientific models. The U.S. government has even attempted to mandate that research institutions focus on data above models, as if data were somehow more trustworthy. This is not the case. Data can have just as many problems as models, and when the two disagree, either could be wrong. For example, in a high school physics lab, I once calculated the acceleration due to gravity to be about 30 m/s². There was nothing wrong with Newton’s Laws of Motion – our instrumentation was just faulty.

Additionally, data and models are inextricably linked. In meteorology, GCMs produce forecasts from observational data, but that same data from surface stations was fed through a series of algorithms – a model for interpolation – to make it cover an entire region. “Without models, there are no data,” Edwards proclaims, and he makes a convincing case.

The majority of the book discussed the history of climate modelling, from the 1800s until today. There was Arrhenius, followed by Angstrom who seemed to discredit the entire greenhouse theory, which was not revived until Callendar came along in the 1930s with a better spectroscope. There was the question of the ice ages, and the mistaken perception that forcing from CO2 and forcing from orbital changes (the Milankovitch model) were mutually exclusive.

For decades, those who studied the atmosphere were split into three groups, with three different strategies. Forecasters needed speed in their predictions, so they used intuition and historical analogues rather than numerical methods. Theoretical meteorologists wanted to understand weather using physics, but numerical methods for solving differential equations didn’t exist yet, so nothing was actually calculated. Empiricists thought the system was too complex for any kind of theory, so they just described climate using statistics, and didn’t worry about large-scale explanations.

The three groups began to merge as the computer age dawned and large amounts of calculations became feasible. Punched cards came first, speeding up numerical forecasting considerably, but not enough to make it practical. The first numerical forecast on a digital computer, run on ENIAC, allowed simulations to run about as fast as real time (today the same model can run on a phone, and 24 hours are simulated in less than a second).

Before long, theoretical meteorologists “inherited” the field of climatology. Large research institutions, such as NCAR, formed in an attempt to pool computing resources. With incredibly simplistic models and primitive computers (2-3 KB storage), the physicists were able to generate simulations that looked somewhat like the real world: Hadley cells, trade winds, and so on.

There were three main fronts for progress in atmospheric modelling: better numerical methods, which decreased errors from approximation; higher resolution models with more gridpoints; and higher complexity, including more physical processes. As well as forecast GCMs, which are initialized with observations and run at maximum resolution for about a week of simulated time, scientists developed climate GCMs. These didn’t use any observational data at all; instead, the “spin-up” process fed known forcings into a static Earth, started the planet spinning, and waited until it settled down into a complex climate and circulation that looked a lot like the real world. There was still tension between empiricism and theory in models, as some factors were parameterized rather than being included in the spin-up.

The Cold War, despite what it did to international relations, brought tremendous benefits to atmospheric science. Much of our understanding of the atmosphere and the observation infrastructure traces back to this period, when governments were monitoring nuclear fallout, spying on enemy countries with satellites, and considering small-scale geoengineering as warfare.

I appreciated how up-to-date this book was, as it discussed AR4, the MSU “satellites show cooling!” controversy, Watts Up With That, and the Republican anti-science movement. In particular, Edwards emphasized the distinction between skepticism for scientific purposes and skepticism for political purposes. “Does this mean we should pay no attention to alternative explanations or stop checking the data?” he writes. “As a matter of science, no…As a matter of policy, yes.”

Another passage beautifully sums up the entire narrative: “Like data about the climate’s past, model predictions of its future shimmer. Climate knowledge is probabilistic. You will never get a single definitive picture, either of exactly how much the climate has already changed or of how much it will change in the future. What you will get, instead, is a range. What the range tells you is that “no change at all” is simply not in the cards, and that something closer to the high end of the range – a climate catastrophe – looks all the more likely as time goes on.”

Wrapping Up

My summer job as a research student of Steve Easterbrook is nearing an end. All of a sudden, I only have a few days left, and the weather is (thankfully) cooling down as autumn approaches. It feels like just a few weeks ago that this summer was beginning!

Over the past three months, I examined seven different GCMs from Canada, the United States, and Europe. Based on the source code, documentation, and correspondence with scientists, I uncovered the underlying architecture of each model. This was represented in a set of diagrams. You can view full-sized versions here:

The component bubbles are to scale (based on the size of the code base) within each model, but not between models. The size and complexity of each GCM varies greatly, as can be seen below. UVic is by far the least complex model – it is arguably closer to an EMIC than a full GCM.
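
For readers curious how “size of the code base” can be measured, here is a minimal sketch using standard Unix tools – the directory names, and the assumption that each component sits in its own mostly-Fortran subdirectory, are illustrative only, not the layout of any particular model:

# Rough line count per component directory of a model checkout (illustrative paths).
for dir in atm ocn ice lnd cpl; do
    lines=$(find "$dir" \( -name '*.f90' -o -name '*.F90' -o -name '*.F' \) -exec cat {} + | wc -l)
    echo "$dir: $lines lines of Fortran"
done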

I came across many insights while comparing GCM architectures, regarding how modular components are, how extensively the coupler is used, and how complexity is distributed between components. I wrote some of these observations up into the poster I presented last week to the computer science department. My references can be seen here.

A big thanks to the scientists who answered questions about their work developing GCMs: Gavin Schmidt (Model E); Michael Eby (UVic); Tim Johns (HadGEM3); Arnaud Caubel, Marie-Alice Foujols, and Anne Cozic (IPSL); and Gary Strand (CESM). Michael Eby was also instrumental in improving the diagram design.

Although the summer is nearly over, our research certainly isn’t. I have started writing a more in-depth paper that Steve and I plan to develop during the year. We are also hoping to present our work at the upcoming AGU Fall Meeting, if our abstract gets accepted. Beyond this project, we are also looking at a potential experiment to run on CESM.

I guess I am sort of a scientist now. The line between “student” and “scientist” is blurry. I am taking classes, but also writing papers. Where does one end and the other begin? Regardless of where I am on the spectrum, I think I’m moving in the right direction. If this is what Doing Science means – investigating whatever little path interests me – I’m certainly enjoying it.

Modularity

I’ve now taken a look at the code and structure of four different climate models: Model E, CESM, UVic ESCM, and the Met Office Unified Model (which contains all the Hadley models). I’m noticing all sorts of similarities and differences, many of which I didn’t expect.

For example, I didn’t anticipate any overlap in climate model components. I thought that every modelling group would build their own ocean, their own atmosphere, and so on, from scratch. In fact, what I think of as a “model” – a self-contained, independent piece of software – applies to components more accurately than it does to an Earth system model. The latter is more accurately described as a collection of models, each representing one piece of the climate system. Each modelling group has a different collection of models, but not every one of these models is unique to their lab.

Ocean models are a particularly good example. The Modular Ocean Model (MOM) is built by GFDL, but it’s also used in NASA’s Model E and the UVic Earth System Climate Model. Another popular ocean model is the Nucleus for European Modelling of the Ocean (NEMO, what a great acronym) which is used by the newer Hadley climate models, as well as the IPSL model from France (which is sitting on my desktop as my next project!)

Aside: Speaking of clever acronyms, I don’t know what the folks at NCAR were thinking when they created the Single Column Atmosphere Model. Really, how did they not see their mistake? And why haven’t Marc Morano et al latched onto this acronym and spread it all over the web by now?

In most cases, an Earth system model has a unique architecture to fit all the component models together – a different coupling process. However, with the rise of standard interfaces like the Earth System Modeling Framework, even couplers can be reused between modelling groups. For example, the Hadley Centre and IPSL both use the OASIS coupler.

There are benefits and drawbacks to the rising overlap and “modularity” of Earth system models. One could argue that it makes the models less independent. If they all agree closely, how much of that agreement is due to their physical grounding in reality, and how much is due to the fact that they all use a lot of the same code? However, modularity is clearly a more efficient process for model development. It allows larger communities of scientists from each sub-discipline of Earth system modelling to form, and – in the case of MOM and NEMO – make two or three really good ocean models, instead of a dozen mediocre ones. Concentrating our effort, and reducing unnecessary duplication of code, makes modularity an attractive strategy, if an imperfect one.

The least modular of all the Earth system models I’ve looked at is Model E. The documentation mentions different components for the atmosphere, sea ice, and so on, but these components aren’t separated into subdirectories, and the lines between them are blurry. Nearly all the Fortran files sit in the same directory, “model”, and some of them deal with two or more components. For example, how would you categorize a file that calculates surface-atmosphere fluxes? Even where Model E uses code from other institutions, such as the MOM ocean model, it’s usually adapted and integrated into their own files, rather than kept in a separate directory.

The most modular Earth system model is probably the Met Office Unified Model. They don’t appear to have adapted NEMO, CICE (the Los Alamos sea ice model) and OASIS at all – in fact, they’re not present in the code repository they gave us. I was a bit confused when I discovered that their “ocean” directory, left over from the years when they wrote their own ocean code, was now completely empty! Encapsulation to the point where a component model can be stored completely externally to the structural code was unexpected.

An interesting example of the challenges of modularity appears in sea ice. Do you create a separate, independent sea ice component, like CESM did? Do you consider it part of the ocean, like NEMO? Or do you lump in lake ice along with sea ice and subsequently allow the component to float between the surface and the ocean, like Model E?

The real world isn’t modular. There are no clear boundaries between components on the physical Earth. But then, there’s only one physical Earth, whereas there are many virtual Earths in the form of climate models, and limited resources for developing the code in each component. In this spectrum of interconnection and encapsulation, is one end or the other our best bet? Or is there a healthy balance somewhere in the middle?

Climate Models on Ubuntu

Part 1: Model E

I felt a bit over my head attempting to port CESM, so I asked a grad student, who had done his Master’s on climate modelling, for help. He looked at the documentation, scratched his head, and suggested I start with NASA’s Model E instead, because it was easier to install. And was it ever! We had it up and running within an hour or so. It was probably so much easier because Model E comes with gfortran support, while CESM only has scripts written for commercial compilers like Intel or PGI.

Strangely, when using Model E, no matter what dates the rundeck sets for the simulation start and end, the subsequently generated I file always has December 1, 1949 as the start date and December 2, 1949 as the end date. We edited the I files after they were created, which seemed to fix the problem, but it was still kind of weird.

I set up Model E to run a ten-year simulation with fixed atmospheric concentration (really, I just picked a rundeck at random) over the weekend. It took about 3 days to complete, so just over 7 hours per year of simulation time…not bad for a 32-bit desktop!

However, I’m having some weird problems with the output – after configuring the model to output files in NetCDF format and opening them in Panoply, only the file with all the sea ice variables worked. All the others either gave a blank map (array full of N/A’s) or threw errors when Panoply tried to read them. Perhaps the model isn’t enjoying having the I file edited?

Part 2: CESM

After exploring Model E, I felt like trying my hand at CESM again. Steve managed to port it onto his MacBook last year, and took detailed notes. Editing the scripts didn’t seem so ominous this time!

The CESM code can be downloaded using Subversion (instructions here) after a quick registration. Using the Ubuntu Software Center, I downloaded some necessary packages: libnetcdf-dev, mpich2, and torque-scheduler. I already had gfortran, which is sort of essential.
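
For reference, the same packages can be installed from the terminal in one line – a sketch assuming the package names above are still what the Ubuntu repositories use:

# Build prerequisites mentioned above (plus gfortran, in case it isn't already installed).
sudo apt-get install gfortran libnetcdf-dev mpich2 torque-scheduler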

I used the Porting via user defined machine files method to configure the model for my machine, using the Hadley scripts as a starting point. Variables for config_machines.xml are explained in Appendices D through H of the user’s guide (links in chapter 7). Mostly, you’re just pointing to folders where you want to store data and files. Here are a few exceptions:

  • DOUT_L_HTAR: I stuck with "TRUE", as that was the default.
  • CCSM_CPRNC: this tool already exists in the CESM source code, in /models/atm/cam/tools/cprnc.
  • BATCHQUERY and BATCHSUBMIT: the Hadley entry had “qstat” and “qsub”, respectively, so I Googled these terms to find out which batch submission software they referred to (Torque, which is freely available in the torque-scheduler package) and downloaded it so I could keep the commands the same!
  • GMAKE_J: this determines how many parallel tasks make uses during the build (its -j flag), and I wasn’t sure how many cores this machine had, so I just put “1”.
  • MAX_TASKS_PER_NODE: I chose "8", which the user’s guide had mentioned as an example.
  • MPISERIAL_SUPPORT: the default is “FALSE”.

The only file that I really needed to edit was Macros.<machine name>. The env_machopts.<machine name> file ended up being empty for me. I spent a while confused by the modules declarations, which turned out to refer to the Environment Modules software. Once I realized that, for this software to be helpful, I would have to write five or six modulefiles in a language I didn’t know, I decided that it probably wasn’t worth the effort, and took these declarations out. I left mkbatch.<machine name> alone, except for the first line which sets the machine, and then turned my attention to Macros.

“Getting this to work will be an iterative process”, the user’s guide says, and it certainly was (and still is). It’s never a good sign when the installation guide reminds you to be patient! Here is the sequence of each iteration (collected into a single terminal session after the list):

  1. Edit the Macros file as best I can.
  2. Open up the terminal, cd to cesm1_0/scripts, and create a new case as follows: ./create_newcase -case test -res f19_g16 -compset X -mach <machine name>
  3. If this works, cd to test, and run configure: ./configure -case
  4. If all is well, try to build the case: ./test.<machine name>.build
  5. See where it fails and read the build log file it refers to for ideas as to what went wrong. Search on Google for what certain errors mean. Do some other work for a while, to let the ideas simmer.
  6. Set up for the next case: ./test.<machine name>.clean_build , cd .., and rm -rf test. This clears out old files so you can safely build a new case with the same name.
  7. See step 1.
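
Collected into a single terminal session, one iteration looks roughly like this (<machine name> is whatever you called your entry in the machine files):

# One pass through the build loop described above.
cd cesm1_0/scripts
./create_newcase -case test -res f19_g16 -compset X -mach <machine name>
cd test
./configure -case
./test.<machine name>.build
# ...if this fails, read the build log it points to, fix the Macros file...
./test.<machine name>.clean_build
cd ..
rm -rf test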

I wasn’t really sure what the program paths were, as I couldn’t find a nicely contained folder for each one (like Windows has in “Program Files”), but I soon stumbled upon a nice little trick: look up the package on Ubuntu Package Manager, and click on “list of files” under the Download section. That should tell you what path the program used as its root.
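
If you would rather stay in the terminal, dpkg can give you the same list of files for any package that is already installed:

# List every file installed by a package, which reveals the path it uses as its root.
dpkg -L libnetcdf-dev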

I also discovered that setting FC and CC to gfortran and gcc, respectively, in the Macros file will throw errors. Instead, leave the variables as mpif90 and mpicc, which are linked to the GNU compilers. For example, when I type mpif90 in the terminal, the result is gfortran: no input files, just as if I had typed gfortran. For some reason, though, the errors go away.
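
You can also ask the wrappers directly what they are hiding – a quick check, assuming the MPICH2 wrappers (Open MPI uses -showme instead of -show):

# Print the underlying compile command without compiling anything.
mpif90 -show
mpicc -show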

As soon as I made it past building the mct and pio libraries, the build logs for each component (e.g. atm, ice) started saying gmake: command not found. This is one of the pitfalls of Ubuntu: GNU make is installed simply as make, with no gmake alias, while many other Unix-based systems provide both names. So I needed to find and edit all the scripts that called gmake, or generated other scripts that called it, and so on. “There must be a way to automate this,” I thought, and from this article I found out how. In the terminal, cd to the CESM source code folder, and type the following:

grep -lr -e 'gmake' * | xargs sed -i 's/gmake/make/g'

You should only have to do this once. It’s case sensitive, so it will leave the XML variable GMAKE_J alone.
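
To double-check that nothing was missed, re-run the grep on its own afterwards – since the search is case sensitive, GMAKE_J won’t show up, and ideally nothing else will either:

# Lists any files still containing a lowercase 'gmake'; expect no output.
grep -lr -e 'gmake' *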

Then I turned my attention to compiler flags, which Steve chronicled quite well in his notes (see link above). I made most of the same changes that he did, except I didn’t need to change -DLINUX to -DDarwin. However, I needed some more compiler flags still. In the terminal, man gfortran brings up a list of all the options for gfortran, which was helpful.

The ccsm build log had hundreds of undefined reference errors as soon as it started to compile Fortran. The way I understand it, many of the Fortran files reference each other’s subroutines, but gfortran appends underscores to external symbol names, and if different parts of the build don’t agree on that convention, the linker can’t match the references and reports them as undefined. You can suppress the underscores with the flag -fno-underscoring.
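
A quick way to see the underscore behaviour for yourself – the file and subroutine names here are made up purely for the demonstration:

# Compile a trivial Fortran subroutine and inspect the symbol it produces.
cat > demo_sub.f90 << 'EOF'
subroutine demo_sub()
  print *, 'hello'
end subroutine demo_sub
EOF
gfortran -c demo_sub.f90
nm demo_sub.o | grep -i demo          # symbol appears as demo_sub_ (trailing underscore)
gfortran -c -fno-underscoring demo_sub.f90
nm demo_sub.o | grep -i demo          # now it appears as demo_sub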

Now I am stuck on a new error. It looks like the ccsm script is almost reaching the end, as it’s using ld, the gcc linking mechanism, to tie all the files together. Then the build log says:

/usr/bin/ld: seq_domain_mct.o(.debug_info+0x1c32): unresolvable R_386_32 relocation against symbol 'mpi_fortran_argv_null'
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status

I’m having trouble finding articles on the internet about similar errors, and the gcc and ld manpages are so long that trying every compiler flag isn’t really an option. Any ideas?

Update: Fixed it! In scripts/ccsm_utils/Build/Makefile, I changed LD := $(F90) to LD := gcc -shared. The build was finally successful! Now off to try and run it…

The good thing is that, since I re-started this project a few days ago, I haven’t spent very long stuck on any one error. I’m constantly having problems, but I move through them pretty quickly! In the meantime, I’m learning a lot about the model and how everything fits together during installation. I’ve also come a long way with Linux programming in general. Considering that when I first installed Ubuntu a few months ago I sheepishly called my friend to ask where to find the command line, I’m quite proud of my progress!

I hope this article will help future Ubuntu users install CESM, as it seems to have a few quirks that even Mac OS X doesn’t experience (e.g. make vs gmake). For the rest of you, apologies if I have bored you to tears!

Models and Books

Working as a summer student continues to be rewarding. I get to spend all day reading interesting things and playing with scientific software. What a great deal!

Over the weekend, I ran the “Global Warming_01” simulation from EdGCM, which is an old climate model from NASA with a graphical user interface. Strangely, they don’t support Linux, as their target audience is educators – I doubt there are very many high school teachers running open-source operating systems! So I ran the Windows version on my laptop, and it took about 36 hours. It all felt very authentic.

Unfortunately, as their Windows 7 support is fairly new, there were some bugs in the output. It refused to give me any maps at all! The terminal popped up for a few seconds, but it didn’t output any files. All I could get were zonal averages (and then only from January-March) and time series. Also, for some reason, none of the time series graphs had units on the Y axis. Anyway, here are some I found interesting:

CO2 concentrations increase linearly from 1958 to 2000, and then exponentially until 2100, with a doubling of CO2 (with respect to 1958) around 2062. (This data was output as a spreadsheet, and I got Excel to generate the graph, so it looks nicer than the others.)

Global cloud cover held steady until around 2070, when it decreased. I can’t figure out why this would be, as the water vapour content of the air should be increasing with warming – wouldn’t there be more clouds forming, not fewer?

Global precipitation increased, as I expected. This is an instance where I wish the maps would have worked, because it would be neat to look at how precipitation amount varied by location. I’ve been pretty interested in subtropical drought recently.

Albedo decreased about 1% – a nice example of the ice-albedo feedback (I presume) in action.

I also ran a simulation of the Last Glacial Maximum, from 21 thousand years ago. This run was much quicker than the first, as (since it was modelling a stable climate) it only simulated a decade, rather than 150 years. It took a few hours, and the same bugs in output were apparent. Time series graphs are less useful when studying stable conditions, but I found the albedo graph interesting:

Up a few percent from modern values, as expected.

It’s fairly expensive to purchase a licence for EdGCM, but they offer a free 30-day trial that I would recommend. I expect that it would run better on a Mac, as that’s what they do most of the software development and testing on.

Now that I’ve played around with EdGCM, I’m working on porting CESM to a Linux machine. There’s been trial and error at every step, but everything went pretty smoothly until I reached the “build” phase, which requires the user to edit some of the scripts to suit the local machine (step 3 of the user guide). I’m still pretty new to Linux, so I’m having trouble working out the correct program paths, environment variables, modules, and so on. Oh well, the more difficult it is to get working, the more exciting it is when success finally comes!

I am also doing lots of background reading, as my project for the summer will probably be “some sort of written something-or-other” about climate models. Steve has a great collection of books about climate change, and keeps handing me interesting things to read. I’m really enjoying The Warming Papers, edited by David Archer and Ray Pierrehumbert. The book is a collection of landmark papers in climate science, with commentary from the editors. It’s pretty neat to read all the great works – from Fourier to Broecker to Hansen – in one place. Next on my list is A Vast Machine by Paul Edwards, which I’m very excited about.

A quick question, unrelated to my work – why do thunderstorms tend to happen at night? Perhaps it’s just a fluke, but we’ve had a lot of them recently, none of which have been in the daytime. Thoughts?

Tornadoes and Climate Change

Cross-posted from NextGen Journal

It has been a bad season for tornadoes in the United States. In fact, this April shattered the previous record for the most tornadoes ever recorded in a single month. Even though the count isn’t finalized yet, nobody doubts that it will come out on top:

In a warming world, questions like these are common, and quite reasonable: Is this a sign of climate change? Will we experience more, or stronger, tornadoes as the planet warms further?

In fact, these are very difficult questions to answer. First of all, attributing a specific weather event, or even a series of weather events, to a change in the climate is extremely difficult. Scientists can do statistical analysis to estimate the probability of the event with and without the extra energy available in a warming world, but this kind of study takes years. Even so, nobody can say for certain whether an event wasn’t just a fluke. The recent tornadoes very well might have been caused by climate change, but they also might have happened anyway.

Will tornadoes become more common in the future, as global warming progresses? Tornado formation is complicated, and forecasting them requires an awful lot of calculations. Many processes in the climate system are this way, so scientists simulate them using computer models, which can do detailed calculations at an increasingly impressive speed.

However, individual tornadoes are relatively small compared to other kinds of storms, such as hurricanes or regular rainstorms. They are, in fact, smaller than a single grid cell in the highest-resolution climate models around today – a grid cell spans tens of kilometres, while even a large tornado is at most a couple of kilometres across. Therefore, it’s just not possible to simulate individual tornadoes directly in these models.

However, we can project the conditions necessary for tornadoes to form. They don’t always lead to a tornado, but they make one more likely. Two main factors exist: high wind shear and high convective available potential energy (CAPE). Climate change is making the atmosphere warmer, and increasing specific humidity (but not relative humidity): both of these contribute to CAPE, so that factor will increase the likelihood of conditions favourable to tornadoes. However, climate change warms the poles faster than the equator, which will decrease the temperature difference between them, subsequently lowering wind shear. That will make tornadoes less likely (Diffenbaugh et al, 2008). Which factor will win out? Is there another factor involved that climate change could impact? Will we get more tornadoes in some areas and fewer in others? Will we get weaker tornadoes or stronger tornadoes? It’s very difficult to tell.
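
For reference, CAPE has a standard textbook definition: it is the buoyant energy available to a rising parcel of air, integrated between the level of free convection (LFC) and the equilibrium level (EL). In LaTeX form:

\mathrm{CAPE} = \int_{z_{\mathrm{LFC}}}^{z_{\mathrm{EL}}} g \, \frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}} \, dz

Warmer surface air and higher specific humidity raise the parcel’s virtual temperature relative to the environment, which is why both push CAPE upward.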

In 2007, NASA scientists used a climate model to project changes in severe storms, including tornadoes. (Remember, even though an individual tornado can’t be represented on a model, the conditions likely to cause a tornado can.) They predicted that the future will bring fewer storms overall, but that the ones that do form will be stronger. A plausible solution to the question, although not a very comforting one.

With uncertain knowledge, how should we approach this issue? Should we focus on the comforting possibility that the devastation in the United States might have nothing to do with our species’ actions? Or should we acknowledge that we might bear responsibility? Dr. Kevin Trenberth, a top climate scientist at the National Center for Atmospheric Research (NCAR), thinks that ignoring this possibility until it’s proven is a bad idea. “It’s irresponsible not to mention climate change,” he writes.

Beautiful Things

This is what the last few days have taught me: even if the code for climate models can seem dense and confusing, the output is absolutely amazing.

Late yesterday I discovered a page of plots and animations from the Canadian Centre for Climate Modelling and Analysis. The most recent coupled global model represented on that page is CGCM3, so I looked at those animations. I noticed something very interesting: the North Atlantic, independent of the emissions scenario, was projected to cool slightly, while the world around it warmed up. Here is an example, from the A1B scenario. Don’t worry if the animation is already at the end, it will loop:

It turns out that this slight cooling is due to the North Atlantic circulation slowing down, as is very likely to happen from large additions of freshwater that change the salinity and density of the ocean (IPCC AR4 WG1, FAQ 10.2). This freshwater could come from either increased precipitation due to climate change, or meltwater from the Arctic ending up in the North Atlantic. Of course, we hear about this all the time – the unlikely prospect of the Gulf Stream completely shutting down and Europe going into an ice age, as displayed in The Day After Tomorrow – but, until now, I hadn’t realized that even a slight slowing of the circulation could cool the North Atlantic, while Europe remained unaffected.

Then, in chapter 8 of the IPCC, I read something that surprised me: climate models generate their own El Niños and La Niñas. Scientists don’t understand quite what triggers the circulation patterns leading to these phenomena, so how can they be in the models? It turns out that the modellers don’t have to parameterize the ENSO cycles at all: they have done such a good job of reproducing global circulation from first principles that ENSO arises by itself, even though we don’t know why. How cool is that? (Thanks to Jim Prall and Things Break for their help with this puzzle.)

Jim Prall also pointed me to an HD animation of output from the UK-Japan Climate Collaboration. I can’t seem to embed the QuickTime movie (WordPress strips out some of the necessary HTML tags) so you will have to click on the link to watch it. It’s pretty long – almost 17 minutes – as it represents an entire year of the world’s climate system, in one-hour time steps. It shows 1978-79, starting from observational data, but from there it simulates its own circulation.

I am struck by the beauty of this output – the swirling cyclonic precipitation, the steady prevailing westerlies and trade winds, the subtropical high pressure belt clear from the relative absence of cloud cover in these regions. You can see storms sprinkling across the Amazon Basin, monsoons pounding South Asia, and sea ice at both poles advancing and retreating with the seasons. Scientists didn’t explicitly tell their models to do any of this. It all appeared from first principles.

Take 17 minutes out of your day to watch it – it’s an amazing stress reliever, sort of like meditation. Or maybe that’s just me…

One more quick observation: most of you are probably familiar with the naming conventions of IPCC reports. The First Assessment Report was FAR, the second was SAR, and so on, until the acronyms started to repeat themselves, so the Fourth Assessment Report was AR4. They’ll have to follow this alternate convention until the Eighth Assessment Report, which will be EAR. Maybe they’ll stick with AR8, but that would be substantially less entertaining.

Moments of Revelation

Dr Iain Stewart holding a rock

Over the past few days I’ve worked my way through the three-part BBC series, Climate Wars, hosted by Dr Iain Stewart, a geology professor with a very cool Scottish accent. An excerpt from this series was featured in one of Peter Sinclair’s videos, which looked quite fascinating, and anything Peter refers to as “brilliant” is probably worth watching.

Worth watching indeed. I’d recommend this series to anyone and everyone. It’s basic enough for someone with little to no knowledge of this issue, yet presented in such a compelling way that the most experienced climate scientist wouldn’t get bored.

One of the film’s major strong points was simply the way it was organized. Dr Stewart traced the history of both the science and the politics around climate change, splitting it into three parts:

Part one: Scientists had known for decades that anthropogenic greenhouse gases could cause warming of the Earth, but now, following thirty years of aerosol-induced cooling, global warming was starting to show; almost every year was record-breaking. James Hansen was the first to “stick his neck out” – testifying to Congress that he believed anthropogenic climate change was underway. He later claimed that he had weighed the risks of being wrong and looking stupid, versus doing nothing and not telling the world about such a huge potential threat. Sort of like an early Greg Craven, I suppose. I found this part to be the least interesting of the three. It also began strangely – Stewart mentioned a letter to the US president, signed by top scientists, which warned of an impending ice age. I’d never heard about this before. Does anyone else know more about this letter?

Part two: The skeptics fought back as strongly as they could, questioning absolutely every scientific claim regarding global warming. I found this to be absolutely fascinating; it solidified a lot of issues in my mind and helped to unify my knowledge on the topic. Stewart went through the research which showed that the Earth was warming as a result of human activities – and showed how all the yelling from skeptics helped to make the theory even stronger. He also “infiltrated the walls” of the Heartland Institute’s International Conference on Climate Change, which I found to be absolutely hilarious. They had a comedian making bad jokes about how New York could handle some global warming, Monckton and Singer making their usual accusations of fraud (Stewart remarked that “when these become the talking points, then I know that the scientific debate is really over”), and Patrick Michaels publicly admitting “Yes, the second half of the century did show some warming, and it was the result of human activities…and now you all hate me for saying that…” Dr Stewart explained that, even though the controversy doesn’t really exist anymore in the scientific literature, the claims of skeptics still live on in the popular media and on the Internet. Instead of fighting a scientific battle, they’re now doing public relations.

Part three: Scientists knew that humans were causing global warming, but how bad would it be? After the brilliance of the second part, I wasn’t expecting to enjoy the last segment quite as much…but I was proven very, very wrong. It both terrified and fascinated me. Terrified because it discussed the Younger Dryas, something I hadn’t really heard of before, where it warmed about 5 °C in just a few years. So far beyond anything I thought was possible. When this research was released, the idea that the climate was steady and slow-moving could no longer be embraced.

And then it fascinated me because it was the first time that climate models seemed really, really cool.

The idea of modelling something – anything – on the computer is somewhat unremarkable to me. I am of the generation that literally grew up using computers; I vaguely remember playing astronaut addition games on Windows 3.1 when I was four. I have seen so many things digitized; the prospect of modelling climate is obviously immense, but it doesn’t amaze me.

But then Dr Stewart made a “dishpan climate model” with a spinning bowl, water with some dye, an ice-cube Antarctica, and a Bunsen-burner Sun. He set it all up and before long…you could actually see regular patterns in the water’s movements that looked like the prevailing winds. It was so, so amazing. Even more amazing than a complex model on the computer because it was real and tangible and you could touch it. Like a little Earth on the countertop. All of the complex processes of our climate eventually come back to these simple factors. (I want to make one myself. But I don’t have one of those spinny things.)

And then I started wondering what computer modelling would be like, and remembering how much I loved physics last year, how I liked to put four or five algebraic equations together and solve it all in one complicated step to reduce error. Manipulating variables and shifting things around. Like a little puzzle. I was remembering how much I love hard math problems, because you actually have to use your brain, try everything you can think of, stretch the limits of your logic…and you feel such a sense of accomplishment when you finish that all the work is worth it.

Is a climate model just a really large and complex collection of equations and puzzles that have to fit together in the right way? It would be pretty cool if it was. I knew that studying climate change required a lot of math, but this is the first time that I can see a clear path showing how an issue I care deeply about could coincide with aptitudes I enjoy.