How much is most?

A growing body of research shows that humans are likely causing more than 100% of global warming: without our influence on the climate, the planet would actually be cooling slightly.

In 2007, the Intergovernmental Panel on Climate Change published its fourth assessment report, internationally regarded as the most credible summary of climate science to date. It concluded that “most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”.

A clear question remains: How much is “most”? 51%? 75%? 99%? At the time that the IPCC report was written, the answer was unclear. However, a new frontier of climate research has emerged since, and scientists are working hard to quantify the answer to this question.

I recently attended the 2011 American Geophysical Union Fall Meeting, a conference of over 20 000 scientists, many of whom study the climate system. This new area of research was a hot topic of discussion at AGU, and a phrase that came up many times was “more than 100%”.

That’s right, humans are probably causing more than 100% of observed global warming. That means our warming influence is being partially offset by natural cooling factors. If we had never started burning fossil fuels, the world would be cooling slightly.

In the long term, cycles in the Earth’s orbit indicate that, without human activity, we would be very slowly descending into a new ice age. There are other short-term cooling influences, though. Large volcanic eruptions, such as Mount Pinatubo in 1991, have thrown sulphate aerosols into the upper atmosphere, where they block a small amount of sunlight. The sun, particularly in the last few years, has been less intense than usual, due to the 11-year sunspot cycle. We have also experienced several strong La Niña events in the Pacific Ocean, which move heat out of the atmosphere and into the ocean.

However, all of these cooling influences pale in comparison to the strength of the human-caused warming influences. The climate change communication project Skeptical Science recently summarized six scientific studies in this graphic:

Most of the studies estimated that humans caused over 100% of the warming since 1950, and all six put the number over 98%. Additionally, most of the studies find natural influences to be in the direction of cooling, and all six show that number to be close to zero.
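
To see how a contribution of more than 100% is arithmetically possible, here is a toy calculation in Python. The numbers are invented purely for illustration and aren't taken from any of the six studies:

```python
# Invented numbers to illustrate how an anthropogenic contribution can exceed 100%.
observed_warming = 0.65       # °C of observed warming since 1950
human_contribution = 0.70     # °C of warming attributable to human influences
natural_contribution = observed_warming - human_contribution   # -0.05 °C (net cooling)

human_fraction = human_contribution / observed_warming
print(f"Human contribution: {human_fraction:.0%} of observed warming")
print(f"Natural contribution: {natural_contribution:+.2f} °C")
```

Because the natural contribution is slightly negative, the human contribution has to be slightly larger than the observed warming itself – about 108% in this made-up example.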

If you are interested in the methodologies and uncertainty ranges of these six studies, Skeptical Science goes into more detail, and also provides links to the original journal articles.

To summarize, the perception that humans are accelerating a natural process of warming is false. We have created this problem entirely on our own. Luckily, that means we have the power to stop the problem in its tracks. We are in control, and we choose what happens in the future.

The Software Architecture of Global Climate Models

Last week at AGU, I presented the results of the project Steve Easterbrook and I worked on this summer. Click the thumbnail on the left for a full size PDF. Also, you can download the updated versions of our software diagrams:

  • COSMOS (COmmunity earth System MOdelS) 1.2.1
  • Model E: Oct. 11, 2011 snapshot
  • HadGEM3 (Hadley Centre Global Environmental Model, version 3): August 2009 snapshot
  • CESM (Community Earth System Model) 1.0.3
  • GFDL (Geophysical Fluid Dynamics Laboratory), Climate Model 2.1 coupled to MOM (Modular Ocean Model) 4.1
  • IPSL (Institut Pierre Simon Laplace), Climate Model 5A
  • UVic ESCM (Earth System Climate Model) 2.9

And, since the most important part of poster sessions is the spiel you give and the conversations you have, here is my spiel:

Steve and I realized that while comparisons of the output of global climate models are very common (for example, CMIP5: Coupled Model Intercomparison Project Phase 5), nobody has really sat down and compared their software structure. We tried to fill this gap in research with a qualitative comparison study of seven models. Six of them are GCMs (General Circulation Models – the most complex climate simulations) in the CMIP5 ensemble; one, the UVic model, is not in CMIP because it’s really more of an EMIC (Earth System Model of Intermediate Complexity – simpler than a GCM). However, it’s one of the most complex EMICs, and contains a full GCM ocean, so we thought it would present an interesting boundary case. (Also, the code was easier to get access to than the corresponding GCM from Environment Canada. When we write this up into a paper we will probably use that model instead.)

I created a diagram of each model’s architecture. The area of each bubble is roughly proportional to the lines of code in that component, which we think is a pretty good proxy for complexity – a more complex model will have more subroutines and functions than a simple one. The bubbles are to scale within each model, but not between models, as the total lines of code in a model varies by about a factor of 10. A bit difficult to fit on a poster and still make everything readable! Fluxes from each component are represented by coloured arrows (the same colour as the bubble), and often pass through the coupler before reaching another component.
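
For anyone wondering how the bubble sizes were computed: since area is proportional to lines of code, the radius scales with the square root of the line count. Here is a small Python sketch of that scaling; the component names and line counts are made up for illustration, not the real figures from any of the seven models:

```python
import math

# Hypothetical line counts for the components of one model (not real figures).
lines_of_code = {"atmosphere": 380_000, "ocean": 150_000,
                 "land": 90_000, "sea ice": 40_000, "coupler": 25_000}

# Area proportional to lines of code means radius proportional to sqrt(LOC).
# Scale so the largest component gets a fixed radius on the page.
max_radius_cm = 4.0
largest = max(lines_of_code.values())
for component, loc in lines_of_code.items():
    radius = max_radius_cm * math.sqrt(loc / largest)
    print(f"{component:>10}: {loc:>7} lines -> radius {radius:.2f} cm")
```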

We examined the amount of encapsulation of components, which varies widely between models. CESM, on one end of the spectrum, isolates every component completely, particularly in the directory structure. Model E, on the other hand, places nearly all of its files in the same directory, and has a much higher level of integration between components. This is more difficult for a user to read, but it has benefits for data transfer.

While component encapsulation is attractive from a software engineering perspective, it poses problems because the real world is not so encapsulated. Perhaps the best example of this is sea ice. It floats on the ocean, its extent changing continuously. It breaks up into separate chunks and can form slush with the seawater. How do you split up ocean code and ice code? CESM keeps the two components completely separate, with a transient boundary between them. IPSL represents ice as an encapsulated sub-component of their ocean model, NEMO (Nucleus for European Modeling of the Ocean). COSMOS integrates both ocean and ice code together in MPI-OM (Max Planck Institute Ocean Model).

GFDL took a completely different, and rather innovative, approach. Sea ice in the GFDL model is an interface, a layer over the ocean with boolean flags in each cell indicating whether or not ice is present. All fluxes to and from the ocean must pass through the “sea ice”, even if they’re at the equator and the interface is empty.
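
To make the idea concrete, here is a rough sketch in Python (the real model is Fortran) of what such a pass-through layer might look like. The class, method names, and placeholder physics are my own invention, not GFDL's actual code:

```python
class SeaIceInterface:
    """A layer covering every ocean cell, with a boolean flag per cell saying
    whether ice is present. All fluxes between atmosphere and ocean are routed
    through this layer, even where the flag is False and nothing happens."""

    def __init__(self, n_cells):
        self.ice_present = [False] * n_cells   # one flag per grid cell
        self.thickness = [0.0] * n_cells       # metres of ice in each cell

    def pass_heat_flux(self, cell, flux_from_atmosphere):
        """Hand a heat flux from the atmosphere down to the ocean."""
        if self.ice_present[cell]:
            # Placeholder physics: the ice absorbs part of the flux before
            # the remainder reaches the ocean underneath.
            absorbed_by_ice = 0.3 * flux_from_atmosphere
            return flux_from_atmosphere - absorbed_by_ice
        # Ice-free cell (e.g. at the equator): the flux passes straight through.
        return flux_from_atmosphere

# Tiny usage example
ice = SeaIceInterface(n_cells=4)
ice.ice_present[0] = True                  # ice in the first cell only
print(ice.pass_heat_flux(0, 100.0))        # 70.0 – reduced under ice
print(ice.pass_heat_flux(3, 100.0))        # 100.0 – unchanged where ice-free
```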

Encapsulation requires code to tie components together, since the climate system is so interconnected. Every model has a coupler, which fulfills two main functions: controlling the main time-stepping loop, and passing data between components. Some models, such as CESM, use the coupler for every interaction. However, if two components share the same grid, no interpolation is necessary, so it’s often simpler just to pass the fluxes between them directly. Sometimes this means a component can be completely disconnected from the coupler, such as the land model in IPSL; other times a component still uses the coupler for its remaining interactions, as in HadGEM3, where ocean–ice fluxes are passed directly but ocean–atmosphere and ice–atmosphere fluxes go through the coupler.
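
Here is a heavily simplified picture of those two coupler jobs, again as a Python sketch rather than the Fortran the real models use; every class and method name is hypothetical:

```python
class Component:
    """Stand-in for an atmosphere, ocean, or land component (hypothetical API)."""
    def __init__(self, name):
        self.name = name
        self.state = 0.0
    def step(self, dt):
        self.state += 0.001 * dt               # placeholder physics
    def export_fluxes(self):
        return {"heat": 0.1 * self.state}
    def import_fluxes(self, fluxes):
        self.state += 0.01 * fluxes["heat"]

def run_coupled_model(components, n_steps, dt):
    """Toy coupler: it owns the main time-stepping loop and shuttles fluxes
    between components. Real couplers also interpolate fields between grids;
    components that share a grid may bypass the coupler entirely."""
    for _ in range(n_steps):
        for comp in components:                # 1. advance every component
            comp.step(dt)
        for sender in components:              # 2. exchange fluxes via the coupler
            fluxes = sender.export_fluxes()
            for receiver in components:
                if receiver is not sender:
                    receiver.import_fluxes(fluxes)

run_coupled_model([Component("atmosphere"), Component("ocean"), Component("land")],
                  n_steps=10, dt=1800.0)
```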

While it’s easy to see that some models are more complex than others, it’s also interesting to look at the distribution of complexity within a model. Often the bulk of the code is concentrated in one component, due to historical software development as well as the institution’s conscious goals. Most of the models are atmosphere-centric, since they were created in the 1970s when numerical weather prediction was the focus of the Earth system modelling community. Weather models require a very complex atmosphere but not a lot else, so atmospheric routines dominated the code. Over time, other components were added, but the atmosphere remained at the heart of the models. The most extreme example is HadGEM3, which actually uses the same atmosphere model for both weather prediction and climate simulations!

The UVic model is quite different. The University of Victoria is on the west coast of Canada, and does a lot of ocean studies, so the model began as a branch of the MOM ocean model from GFDL. The developers could have coupled it to a complex atmosphere model in an effort to mimic full GCMs, but they consciously chose not to. Atmospheric routines need very short time steps, so they eat up most of the run time and make very long simulations infeasible. To keep their model fast, UVic created EMBM (Energy Moisture Balance Model), an extremely simple atmospheric model (for example, it doesn’t include dynamic precipitation – moisture simply rains out as soon as a certain humidity threshold is reached). Since the ocean is the primary moderator of climate over the long run, the UVic ESCM still outputs global long-term averages that match up nicely with GCM results.
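
That precipitation scheme is simple enough to caricature in a few lines of Python; the threshold value and variable names below are my own guesses, not UVic's actual code:

```python
# Caricature of a diagnostic "rain at a humidity threshold" scheme, in the spirit
# of simple energy-moisture balance atmospheres. The 0.85 threshold is invented.
RELATIVE_HUMIDITY_THRESHOLD = 0.85

def diagnose_precipitation(specific_humidity, saturation_humidity):
    """Any moisture above the threshold rains out immediately; there are no
    prognostic clouds or storm dynamics as in a full GCM atmosphere."""
    threshold = RELATIVE_HUMIDITY_THRESHOLD * saturation_humidity
    if specific_humidity > threshold:
        rain = specific_humidity - threshold
        return rain, threshold                 # rain amount, humidity after raining
    return 0.0, specific_humidity              # below threshold: no rain this step

print(diagnose_precipitation(specific_humidity=0.018, saturation_humidity=0.020))
```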

Finally, CESM and Model E could not be described as “land-centric”, but land is definitely catching up – it’s even surpassed the ocean model in both cases! These two GCMs are cutting-edge in terms of carbon cycle feedbacks, which are primarily terrestrial, and likely very important in determining how much warming we can expect in the centuries to come. They are currently poorly understood and difficult to model, so they are a new frontier for Earth system modelling. Scientists are moving away from a binary atmosphere-ocean paradigm and towards a more comprehensive climate system representation.

I presented this work to some computer scientists in the summer, and many of them asked, “Why do you need so many models? Wouldn’t it be better to just have one really good one that everyone collaborated on?” It might be simpler from a software engineering perspective, but for the purposes of science, a variety of diverse models is actually better. It means you can pick and choose which model suits your experiment. Additionally, it increases our confidence in climate model output, because if dozens of independent models are saying the same thing, they’re more likely to be accurate than if just one model made a given prediction. Diversity in model architecture arguably produces the software engineering equivalent of perturbed physics, although it’s not systematic or deliberate.

A common question people asked me at AGU was, “Which model do you think is the best?” This question is impossible to answer, because it depends on how you define “best”, which depends on what experiment you are running. Are you looking at short-term, regional impacts at a high resolution? HadGEM3 would be a good bet. Do you want to know what the world will be like in the year 5000? Go for UVic, otherwise you will run out of supercomputer time! Are you studying feedbacks, perhaps the Paleocene-Eocene Thermal Maximum? A good choice would be CESM. So you see, every model is the best at something, and no model can be the best at everything.

You might think the ideal climate model would mimic the real world perfectly. It would still have discrete grid cells and time steps, but it would be like a digital photo, where the pixels are so small that it looks continuous even when you zoom in. It would contain every single Earth system process known to science, and would represent their connections and interactions perfectly.

Such a model would also be a nightmare to use and develop. It would run slower than real time, making predictions of the future useless. The code would not be encapsulated, so organizing teams of programmers to work on certain aspects of the model would be nearly impossible. It would use more memory than computer hardware offers us – despite the speed of computers these days, they’re still too slow for many scientific models!
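
A back-of-envelope calculation shows why memory alone rules out the “digital photo” model; every number below is an illustrative guess rather than any model's real requirement:

```python
# Rough memory estimate for a hypothetical ~1 km global grid (illustrative only).
earth_surface_km2 = 5.1e8                    # Earth's surface area in km^2
horizontal_cells = earth_surface_km2 / 1.0   # one cell per square kilometre
vertical_levels = 100
variables_per_cell = 50                      # temperature, winds, humidity, tracers...
bytes_per_value = 8                          # double precision

total_bytes = horizontal_cells * vertical_levels * variables_per_cell * bytes_per_value
print(f"~{total_bytes / 1e12:.0f} TB just to hold a single snapshot of the model state")
```

Tens of terabytes for a single snapshot of the state, before you even start stepping it forward in time or writing output.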

We need to balance complexity with feasibility. A hierarchy of complexity is important, as is a variety of models to choose from. Perfectly reproducing the system we’re trying to model actually isn’t the ultimate goal.

Please leave your questions below, and hopefully we can start a conversation – sort of a virtual poster session!

Labels

For a long time I have struggled with what to call the people who insist that climate change is natural/nonexistent/a global conspiracy. “Skeptics” is their preferred term, but I refuse to give such a compliment to those who don’t deserve it. Skepticism is a good thing in science, and it’s not being applied by self-professed “climate skeptics”. This worthy label has been hijacked by those who seek to redefine it.

“Deniers” is more accurate, in my opinion, but I feel uncomfortable using it. I don’t want to appear closed-minded and alienate those who are confused or undecided. Additionally, many people in the deniers’ audience aren’t in denial themselves. They repeat the myths they hear from other sources, but you can easily talk them out of their misconceptions using evidence.

I posed this question to some people at AGU. Which word did they use? “Pseudoskeptics” and “misinformants” are both accurate terms, but too difficult for a new reader to understand. My favourite answer, which I think I will adopt, was “contrarians”. Simple, clear, and non-judgmental. It emphasizes what they think, not how they think. Also, it hints that they are going against the majority in the scientific community. Another good suggestion was to say someone is “in denial”, rather than “a denier” – it depersonalizes the accusation.

John Cook, when I asked him this question, turned it around: “What should we call ourselves?” he asked, and I couldn’t come up with an answer. I feel that not being a contrarian is a default position that doesn’t require a qualifier. We are just scientists, communicators, and concerned citizens, and unless we say otherwise you can assume we follow the consensus. (John thinks we should call ourselves “hotties”, but apparently it hasn’t caught on.)

“What should I call myself?” is another puzzler, since I fall into multiple categories. Officially I’m an undergrad student, but I’m also getting into research, which isn’t a required part of undergraduate studies. In some ways I am a journalist too, but I see that as a side project rather than a career goal. So I can’t call myself a scientist, or even a fledgling scientist, but I feel like I’m on that path – a scientist larva, perhaps?

Thoughts?

General Thoughts on AGU

I returned home from the AGU Fall Meeting last night, and after a good night’s sleep I am almost recovered – it’s amazing how tired science can make you!

The whole conference felt sort of surreal. Meeting and conversing with others was definitely the best part. I shook the hand of James Hansen and assured him that he is making a difference. I talked about my research with Gavin Schmidt. I met dozens of people who were previously just names on a screen, from top scientists like Michael Mann and Ben Santer to fellow bloggers like Michael Tobis and John Cook.

I filled most of a journal with notes I took during presentations, and saw literally hundreds of posters. I attended a workshop on climate science communication, run by Susan Joy Hassol and Richard Somerville, which fundamentally altered my strategies for public outreach. Be sure to check out their new website, and their widely acclaimed Physics Today paper that summarizes most of their work.

Speaking of fabulous communication, take a few minutes to watch this memorial video for Stephen Schneider – it’s by the same folks who turned Bill McKibben’s article into a video:

AGU inspired so many posts that I think I will publish something every day this week. Be sure to check back often!

AGU 2011

I know that many of you will be at the annual American Geophysical Union conference next week in San Francisco. If so, I’d invite you to come by and take a look at our poster! It will be up all Thursday morning in Halls A-C, Moscone South. I will be around for at least part of the morning to chat and answer questions.

You can view an electronic version of our poster, as well as read our abstract and leave comments, on the new AGU ePosters site.

Hope to see some of you next week!

Another Sporadic Open Thread

I keep forgetting to put these up.

Possible topics for discussion:

  • La Niña is expected to continue into the winter. This is definitely not what southern U.S. states, such as Texas, want – after a summer of intense drought, the drying effect of La Niña on that area of the world won’t bring any relief.
  • For those of you going to AGU, an itinerary planner is now available to browse the program and save sessions you’re interested in. I am compiling an awesome-looking list of presentations by the likes of James Hansen, Wally Broecker and Gavin Schmidt. Our poster is entitled “The Software Architecture of Global Climate Models”, and is on the Thursday morning.
  • Has anyone read Earth, an Operator’s Manual by Richard Alley? If so, would you recommend it?

Enjoy!

News

Two pieces of bad news:

  • Mountain pine beetles, whose range is expanding due to warmer winters, are beginning to infest jack pines as well as lodgepole pines. To understand the danger from this transition, one only needs to look at the range maps for each species:

    [Range maps: Lodgepole Pine and Jack Pine]

    A study from Molecular Ecology, published last April, has the details.

  • Arctic sea ice extent was either the lowest on record or the second lowest on record, depending on how you collect and analyze the data. Sea ice volume, a much more important metric for climate change, was the lowest on record:

And one piece of good news:

  • Our abstract was accepted to AGU! I have been wanting to go to this conference for two years, and now I will get to!