A Little Bit of Hope

I went to a public lecture on climate change last night (because I just didn’t get enough of that last week at AGU, apparently), where four professors from different departments at my university spoke about their work. The talks were great – the format sort of reminded me of TED Talks – but I was actually most interested in the audience questions and comments afterward.

There was the token crazy guy who stood up and said “The sun is getting hotter every day and one day we’re all going to FRY! So what does that say about your global warming theory? Besides, if it was CO2 we could all just stop breathing!” Luckily, everybody laughed at his comments…

There were also some more reasonable-sounding people, repeating common myths like “It’s a natural cycle” and “Volcanoes emit more CO2 than humans”. The speakers did a good job of explaining why these claims were false, but I still wanted to pull out the Skeptical Science app and wave it in the air…

Overall, though, the audience seemed to be composed of concerned citizens who understood the causes and severity of climate change, and were eager to learn about impacts, particularly on extreme weather. It was nice to see an audience moving past this silly public debate into a more productive one about risk management.

The best moment, though, was on the bus home. There was a first-year student in the seat behind me – I assume he came to see the lecture as well, but maybe he just talks about climate change on the bus all the time. He was telling his friend about sea level rise, and he was saying all the right things – we can expect one or two metres by the end of the century, which doesn’t sound like a lot, but it’s enough to endanger many densely populated coastal cities, as well as kill vegetation due to seawater seeping in.

He even had the statistics right! I was so proud! I was thinking about turning around to join in the conversation, but by then I had been listening in for so long that it would have been embarrassing.

It’s nice to see evidence of a shift in public understanding, even if it’s only anecdotal. Maybe we’re doing something right after all.

The Software Architecture of Global Climate Models

Last week at AGU, I presented the results of the project Steve Easterbrook and I worked on this summer. Click the thumbnail on the left for a full size PDF. Also, you can download the updated versions of our software diagrams:

  • COSMOS (COmmunity earth System MOdelS) 1.2.1
  • Model E: Oct. 11, 2011 snapshot
  • HadGEM3 (Hadley Centre Global Environmental Model, version 3): August 2009 snapshot
  • CESM (Community Earth System Model) 1.0.3
  • GFDL (Geophysical Fluid Dynamics Laboratory), Climate Model 2.1 coupled to MOM (Modular Ocean Model) 4.1
  • IPSL (Institut Pierre Simon Laplace), Climate Model 5A
  • UVic ESCM (Earth System Climate Model) 2.9

And, since the most important part of poster sessions is the spiel you give and the conversations you have, here is my spiel:

Steve and I realized that while comparisons of the output of global climate models are very common (for example, CMIP5: Coupled Model Intercomparison Project Phase 5), nobody has really sat down and compared their software structure. We tried to fill this gap in research with a qualitative comparison study of seven models. Six of them are GCMs (General Circulation Models – the most complex climate simulations) in the CMIP5 ensemble; one, the UVic model, is not in CMIP because it’s really more of an EMIC (Earth System Model of Intermediate Complexity – simpler than a GCM). However, it’s one of the most complex EMICs, and contains a full GCM ocean, so we thought it would present an interesting boundary case. (Also, the code was easier to get access to than the corresponding GCM from Environment Canada. When we write this up into a paper we will probably use that model instead.)

I created a diagram of each model’s architecture. The area of each bubble is roughly proportional to the lines of code in that component, which we think is a pretty good proxy for complexity – a more complex model will have more subroutines and functions than a simple one. The bubbles are to scale within each model, but not between models, as the total lines of code in a model varies by about a factor of 10. A bit difficult to fit on a poster and still make everything readable! Fluxes from each component are represented by coloured arrows (the same colour as the bubble), and often pass through the coupler before reaching another component.
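
In case you’re curious how the scaling works, here’s a quick sketch in Python. The component names and line counts are invented, purely to illustrate the point: since bubble area (not radius) tracks lines of code, the radius has to go as the square root.

```python
import math

# Invented line counts for a hypothetical model's components --
# not taken from any real model, just here to show the scaling.
components = {"atmosphere": 120_000, "ocean": 45_000, "land": 30_000,
              "sea ice": 8_000, "coupler": 15_000}

# Bubble AREA is proportional to lines of code, so the radius scales
# with the square root; scaling the radius directly would make large
# components look far more dominant than they are.
for name, loc in components.items():
    radius = math.sqrt(loc / math.pi)  # area = pi * r^2 = loc
    print(f"{name:10s} {loc:8,d} LOC -> radius {radius:6.1f} (arbitrary units)")
```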

We examined the amount of encapsulation of components, which varies widely between models. CESM, on one end of the spectrum, isolates every component completely, particularly in the directory structure. Model E, on the other hand, places nearly all of its files in the same directory, and has a much higher level of integration between components. This is more difficult for a user to read, but it has benefits for data transfer.

While component encapsulation is attractive from a software engineering perspective, it poses problems because the real world is not so encapsulated. Perhaps the best example of this is sea ice. It floats on the ocean, its extent changing continuously. It breaks up into separate chunks and can form slush with the seawater. How do you split up ocean code and ice code? CESM keeps the two components completely separate, with a transient boundary between them. IPSL represents ice as an encapsulated sub-component of their ocean model, NEMO (Nucleus for European Modeling of the Ocean). COSMOS integrates both ocean and ice code together in MPI-OM (Max Planck Institute Ocean Model).

GFDL took a completely different, and rather innovative, approach. Sea ice in the GFDL model is an interface, a layer over the ocean with boolean flags in each cell indicating whether or not ice is present. All fluxes to and from the ocean must pass through the “sea ice”, even if they’re at the equator and the interface is empty.
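
To make the pattern concrete, here’s a toy sketch in Python. This is my own illustration of the idea, not GFDL’s actual code, and the insulation factor is invented:

```python
import numpy as np

class SeaIceInterface:
    """Toy version of 'sea ice as an interface': a layer over the ocean
    with a boolean flag in each cell. All atmosphere-ocean fluxes pass
    through it, even where no ice is present."""

    def __init__(self, nlat, nlon):
        self.ice_present = np.zeros((nlat, nlon), dtype=bool)

    def pass_flux_down(self, atm_flux):
        """Hand an atmosphere-to-ocean flux through the ice layer.
        Where ice exists it modifies the flux; elsewhere it's a no-op."""
        flux = atm_flux.copy()
        flux[self.ice_present] *= 0.2  # ice insulates (made-up factor)
        return flux

ice = SeaIceInterface(4, 8)
ice.ice_present[0, :] = True  # the "polar" row is ice-covered
ocean_flux = ice.pass_flux_down(np.ones((4, 8)))
print(ocean_flux)  # equatorial rows pass through the empty interface unchanged
```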

Encapsulation requires code to tie components together, since the climate system is so interconnected. Every model has a coupler, which fulfills two main functions: controlling the main time-stepping loop, and passing data between components. Some models, such as CESM, use the coupler for every interaction. However, if two components have the same grid, no interpolation is necessary, so it’s often simpler just to pass the data directly between the components. Sometimes this means a component can be completely disconnected from the coupler, such as the land model in IPSL; other times it still uses the coupler for other interactions, such as the HadGEM3 arrangement with direct ocean-ice fluxes but coupler-controlled ocean-atmosphere and ice-atmosphere fluxes.
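
Here’s a minimal sketch of those two coupler jobs, again in Python with toy stand-ins I made up (no real model’s API looks like this):

```python
# Toy components: each one steps itself forward and accepts boundary data.
class Component:
    def __init__(self, name):
        self.name = name
    def step(self):
        return {"from": self.name, "flux": 1.0}  # pretend flux data
    def receive(self, data):
        pass  # a real component would update its boundary conditions

class Regridder:
    def interpolate(self, data):
        return data  # identical grids: identity; otherwise, remapping

def run_coupled(atmosphere, ocean, regridder, n_steps):
    for step in range(n_steps):
        # Job 1: drive the main time-stepping loop.
        atm_flux = atmosphere.step()
        ocn_flux = ocean.step()
        # Job 2: pass data between components, interpolating if the
        # grids differ. Components that share a grid could skip the
        # coupler and exchange directly, as HadGEM3's ocean and ice do.
        ocean.receive(regridder.interpolate(atm_flux))
        atmosphere.receive(regridder.interpolate(ocn_flux))

run_coupled(Component("atmosphere"), Component("ocean"), Regridder(), 10)
```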

While it’s easy to see that some models are more complex than others, it’s also interesting to look at the distribution of complexity within a model. Often the bulk of the code is concentrated in one component, due to historical software development as well as the institution’s conscious goals. Most of the models are atmosphere-centric, since they were created in the 1970s when numerical weather prediction was the focus of the Earth system modelling community. Weather models require a very complex atmosphere but not a lot else, so atmospheric routines dominated the code. Over time, other components were added, but the atmosphere remained at the heart of the models. The most extreme example is HadGEM3, which actually uses the same atmosphere model for both weather prediction and climate simulations!

The UVic model is quite different. The University of Victoria is on the west coast of Canada, and does a lot of ocean studies, so the model began as a branch of the MOM ocean model from GFDL. The developers could have coupled it to a complex atmosphere model in an effort to mimic full GCMs, but they consciously chose not to. Atmospheric routines need very short time steps, so they eat up most of the run time, and make very long simulations infeasible. In an effort to keep their model fast, UVic created EMBM (Energy Moisture Balance Model), an extremely simple atmospheric model (for example, it doesn’t include dynamic precipitation – it simply rains as soon as a certain humidity is reached). Since the ocean is the primary moderator of climate over the long run, the UVic ESCM still outputs global long-term averages that match up nicely with GCM results.
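
For a sense of how simple that precipitation rule is, here’s a single-cell toy version in Python. The numbers are invented and this is not the actual EMBM source, just the threshold idea:

```python
RH_THRESHOLD = 0.85  # rain the moment humidity exceeds this (made-up value)

def step_moisture(humidity, evaporation):
    """One time step of a single-cell moisture budget: moisture comes in
    from evaporation, and anything above the threshold rains out at once."""
    humidity += evaporation
    precipitation = 0.0
    if humidity > RH_THRESHOLD:
        precipitation = humidity - RH_THRESHOLD
        humidity = RH_THRESHOLD
    return humidity, precipitation

h = 0.5
for day in range(10):
    h, rain = step_moisture(h, evaporation=0.06)
    print(f"day {day}: humidity={h:.2f}, rain={rain:.2f}")
```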

Finally, CESM and Model E could not be described as “land-centric”, but land is definitely catching up – it’s even surpassed the ocean model in both cases! These two GCMs are cutting-edge in terms of carbon cycle feedbacks, which are primarily terrestrial, and likely very important in determining how much warming we can expect in the centuries to come. They are currently poorly understood and difficult to model, so they are a new frontier for Earth system modelling. Scientists are moving away from a binary atmosphere-ocean paradigm and towards a more comprehensive climate system representation.

I presented this work to some computer scientists in the summer, and many of them asked, “Why do you need so many models? Wouldn’t it be better to just have one really good one that everyone collaborated on?” It might be simpler from a software engineering perspective, but for the purposes of science, a variety of diverse models is actually better. It means you can pick and choose which model suits your experiment. Additionally, it increases our confidence in climate model output, because if dozens of independent models are saying the same thing, they’re more likely to be accurate than if just one model made a given prediction. Diversity in model architecture arguably produces the software engineering equivalent of perturbed physics, although it’s not systematic or deliberate.

A common question people asked me at AGU was, “Which model do you think is the best?” This question is impossible to answer, because it depends on how you define “best”, which depends on what experiment you are running. Are you looking at short-term, regional impacts at a high resolution? HadGEM3 would be a good bet. Do you want to know what the world will be like in the year 5000? Go for UVic, otherwise you will run out of supercomputer time! Are you studying feedbacks, perhaps the Paleocene-Eocene Thermal Maximum? A good choice would be CESM. So you see, every model is the best at something, and no model can be the best at everything.

You might think the ideal climate model would mimic the real world perfectly. It would still have discrete grid cells and time steps, but it would be like a digital photo, where the pixels are so small that it looks continuous even when you zoom in. It would contain every single Earth system process known to science, and would represent their connections and interactions perfectly.

Such a model would also be a nightmare to use and develop. It would run slower than real time, making predictions of the future useless. The code would not be encapsulated, so organizing teams of programmers to work on certain aspects of the model would be nearly impossible. It would use more memory than computer hardware offers us – despite the speed of computers these days, they’re still too slow for many scientific models!

We need to balance complexity with feasibility. A hierarchy of complexity is important, as is a variety of models to choose from. Perfectly reproducing the system we’re trying to model actually isn’t the ultimate goal.

Please leave your questions below, and hopefully we can start a conversation – sort of a virtual poster session!

Labels

For a long time I have struggled with what to call the people who insist that climate change is natural/nonexistent/a global conspiracy. “Skeptics” is their preferred term, but I refuse to give such a compliment to those who don’t deserve it. Skepticism is a good thing in science, and it’s not being applied by self-professed “climate skeptics”. This worthy label has been hijacked by those who seek to redefine it.

“Deniers” is more accurate, in my opinion, but I feel uncomfortable using it. I don’t want to appear closed-minded and alienate those who are confused or undecided. Additionally, many people are in the audience of deniers, but aren’t in denial themselves. They repeat the myths they hear from other sources, but you can easily talk them out of their misconceptions using evidence.

I posed this question to some people at AGU. Which word did they use? “Pseudoskeptics” and “misinformants” are both accurate terms, but too difficult for a new reader to understand. My favourite answer, which I think I will adopt, was “contrarians”. Simple, clear, and non-judgmental. It emphasizes what they think, not how they think. Also, it hints that they are going against the majority in the scientific community. Another good suggestion was to say someone is “in denial”, rather than “a denier” – it depersonalizes the accusation.

John Cook, when I asked him this question, turned it around: “What should we call ourselves?” he asked, and I couldn’t come up with an answer. I feel that not being a contrarian is a default position that doesn’t require a qualifier. We are just scientists, communicators, and concerned citizens, and unless we say otherwise you can assume we follow the consensus. (John thinks we should call ourselves “hotties”, but apparently it hasn’t caught on.)

“What should I call myself?” is another puzzler, since I fall into multiple categories. Officially I’m an undergrad student, but I’m also getting into research, which isn’t a required part of undergraduate studies. In some ways I am a journalist too, but I see that as a side project rather than a career goal. So I can’t call myself a scientist, or even a fledgling scientist, but I feel like I’m on that path – a scientist larva, perhaps?

Thoughts?

General Thoughts on AGU

I returned home from the AGU Fall Meeting last night, and after a good night’s sleep I am almost recovered – it’s amazing how tired science can make you!

The whole conference felt sort of surreal. Meeting and conversing with others was definitely the best part. I shook the hand of James Hansen and assured him that he is making a difference. I talked about my research with Gavin Schmidt. I met dozens of people who were previously just names on a screen, from top scientists like Michael Mann and Ben Santer to fellow bloggers like Michael Tobis and John Cook.

I filled most of a journal with notes I took during presentations, and saw literally hundreds of posters. I attended a workshop on climate science communication, run by Susan Joy Hassol and Richard Somerville, which fundamentally altered my strategies for public outreach. Be sure to check out their new website, and their widely acclaimed Physics Today paper that summarizes most of their work.

Speaking of fabulous communication, take a few minutes to watch this memorial video for Stephen Schneider – it’s by the same folks who turned Bill McKibben’s article into a video:

AGU inspired so many posts that I think I will publish something every day this week. Be sure to check back often!

AGU 2011

I know that many of you will be at the annual American Geophysical Union conference next week in San Francisco. If so, I’d invite you to come by and take a look at our poster! It will be up all Thursday morning in Halls A-C, Moscone South. I will be around for at least part of the morning to chat and answer questions.

You can view an electronic version of our poster, as well as read our abstract and leave comments, on the new AGU ePosters site.

Hope to see some of you next week!

Good News

Two events to celebrate today:

First, the Australian Parliament passed a carbon tax last week. Although it is relatively weak (oil for cars is exempt, and most emission permits are given out for free), it gets the country off the ground, and will hopefully strengthen in the future. It will be interesting to watch the effectiveness of this tax compared to cap-and-trade systems in other countries.

Additionally, income taxes have been reworked to offset the revenue from the carbon tax, to the point where most households, particularly low-income ones, will benefit financially. So much for “the new Dark Age”!

Secondly, Obama has delayed a decision on the Keystone pipeline until after the 2012 elections, due to environmental concerns with the planned pipeline route. A few months ago, it was fully expected that Obama would approve the pipeline by the end of the year, but opposition from scientists, Nobel Laureates, environmental organizations, and most of Nebraska seems to have tipped the scales.

Canadian coverage of Obama’s announcement is both amusing and infuriating. I read the Globe and Mail, which I would describe as fiscally conservative but socially liberal (really, I just read it because its science coverage is substantially more accurate than my local newspaper’s). The Globe and Mail seems to define Canada as the tar sands industry and nothing else. Check out this article: a decision regarding a single Canadian oil company is now “a setback for Canada-U.S. relations”, to the point where “Canada is going to have to diversify away from the United States, not just in energy but in everything else we can”, because “they don’t treat us as nicely as their self-interest suggests they should”. And finally, “Canada’s challenge is to ensure other potential markets for Alberta’s crude are not hobbled by the same anti-oil-sands forces”.

Canada’s challenge? How is international anti-environmental lobbying anything but the industry’s challenge? Canada includes millions of young people who will grow up to face the consequences of climate change, millions of Aboriginals whose lives and livelihoods have been damaged by tar sands extraction, and millions of citizens already opposed to the industry. To ignore all of these groups, and to imply that Canada is the oil industry, is frankly quite insulting.

I am a Canadian, and I don’t want this fundamentally unethical industry to define my country. TransCanada’s interests are not necessarily Canada’s interests, and Canada-U.S. relations do not revolve around this single sector of the economy. Maybe the Canadian government doesn’t see this yet, but the American government seems to.

Between Australia and the United States, is the tide turning? Is the pendulum swinging? I’m not sure, but I think I will take advantage of these two small reasons for hope.

Uncertainty

Part 5 in a series of 5 for NextGen Journal
Read Part 1, Part 2, Part 3, and Part 4

Scientists can never say that something is 100% certain, but they can come pretty close. After a while, a theory becomes so strong that the academic community accepts it and moves on to more interesting problems. Replicating an experiment for the thousandth time just isn’t a good use of scientific resources. For example, conducting a medical trial to confirm that smoking increases one’s risk of cancer is no longer very useful; we covered that decades ago. Instead, a medical trial to test the effectiveness of different strategies to help people quit smoking will lead to much greater scientific and societal benefit.

In the same manner, scientists have known since the 1970s that human emissions of greenhouse gases are exerting a warming force on the climate. More recently, the warming started to show up, in certain patterns that confirm it is caused by our activities. These facts are no longer controversial in the scientific community (the opinion pages of newspapers are another story, though). While they will always have a tiny bit of uncertainty, it’s time to move on to more interesting problems. So where are the real uncertainties? What are the new frontiers of climate science?

First of all, projections of climate change depend on what the world decides to do about climate change – a metric that is more uncertain than any of the physics underlying our understanding of the problem. If we collectively step up and reduce our emissions, both quickly and significantly, the world won’t warm too much. If we ignore the problem and do nothing, it will warm a great deal. At this point, our actions could go either way.

Additionally, even though we know the world is going to warm, we don’t know exactly how much, even given a particular emission scenario. We don’t know exactly how sensitive the climate system is, because it’s quite a complex beast. However, using climate models and historical data, we can get an idea. Here is a probability density function for climate sensitivity: the greater the area under the curve over a range of the x-axis, the greater the probability that the climate sensitivity falls within that range (IPCC, 2007):

This curve shows us that climate sensitivity is most likely around 3 degrees Celsius for every doubling of atmospheric carbon dioxide, since that’s where the curve peaks. There’s a small chance that it’s less than that, so the world might warm a little less. But there’s a greater chance that climate sensitivity is greater than 3 degrees, so the world will warm more. So this graph tells us something kind of scary: if we’re wrong about climate sensitivity being about 3 degrees, we’re probably wrong in the direction we don’t want – that is, the problem being worse than we expect. This metric has a lot to do with positive feedbacks (“vicious cycles” of warming) in the climate system.
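
To see that asymmetry in numbers, here’s a small Python sketch. I’m using a lognormal distribution as a stand-in for the sensitivity PDF – the real curve isn’t literally lognormal, and the parameters are chosen purely for illustration:

```python
import numpy as np

# A right-skewed stand-in for the sensitivity PDF, peaking near 3 C per
# doubling of CO2. Parameters are illustrative, not fit to the IPCC curve.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=np.log(3.0), sigma=0.3, size=100_000)

# Compare points an equal distance either side of 3 degrees:
print(f"P(sensitivity < 2 C) = {np.mean(samples < 2.0):.2f}")  # ~0.09
print(f"P(sensitivity > 4 C) = {np.mean(samples > 4.0):.2f}")  # ~0.17
# The high tail carries roughly twice the probability of the low one:
# if the best estimate is wrong, it's likely wrong in the bad direction.
```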

Another area of uncertainty is precipitation. Temperature is a lot easier to forecast than precipitation, both regionally and globally. With global warming, the extra thermal energy in the climate system will lead to more water in the air, so there will be more precipitation overall – but the extra energy also increases evaporation of surface water. Some areas will experience flooding, and some will experience drought; many areas will experience some of each, depending on the time of year. In summary, we will have more of each extreme when it comes to precipitation, but the when and where is highly uncertain.

Scientists are also unsure about the rate and extent of future sea level rise. Warming causes the sea to rise for two different reasons:

  1. Water expands as it warms, which is easy to model (see the back-of-envelope sketch after this list);
  2. Glaciers and ice sheets melt and fall into the ocean, which is very difficult to model.
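
The first mechanism really is back-of-envelope material. Here’s a rough Python sketch with approximate numbers (the expansion coefficient of seawater varies with temperature, so treat this as order-of-magnitude only); there is no comparably simple formula for the second mechanism, which is exactly the problem:

```python
# Mechanism 1 only: thermal expansion of a warming water column.
ocean_mean_depth_m = 3700         # global mean ocean depth, roughly
column_warming_c = 0.5            # assumed mean warming of the full column
expansion_per_c = 1.5e-4          # rough thermal expansion coefficient

rise_m = ocean_mean_depth_m * expansion_per_c * column_warming_c
print(f"Thermal expansion alone: ~{rise_m * 100:.0f} cm of sea level rise")
# Mechanism 2 (melting land ice) depends on ice-sheet dynamics with no
# equivalent one-line formula -- hence the much larger uncertainty.
```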

If we cause the Earth to warm indefinitely, all the ice in the world will turn into water, but we won’t get that far (hopefully). So how much ice will melt, and how fast will it go? This depends on feedbacks in the climate system, glacial dynamics, and many other phenomena that are quantitatively poorly understood.

These examples of uncertainty in climate science, just a few of many, don’t give us an excuse to do nothing about the problem. As Brian, a Master’s student from Canada, wrote, “You don’t have to have the seventh decimal place filled in to see that the number isn’t looking good.” We know that there is a problem, and it might be somewhat better or somewhat worse than scientists are currently predicting, but it won’t go away. As we noted above, in many cases it’s more likely to be worse than it is to be better. Even a shallow understanding of the implications of “worse” should be enough for anyone to see the necessity of action.

My Dishpan Climate Model

About two years ago, I discovered the concept of “dishpan climate models”, through Iain Stewart’s Climate Wars documentary. The experiment is pretty simple: a large bowl filled with water (representing one hemisphere of the Earth) with a block of ice in the middle (a polar region) rotates on a turntable with a Bunsen burner (the Sun) heating it from one side. By injecting some dye into the water, you can see regular currents from heat transport and the Coriolis effect. Spencer Weart dug up some fascinating results from the days when dishpan climate models were the only sort available: researchers were able to simulate the Hadley circulation, Rossby waves, and the Gulf Stream.

I wanted to try this out for myself. Iain Stewart had made it look easy enough, and he got some really neat currents flowing. So one Saturday afternoon a friend and I got to work in my kitchen.

We started by figuring out how to rotate the bowl. My family doesn’t own a record player, so we couldn’t use that as a turntable. We tried to rig something up out of an old toy helicopter motor, but it wasn’t strong enough. Eventually we settled for a Lazy Susan which we spun by hand. It wasn’t a constant rotation, but it would have to do.

Then Antarctica, which consisted of a handful of ice cubes, kept floating away from the centre of the bowl. Soon the ice cubes melted and there were none left in the freezer. We filled a Ziploc bag with frozen corn, which wasn’t quite as buoyant, and used that for Antarctica instead.

Unsurprisingly, there was no Bunsen burner in my kitchen cupboard, so the Sun was represented by a paraffin candle that sort of smelled like cinnamon.

The only serious problem remaining was the dye. Every kind of dye we tried – food colouring, milk, food colouring mixed with milk – would completely homogenize with the water after just a few rotations, so all the currents were invisible.

The only liquid in my kitchen that wouldn’t mix with water was vegetable oil, so we dyed some of it blue and poured it in. This was a really really bad idea. The oil seemed to be attracted to the plastic bag keeping Antarctica together, so it all washed up onto the continent like some kind of awful petroleum spill in the Antarctic Ocean.

At that point, our climate model looked like this:

I would like to try this again some day, perhaps when I have access to a better laboratory than my kitchen. Any ideas for improvement (besides the obvious)? In particular, what kind of dye does work, and how does Antarctica stay together without being encased in plastic?

A Vast Machine

I read Paul Edwards’s A Vast Machine this summer while working with Steve Easterbrook. It was highly relevant to my research, but I would recommend it to anyone interested in climate change or mathematical modelling. Think The Discovery of Global Warming, but more specialized.

Much of the public seems to perceive observational data as superior to scientific models. The U.S. government has even attempted to mandate that research institutions focus on data above models, as if data were somehow more trustworthy. This is not the case. Data can have just as many problems as models, and when the two disagree, either could be wrong. For example, in a high school physics lab, I once calculated the acceleration due to gravity to be about 30 m/s². There was nothing wrong with Newton’s Laws of Motion – our instrumentation was just faulty.
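
Here’s the arithmetic behind that anecdote, as a Python sketch with invented measurements. The model (constant-acceleration kinematics) is fine; the “faulty timer” reading is what produces the absurd answer:

```python
drop_height_m = 1.2
true_time_s = (2 * drop_height_m / 9.81) ** 0.5  # about 0.49 s
measured_time_s = 0.28                           # faulty timer reads short

# Solving d = (1/2) g t^2 for g with the bad measurement:
g_estimate = 2 * drop_height_m / measured_time_s ** 2
print(f"Estimated g = {g_estimate:.1f} m/s^2")   # ~30.6 -- Newton is fine,
                                                 # the data are not
```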

Additionally, data and models are inextricably linked. In meteorology, GCMs produce forecasts from observational data, but that same data from surface stations was fed through a series of algorithms – a model for interpolation – to make it cover an entire region. “Without models, there are no data,” Edwards proclaims, and he makes a convincing case.
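
To get a feel for what such an interpolation model does, here’s a minimal Python sketch using inverse-distance weighting, one of the simplest schemes. The station positions and temperatures are invented:

```python
import numpy as np

stations = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9]])  # x, y positions
values = np.array([12.0, 15.5, 9.8])                       # temperatures

def idw(point, stations, values, power=2):
    """Inverse-distance weighting: nearby stations count for more."""
    d = np.linalg.norm(stations - point, axis=1)
    if np.any(d < 1e-9):          # asked for a point right on a station
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

# "Gridded data" at a location where nothing was ever measured:
print(idw(np.array([0.4, 0.5]), stations, values))
```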

The majority of the book discussed the history of climate modelling, from the 1800s until today. There was Arrhenius, followed by Angstrom who seemed to discredit the entire greenhouse theory, which was not revived until Callendar came along in the 1930s with a better spectroscope. There was the question of the ice ages, and the mistaken perception that forcing from CO2 and forcing from orbital changes (the Milankovitch model) were mutually exclusive.

For decades, those who studied the atmosphere were split into three groups, with three different strategies. Forecasters needed speed in their predictions, so they used intuition and historical analogues rather than numerical methods. Theoretical meteorologists wanted to understand weather using physics, but numerical methods for solving differential equations didn’t exist yet, so nothing was actually calculated. Empiricists thought the system was too complex for any kind of theory, so they just described climate using statistics, and didn’t worry about large-scale explanations.

The three groups began to merge as the computer age dawned and large amounts of calculations became feasible. Punch-cards came first, speeding up numerical forecasting considerably, but not enough to make it practical. Then ENIAC ran the first weather model on a digital computer, allowing simulations to run about as fast as real time (today the same model can run on a phone, simulating 24 hours in less than a second).

Before long, theoretical meteorologists “inherited” the field of climatology. Large research institutions, such as NCAR, formed in an attempt to pool computing resources. With incredibly simplistic models and primitive computers (2-3 KB storage), the physicists were able to generate simulations that looked somewhat like the real world: Hadley cells, trade winds, and so on.

There were three main fronts for progress in atmospheric modelling: better numerical methods, which decreased errors from approximation; higher resolution models with more gridpoints; and higher complexity, including more physical processes. As well as forecast GCMs, which are initialized with observations and run at maximum resolution for about a week of simulated time, scientists developed climate GCMs. These didn’t use any observational data at all; instead, the “spin-up” process fed known forcings into a static Earth, started the planet spinning, and waited until it settled down into a complex climate and circulation that looked a lot like the real world. There was still tension between empiricism and theory in models, as some factors were parameterized rather than being included in the spin-up.
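
Here’s a cartoon of the spin-up procedure in Python: a zero-dimensional energy-balance “planet” driven by a fixed forcing until it stops drifting. The constants are rough textbook values, and a real spin-up develops a full 3D circulation, so this only illustrates the procedure:

```python
SOLAR_IN = 240.0      # absorbed solar flux, W/m^2 (roughly Earth's value)
SIGMA = 5.67e-8       # Stefan-Boltzmann constant
HEAT_CAPACITY = 1e8   # J/m^2/K, crude ocean mixed-layer value
DT = 86400 * 30       # one-month time step, in seconds

temperature = 200.0   # start the "static Earth" far from equilibrium
for step in range(10_000):
    imbalance = SOLAR_IN - SIGMA * temperature**4  # radiative imbalance
    drift = imbalance * DT / HEAT_CAPACITY
    temperature += drift
    if abs(drift) < 1e-6:  # no longer drifting: spin-up complete
        break
print(f"Settled at {temperature:.1f} K after {step} steps")
```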

The Cold War, despite what it did to international relations, brought tremendous benefits to atmospheric science. Much of our understanding of the atmosphere and the observation infrastructure traces back to this period, when governments were monitoring nuclear fallout, spying on enemy countries with satellites, and considering small-scale geoengineering as warfare.

I appreciated how up-to-date this book was, as it discussed AR4, the MSU “satellites show cooling!” controversy, Watts Up With That, and the Republican anti-science movement. In particular, Edwards emphasized the distinction between skepticism for scientific purposes and skepticism for political purposes. “Does this mean we should pay no attention to alternative explanations or stop checking the data?” he writes. “As a matter of science, no…As a matter of policy, yes.”

Another passage beautifully sums up the entire narrative: “Like data about the climate’s past, model predictions of its future shimmer. Climate knowledge is probabilistic. You will never get a single definitive picture, either of exactly how much the climate has already changed or of how much it will change in the future. What you will get, instead, is a range. What the range tells you is that “no change at all” is simply not in the cards, and that something closer to the high end of the range – a climate catastrophe – looks all the more likely as time goes on.”

A Conversation with Gavin Schmidt

Cross-posted from NextGenJournal

Dr. Gavin Schmidt is a climate modeller at NASA’s Goddard Institute for Space Studies, as well as the editor at RealClimate. I recently had the opportunity to interview Dr. Schmidt, one of the top scientists in his field, on what we can expect from the climate in the coming decades. Here is the entirety of the interview we completed for my article Climate Change and Young People.

Kate: In a business-as-usual scenario, what range of warming can we expect within the lifetimes of today’s young people – so to about 2070 or 2080?

Gavin: Well, we don’t have a perfect crystal ball for exactly what “business-as-usual” means, but the kind of projections that people have been looking at – which involve quite high increases in population and minimal changes in technology – you are talking about global temperature changes, by about 2070, of somewhere between two, three, five degrees Celsius, depending a little bit on the scenario, and a little bit on how sensitive the climate actually is.

That metric is a bit abstract to most people, so how will that amount of warming actually impact people’s lives?

That’s a very good question, because most people don’t live in the global mean temperature, or the global mean anything. Those kinds of numbers translate to larger changes, between four and six degrees of warming, over the land. As you go towards the poles it becomes larger as well, because of the amplifying feedbacks of ice albedo changes and reductions in snow cover.

Right now the range between a cold summer and a warm summer, in most mid-latitude places, is on the order of a couple of degrees. You’ll be looking at summers then – the normal summer then – will be warmer than the warmest summers that you have now, and significantly warmer than the coldest summers. The same will be true in winter time and other seasons.

How will that impact metrics such as agriculture, food prices, the economy…?

It’s easy enough to say that there are going to be some impacts – obviously agriculture depends on the climate that exists. People will adapt to that, they’ll plant earlier, but crops are very sensitive to peak summer temperatures. So you’ll see losses in the most heat-sensitive crops. But then you’ll see movement north of crops that were grown further south. You have to deal with the other changes – in nutrient balances, water availability, soil quality. We’re not talking about just moving the subtropics further toward the poles.

Lots of other things are going to change as well. Pests travel much faster with climate than do other kinds of species: invasive species tend to increase faster, because they’re moving into an empty niche, than species that are already well established. There’s going to be changes to rainfall regimes, whether it snows or rains, how heavily it rains – a lot of those things will tax infrastructure.

You’ve got changes for people living on the coast related to sea level rise. That will lead to changes in the damaging effects of storm surges when any particular storm comes through. We’re also looking at more subtle changes to the storms themselves, which could even amplify that effect.

How much of this warming, and these impacts, are now inevitable? Do we have the ability to prevent most of it, and what would that take?

Some further changes are inevitable. The system has so much inertia, and it hasn’t even caught up with what we’ve put into the atmosphere so far. As it continues to catch up, even if we don’t do anything else to the atmosphere from now on, we’ll still see further warming and further changes to the climate. But we do have a choice as to whether we try and minimize these changes in the future, or we allow the maximum change to occur. And the maximum changes really are very large. It’s been said that if we allow that to happen, we’ll end up living on a different planet, and I think there’s a certain truth to that.

I hear you talking a lot about uncertainty, and that’s something a lot of people are paralyzed by: they don’t want us to take these actions because they think everything might be fine on its own. What’s your response to that attitude?

Any decision that you’re making now that has to do with the future is uncertain. We make decisions all the time: where to invest money, whether to buy a house – these things aren’t certain, and we still have to make decisions. The issue with climate is that no action is a decision in and of itself. That one is actually laden with far more uncertainty than if we actually try and produce energy more efficiently, try and use more renewables, adjust the way we live so that we have a more sustainable future. The uncertainty comes with what would happen if we don’t make a decision, and I find that to be the dominant uncertainty. But climate change is not unique in having to deal with decision making under uncertainty. All decisions are like that. It’s nothing special about climate change in that there’s uncertainty about what’s going to happen in the future. Any time we decide to do anything, there’s uncertainty about the future, yet we still manage to get out of bed in the morning.

Probably in response to this attitude, climate science has got a lot of bad press in the past couple years. What have your experiences been – what sort of reactions have there been to your research?

There are a lot of people, particularly in the US, who perceive the science itself – just describing what’s going on and why – as a threat to their interests. To my mind, knowing what’s going on in the planet and trying to understand why should just be information, it shouldn’t be a threat. But other people see it as a threat, and instead of dealing with either their perceptions or what the science actually means, they choose to attack the science and they choose to attack the scientists. Basically, you just have people adopting a “shoot the messenger” strategy, which plays well in the media. It doesn’t get us very far in terms of better understanding what’s going on. But it does add a sort of smokescreen to divert people’s attention from what the real issues are. That’s regrettable, but I don’t think it’s at all surprising.

And finally, are you at all optimistic about the future?

It depends on the day.