A New Kind of Science

Cross-posted from NextGen Journal

Ask most people to picture a scientist at work, and they’ll probably imagine someone in a lab coat and safety goggles, surrounded by test tubes and Bunsen burners. If they’re fans of The Big Bang Theory, maybe they’ll picture complicated equations being scribbled on whiteboards. Others might think of the Large Hadron Collider, or people wading through a swamp taking water samples.

All of these images are pretty accurate – real scientists, in one field or another, do these things as part of their job. But a large and growing branch of science, present in nearly every field, replaces the lab bench or swamp with a computer. Mathematical modelling – which essentially means programming the complicated equations from the whiteboard into a computer and solving them many times – is the science of today.

Computer models are used for all sorts of research questions. Epidemiologists build models of an avian flu outbreak, to see how the virus might spread through the population. Paleontologists build biomechanical models of different dinosaurs, to figure out how fast they could run or how high they could stretch their necks. I’m a research student in climate science, where we build models of the entire planet, to study the possible effects of global warming.

All of these models simulate systems that aren’t available in the real world. Avian flu hasn’t taken hold yet, and no sane scientist would deliberately start an outbreak just so they could study it! Dinosaurs are extinct, and manipulating their heavy, fragile fossilized bones to see how they might have moved would be impractical and expensive. Finally, there’s only one Earth, and it’s currently in use. So models don’t replace lab and field work – rather, they add to it. Mathematical models let us perform controlled experiments that would otherwise be impossible.

If you’re interested in scientific modelling, spend your college years learning a lot of math, particularly calculus, differential equations, and numerical methods. The actual application of the modelling, like paleontology or climatology, is less important for now – you can pick that up later, or read about it on your own time. It might seem counter-intuitive to neglect the very system you’re planning to spend your life studying, but it’s far easier this way. A few weeks ago I was writing some computer code for our lab’s climate model, and I needed to calculate a double integral of baroclinic velocity in the Atlantic Ocean. I didn’t know what baroclinic velocity was, but it only took a few minutes to dig up a paper that defined it. My work would have been a lot harder if, instead, I hadn’t known what a double integral was.

It’s also important to become comfortable with computer programming. You might think it’s just the domain of software developers at Google or Apple, but it’s also the main tool of scientists all over the world. Two or three courses in computer science, where you’ll learn a multi-purpose language like C or Java, are all you need. Any other languages you need in the future will take you days, rather than months, to master. If you own a Mac or run Linux on a PC, spend a few hours learning some basic UNIX commands – it’ll save you a lot of time down the road. (Also, if the science plan falls through, computer science is one of the few majors that will almost certainly get you a high-paying job straight out of college.)

Computer models might seem mysterious, or even untrustworthy, when the news anchor mentions them in passing. In fact, they’re no less scientific than the equations that Sheldon Cooper scrawls on his whiteboard. They’re just packaged together in a different form.

Modelling the Apocalypse

Let’s all put on our science-fiction hats and imagine that humans get wiped off the face of the Earth tomorrow. Perhaps a mysterious superbug kills us all overnight, or maybe we organize a mass migration to live on the moon. In a matter of a day, we’re gone without a trace.

If your first response to this scenario is “What would happen to the climate now that fossil fuel burning has stopped?” then you may be afflicted with Climate Science. (I find myself reacting like this all the time now. I can’t watch The Lord of the Rings without imagining how one would model the climate of Middle Earth.)

A handful of researchers, particularly in Canada, recently became so interested in this question that they started modelling it. Their motive was more than just morbid fascination – in fact, the global temperature change that occurs in such a scenario is a very useful metric. It represents the amount of warming that we’ve already guaranteed, and a lower bound for the amount of warming we can expect.

Initial results were hopeful. Damon Matthews and Andrew Weaver ran the experiment on the UVic ESCM and published the results. In their simulations, global average temperature stabilized almost immediately after CO2 emissions dropped to zero, and stayed approximately constant for centuries. The climate didn’t recover from the changes we inflicted, but at least it didn’t get any worse. The “zero-emissions commitment” was more or less nothing. See the dark blue line in the graph below:

However, this experiment didn’t take anthropogenic impacts other than CO2 into account. In particular, the impacts of sulfate aerosols and additional (non-CO2) greenhouse gases currently cancel out, so it was assumed that they would keep cancelling and could therefore be ignored.

But is this a safe assumption? Sulfate aerosols have a very short atmospheric lifetime – as soon as it rains, they wash right out. Non-CO2 greenhouse gases last much longer (although, in most cases, not as long as CO2). Consequently, you would expect a transition period in which the cooling influence of aerosols had disappeared but the warming influence of additional greenhouse gases was still present. The two forcings would no longer cancel, and the net effect would be one of warming.

Damon Matthews recently repeated his experiment, this time with Kirsten Zickfeld, and took aerosols and additional greenhouse gases into account. The long-term picture was still the same – global temperature remaining at present-day levels for centuries – but the short-term response was different. For about the first decade after human influences disappeared, the temperature rose very quickly (as aerosols were eliminated from the atmosphere) but then dropped back down (as additional greenhouse gases were eliminated). This transition period wouldn’t be fun, but at least it would be short. See the light blue line in the graph below:

We’re still making an implicit assumption, though. By looking at the graphs of constant global average temperature and saying “Look, the problem doesn’t get any worse!”, we’re assuming that regional temperatures are also constant for every area on the planet. In fact, half of the world could be warming rapidly and the other half could be cooling rapidly – a bad scenario indeed. From a single global metric, you just can’t tell.

A team of researchers led by Nathan Gillett recently modelled the regional response to a sudden cessation of CO2 emissions (other gases were ignored). They used a more complex climate model from Environment Canada, which is better suited for regional projections than the UVic ESCM.

The results were disturbing: even though the average global temperature stayed basically constant after CO2 emissions (following the A2 scenario) disappeared in 2100, regional temperatures continued to change. Most of the world cooled slightly, but Antarctica and the surrounding ocean warmed significantly. By the year 3000, the coasts of Antarctica were 9°C above preindustrial temperatures. This might easily be enough for the West Antarctic Ice Sheet to collapse.

Why didn’t this continued warming happen in the Arctic? Remember that the Arctic is an ocean surrounded by land, and temperatures over land change relatively quickly in response to a radiative forcing. Furthermore, the Arctic Ocean is small enough that it’s heavily influenced by temperatures on the land around it. In this simulation, the Arctic sea ice actually recovered.

On the other hand, Antarctica is land surrounded by a large ocean that mixes heat particularly well. As a result, the region has an extraordinarily high heat capacity, and takes a very long time to fully respond to a change in radiative forcing. So, even by the year 3000, it was still reacting to the forcing of the 21st century. The warming ocean surrounded the land and caused it to warm as well.

As a result of the cooling Arctic and warming Antarctic, the Intertropical Convergence Zone (the belt near the equator where the trade winds converge) shifted southward in the simulation. Consequently, precipitation over North Africa continued to decrease – a situation that was already bad by 2100. Counterintuitively, even though global warming had ceased, some of the impacts of warming continued to worsen.

These experiments, assuming an overnight apocalypse, are purely hypothetical. By definition, we’ll never be able to test their accuracy in the real world. However, as a lower bound for the expected impacts of our actions, the results are sobering.

Climate Change and Heat Waves

One of the most dangerous effects of climate change is its impact on extreme events. The extra energy present in a warmer world doesn’t distribute itself uniformly – it can come out in large bursts, manifesting itself as heat waves, floods, droughts, hurricanes, and tornadoes, to name a few. Consequently, warming the world by an average of 2 degrees is a lot more complicated than adding 2 to every weather station reading around the world.

Scientists have a difficult time studying the impacts of climate change on extreme events, because all these events could happen anyway – how can you tell if Hurricane Something is a direct result of warming, or just a fluke? Indeed, for events involving precipitation, like hurricanes or droughts, it’s not possible to answer this question. However, research is advancing to the point where we can begin to attribute individual heat waves to climate change with fairly high levels of confidence. For example, the recent extended heat wave in Texas, which was particularly devastating for farmers, probably wouldn’t have happened if it weren’t for global warming.

Extreme heat is arguably the easiest event for scientists to model. Temperature is one-dimensional and more or less follows a normal distribution for a given region. As climate change continues, temperatures increase (shifting the bell curve to the right) and become more variable (flattening the bell curve). The end result, as shown in part (c) of the figure below, is a significant increase in extremely hot weather:

Now, imagine that you get a bunch of weather station data from all across the world in 1951-1980, back before the climate had really started to warm. For every single record, find the temperature anomaly (difference from the average value in that place and on that day of the year). Plot the results, and you will get a normal distribution centred at 0. So values in the middle of the bell curve – i.e., temperatures close to the average – are the most likely, and temperatures on the far tails of the bell curve – i.e. much warmer or much colder than the average – are far less likely.

As any statistics student knows, 99.7% of the Earth’s surface should have temperatures within three standard deviations of the mean (this is just an interval, with length dependent on how flat the bell curve is) at any given time. So if we still had the same climate we did between 1951 and 1980, temperatures more than three standard deviations above the mean would cover 0.15% of the Earth’s surface.

However, in the past few years, temperatures three standard deviations above average have covered more like 10% of the Earth’s surface. Even some individual heat waves – like the ones in Texas and Russia over the past few years – have covered so much of the Earth’s surface on their own that they blow the 0.15% statistic right out of the water. Under the “old” climate, they almost certainly wouldn’t have happened. You can only explain them by shifting the bell curve to the right and flattening it. For this reason, we can say that these heat waves were caused by global warming.
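
To make these numbers concrete, here is a minimal sketch of the tail-area calculation, assuming temperature anomalies follow a normal distribution as described above. The shift (0.6 standard deviations) and flattening (1.3x) factors are invented for illustration, not taken from the underlying studies.

    # Fraction of a normal distribution lying above a threshold, measured in
    # units of the OLD climate's standard deviations.
    from math import erfc, sqrt

    def area_above(threshold, mean=0.0, sd=1.0):
        z = (threshold - mean) / sd
        return 0.5 * erfc(z / sqrt(2))

    print(area_above(3.0))                    # old climate: ~0.0013 (the 0.15% figure, rounded)
    print(area_above(3.0, mean=0.6, sd=1.3))  # shifted and flattened: ~0.03, over 20 times more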

Here’s a graph of the bell curves we’re talking about, in this case for the months of June, July, and August. The red, yellow and green lines are the old climate; the blue and purple lines are the new climate. Look at the area under the curve to the right of x = 3: it’s almost nothing beneath the old climate, but quite significant beneath the new climate.

It’s very exciting that, using basic statistical methods, we can now attribute specific heat waves to climate change. On the other hand, it’s very depressing, because it goes to show that such events will become far more likely as the climate continues to change, and the bell curve shifts inexorably to the right.

How do climate models work?

Also published at Skeptical Science

This is a climate model:

T = [(1-α)S / (4εσ)]^(1/4)

(T is temperature, α is the albedo, S is the incoming solar radiation, ε is the emissivity, and σ is the Stefan-Boltzmann constant)

An extremely simplified climate model, that is. It’s one line long, and it’s at the heart of every computer model of global warming. Using basic thermodynamics, it calculates the temperature of the Earth based on incoming sunlight and the reflectivity of the surface. The model is zero-dimensional, treating the Earth as a single point with no time dependence. It doesn’t consider the greenhouse effect, ocean currents, nutrient cycles, volcanoes, or pollution.
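
For the curious, here is that one-line model as runnable code – a minimal sketch using typical textbook parameter values, not numbers from any particular study.

    # Zero-dimensional energy balance: absorbed sunlight, (1 - albedo) * S / 4,
    # balances the heat radiated by a body at temperature T.
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W/m2/K^4)

    def equilibrium_temperature(S=1361.0, albedo=0.3, emissivity=1.0):
        # Solve (1 - albedo) * S / 4 = emissivity * SIGMA * T^4 for T (in Kelvin).
        return ((1 - albedo) * S / (4 * emissivity * SIGMA)) ** 0.25

    print(equilibrium_temperature())  # about 255 K, i.e. -18°C

With the emissivity at 1, this gives about 255 K (-18°C); dropping it to roughly 0.61 crudely stands in for the greenhouse effect and recovers the observed average of about 288 K.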

If you fix these deficiencies, the model becomes more and more complex. You have to derive many variables from physical laws, and use empirical data to approximate certain values. You have to repeat the calculations over and over for different parts of the Earth. Eventually the model is too complex to solve using pencil, paper and a pocket calculator. It’s necessary to program the equations into a computer, and that’s what climate scientists have been doing ever since computers were invented.

A pixellated Earth

Today’s most sophisticated climate models are called GCMs, which stands for General Circulation Model or Global Climate Model, depending on who you talk to. On average, they are about 500 000 lines of computer code long, and mainly written in Fortran, a scientific programming language. Despite the huge jump in complexity, GCMs have much in common with the one-line climate model above: they’re just a lot of basic physics equations put together.

Computers are great for doing a lot of calculations very quickly, but they have a disadvantage: computers are discrete, while the real world is continuous. To understand the term “discrete”, think about a digital photo. It’s composed of a finite number of pixels, which you can see if you zoom in far enough. The existence of these indivisible pixels, with clear boundaries between them, makes digital photos discrete. But the real world doesn’t work this way. If you look at the subject of your photo with your own eyes, it’s not pixellated, no matter how close you get – even if you look at it through a microscope. The real world is continuous (unless you’re working at the quantum level!).

Similarly, the surface of the world isn’t actually split up into three-dimensional cells (you can think of them as cubes, even though they’re usually wedge-shaped) where every climate variable – temperature, pressure, precipitation, clouds – is exactly the same everywhere in that cell. Unfortunately, that’s how scientists have to represent the world in climate models, because that’s the only way computers work. The same strategy is used for the fourth dimension, time, with discrete “timesteps” in the model, indicating how often calculations are repeated.

It would be fine if the cells could be really tiny – like a high-resolution digital photo that looks continuous even though it’s discrete – but doing calculations on cells that small would take so much computer power that the model would run slower than real time. As it is, the cubes are on the order of 100 km wide in most GCMs, and timesteps are on the order of hours to minutes, depending on the calculation. That might seem huge, but it’s about as good as you can get on today’s supercomputers. Remember that doubling the resolution of the model won’t just double the running time – instead, the running time will increase by a factor of sixteen (one doubling for each dimension).
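
That factor of sixteen is simple arithmetic, sketched below under the same assumption the text makes: refining the grid in each of the three spatial dimensions also requires a proportionally shorter timestep.

    # Relative cost of refining a model grid by some factor: one multiple of
    # the factor per spatial dimension, plus one more for the shorter timestep.
    def relative_cost(factor, spatial_dims=3):
        return factor ** (spatial_dims + 1)

    print(relative_cost(2))  # 16: doubling the resolution costs ~16x the compute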

Despite the seemingly enormous computer power available to us today, GCMs have always been limited by it. In fact, early computers were developed, in large part, to facilitate atmospheric models for weather and climate prediction.

Cracking the code

A climate model is actually a collection of models – typically an atmosphere model, an ocean model, a land model, and a sea ice model. Some GCMs split up the sub-models (let’s call them components) a bit differently, but that’s the most common arrangement.

Each component represents a staggering amount of complex, specialized processes. Here are just a few examples from the Community Earth System Model, developed at the National Center for Atmospheric Research in Boulder, Colorado:

  • Atmosphere: sea salt suspended in the air, three-dimensional wind velocity, the wavelengths of incoming sunlight
  • Ocean: phytoplankton, the iron cycle, the movement of tides
  • Land: soil hydrology, forest fires, air conditioning in cities
  • Sea Ice: pollution trapped within the ice, melt ponds, the age of different parts of the ice

Each component is developed independently, and as a result, they are highly encapsulated (bundled separately in the source code). However, the real world is not encapsulated – the land and ocean and air are very interconnected. Some central code is necessary to tie everything together. This piece of code is called the coupler, and it has two main purposes (a toy sketch follows the list below):

  1. Pass data between the components. This can get complicated if the components don’t all use the same grid (system of splitting the Earth up into cells).
  2. Control the main loop, or “time stepping loop”, which tells the components to perform their calculations in a certain order, once per time step.
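
Here is a toy sketch of those two jobs in code. The component interface (step, export_fields, import_fields) is invented for this illustration – real couplers also handle regridding between different grids, parallel communication, and much more.

    # A toy coupler: advances each component once per timestep, then passes
    # data between them. Grid interpolation is omitted for brevity.
    class ToyCoupler:
        def __init__(self, components):
            self.components = components  # e.g. [atmosphere, ocean, land, sea_ice]

        def run(self, n_steps):
            for _ in range(n_steps):
                # Purpose 2: control the main loop, telling each component
                # to perform its calculations in a fixed order.
                for component in self.components:
                    component.step()
                # Purpose 1: pass data between the components.
                for sender in self.components:
                    fields = sender.export_fields()
                    for receiver in self.components:
                        if receiver is not sender:
                            receiver.import_fields(fields)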

For example, take a look at the IPSL (Institut Pierre Simon Laplace) climate model architecture. In the diagram below, each bubble represents an encapsulated piece of code, and the number of lines in this code is roughly proportional to the bubble’s area. Arrows represent data transfer, and the colour of each arrow shows where the data originated:

We can see that IPSL’s major components are atmosphere, land, and ocean (which also contains sea ice). The atmosphere is the most complex model, and land is the least. While both the atmosphere and the ocean use the coupler for data transfer, the land model does not – it’s simpler just to connect it directly to the atmosphere, since it uses the same grid, and doesn’t have to share much data with any other component. Land-ocean interactions are limited to surface runoff and coastal erosion, which are passed through the atmosphere in this model.

You can see diagrams like this for seven different GCMs, as well as a comparison of their different approaches to software architecture, in this summary of my research.

Show time

When it’s time to run the model, you might expect that scientists initialize the components with data collected from the real world. Actually, it’s more convenient to “spin up” the model: start with a dark, stationary Earth, turn the Sun on, start the Earth spinning, and wait until the atmosphere and ocean settle down into equilibrium. The resulting data fits perfectly into the cells, matches up nicely with observations, and stays within the bounds of the real climate – it could easily pass for real weather.

Scientists feed input files into the model, which contain the values of certain parameters, particularly agents that can cause climate change. These include the concentration of greenhouse gases, the intensity of sunlight, the amount of deforestation, and volcanoes that should erupt during the simulation. It’s also possible to give the model a different map to change the arrangement of continents. Through these input files, it’s possible to recreate the climate from just about any period of the Earth’s lifespan: the Jurassic Period, the last Ice Age, the present day…and even what the future might look like, depending on what we do (or don’t do) about global warming.
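
As a rough illustration, an input file boils down to a set of named parameters like the following. The names and structure here are invented for this sketch, and don’t correspond to any real model’s input format.

    # A hypothetical input-parameter set for one simulation.
    experiment = {
        "co2_ppm": 280.0,                         # greenhouse gas concentration
        "solar_constant": 1361.0,                 # intensity of sunlight (W/m2)
        "deforestation_fraction": 0.0,            # land cleared per year
        "volcanic_eruptions": [(1883, "large")],  # eruptions to simulate
        "start_year": 1850,
        "end_year": 2100,
    }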

The highest resolution GCMs, on the fastest supercomputers, can simulate about 1 year for every day of real time. If you’re willing to sacrifice some complexity and go down to a lower resolution, you can speed things up considerably, and simulate millennia of climate change in a reasonable amount of time. For this reason, it’s useful to have a hierarchy of climate models with varying degrees of complexity.

As the model runs, every cell outputs the values of different variables (such as atmospheric pressure, ocean salinity, or forest cover) into a file, once per time step. The model can average these variables over space and time, and calculate changes in the data. When the model is finished running, visualization software converts the rows and columns of numbers into more digestible maps and graphs.
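
A minimal sketch of that post-processing step, assuming the output landed in a NetCDF file with a temperature variable – the filename and variable name here are invented.

    import xarray as xr

    ds = xr.open_dataset("model_output.nc")

    # Average over space (a real analysis would weight each cell by its area).
    global_mean = ds["temperature"].mean(dim=["lat", "lon"])

    # Average over time, reducing each year of timesteps to a single value.
    annual_mean = global_mean.groupby("time.year").mean()

    annual_mean.plot()  # hand off to visualization (matplotlib, via xarray)

For example, this model output shows temperature change over the next century, depending on how many greenhouse gases we emit: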

Predicting the past

So how do we know the models are working? Should we trust the predictions they make for the future? It’s not reasonable to wait for a hundred years to see if the predictions come true, so scientists have come up with a different test: tell the models to predict the past. For example, give the model the observed conditions of the year 1900, run it forward to 2000, and see if the climate it recreates matches up with observations from the real world.

This 20th-century run is one of many standard tests to verify that a GCM can accurately mimic the real world. It’s also common to recreate the last ice age, and compare the output to data from ice cores. While GCMs can travel even further back in time – for example, to recreate the climate that dinosaurs experienced – proxy data is so sparse and uncertain that you can’t really test these simulations. In fact, much of the scientific knowledge about pre-Ice Age climates actually comes from models!

Climate models aren’t perfect, but they are doing remarkably well. They pass the tests of predicting the past, and go even further. For example, scientists don’t know what causes El Niño, a phenomenon in the Pacific Ocean that affects weather worldwide. There are some hypotheses on what oceanic conditions can lead to an El Niño event, but nobody knows what the actual trigger is. Consequently, there’s no way to program El Niños into a GCM. But they show up anyway – the models spontaneously generate their own El Niños, somehow using the basic principles of fluid dynamics to simulate a phenomenon that remains fundamentally mysterious to us.

In some areas, the models are having trouble. Certain wind currents are notoriously difficult to simulate, and calculating regional climates requires an unaffordably high resolution. Phenomena that scientists can’t yet quantify, like the processes by which glaciers melt, or the self-reinforcing cycles of thawing permafrost, are also poorly represented. However, not knowing everything about the climate doesn’t mean scientists know nothing. Incomplete knowledge does not imply nonexistent knowledge – you don’t need to understand calculus to be able to say with confidence that 9 x 3 = 27.

Also, history has shown us that when climate models make mistakes, they tend to be too stable, and underestimate the potential for abrupt changes. Take the Arctic sea ice: just a few years ago, GCMs were predicting it would completely melt around 2100. Now, the estimate has been revised to 2030, as the ice melts faster than anyone anticipated:

Answering the big questions

At the end of the day, GCMs are the best prediction tools we have. If they all agree on an outcome, it would be silly to bet against them. However, the big questions, like “Is human activity warming the planet?”, don’t even require a model. The only things you need to answer those questions are a few fundamental physics and chemistry equations that we’ve known for over a century.

You could take climate models right out of the picture, and the answer wouldn’t change. Scientists would still be telling us that the Earth is warming, humans are causing it, and the consequences will likely be severe – unless we take action to stop it.

What Can One Person Do?

Next week, I will be giving a speech on climate change to the green committee of a local United Church. They are particularly interested in science and solutions, so I wrote the following script, drawing heavily from my previous presentations. I would really appreciate feedback and suggestions for this presentation.

Citations will be on the slides (which I haven’t made yet), so they’re not in the text of this script. Let me know if there’s a particular reference you’re wondering about, but they’re probably common knowledge within this community by now.

Enjoy!

Climate change is depressing. I know that really well, because I’ve been studying it for over two years. I’m quite practiced at keeping the scary stuff contained in the analytical part of my brain, and not thinking of the implications – because the implications make you feel powerless. I’m sure that all of us here wish we could stop global warming on our own. So we work hard to reduce our carbon footprints, and then we feel guilty every time we take the car out or buy something that was made in China or turn up the heat a degree.

The truth is, though, the infrastructure of our society doesn’t support a low-carbon lifestyle. Look at the quality of public transit in Winnipeg, or the price of local food. We can work all we want at changing our practices, but it’s an uphill battle. If we change the infrastructure, though – if we put a price on carbon so that sustainable practices are cheaper and easier than using fossil fuels – people everywhere will subsequently change their practices.

Currently, governments – particularly in North America – aren’t too interested in sustainable infrastructure, because they don’t think people care. Politicians only say what they think people want to hear. So, should we go dress up as polar bears and protest in front of Parliament to show them we care? That might work, but they will probably just see us as crazy environmentalists, a fringe group. We need a critical mass of people that care about climate change, understand the problem, and want to fix it. An effective solution requires top-down organization, but that won’t happen until there’s a bottom-up, grassroots movement of people who care.

I believe that the most effective action one person can take in the fight against global warming is to talk to others and educate others. I believe most people are good, and sane, and reasonable. They do the best they can, given their level of awareness. If we increase that awareness, we’ll gain political will for a solution. And so, in an effort to practice what I preach, I’m going to talk to you about the issue.

The science that led us to the modern concern about climate change began all the way back in 1824, when a man named Joseph Fourier discovered the greenhouse effect. Gases such as carbon dioxide make up less than one percent of the Earth’s atmosphere, but they trap enough heat to keep the Earth over 30 degrees Celsius warmer than it would be otherwise.

Without greenhouse gases, there could be no life on Earth, so they’re a very good thing – until their concentration changes. If you double the amount of CO2 in the air, the planet will warm, on average, somewhere around 3 degrees. The first person to realize that humans could cause this kind of change, through the burning of fossil fuels releasing CO2, was Svante Arrhenius, in 1896. So this is not a new theory by any means.

For a long time, scientists assumed that any CO2 we emitted would just get absorbed by the oceans. In 1957, Roger Revelle showed that wasn’t true. The very next year, Charles Keeling decided to test this out, and started measuring the carbon dioxide content of the atmosphere. Now, Arrhenius had assumed that it would take thousands of years to double CO2 from the preindustrial value of 280 ppm (which we know from ice cores), but the way we’re going, we’ll get there in just a few decades. We’ve already reached 390 ppm. That might not seem like a lot, but 390 ppm of arsenic in your coffee would kill you. Small changes can have big effects.

Around the 1970s, scientists realized that people were exerting another influence on the climate. Many forms of air pollution, known as aerosols, have a cooling effect on the planet. In the 70s, the warming from greenhouse gases and the cooling from aerosols were cancelling each other out, and scientists were split as to which way it would go. There was one paper, by Stephen Schneider, which even said it could be possible to cause an ice age, if we put out enough aerosols and greenhouse gases stayed constant. However, as climate models improved, and governments started to regulate air pollution, a scientific consensus emerged that greenhouse gases would win out. Global warming was coming – it was just a question of when.

In 1988, James Hansen, who is arguably the top climate scientist in the world today, claimed it had arrived. In a famous testimony to the U.S. Congress, he said that “the greenhouse effect has been detected, and it is changing our climate now.” Many scientists weren’t so sure, and thought it was too early to make such a bold statement, but Hansen turned out to be right. Since about 1975, the world has been warming, more quickly than it has for at least the last 55 million years.

Over the past decade, scientists have even been able to rule out the possibility that the warming is caused by something else, like a natural cycle. Different causes of climate change have slightly different effects – like the pattern of warming in different layers of the atmosphere, the amount of warming in summer compared to winter, or at night compared to in the day, and so on. Ben Santer pioneered attribution studies: examining these effects in order to pinpoint a specific cause. And so far, nobody has been able to explain how the recent warming could not be caused by us.

Today, there is a remarkable amount of scientific agreement surrounding this issue. Between 97 and 98% of climate scientists, virtually 100% of peer-reviewed studies, and every scientific organization in the world agree that humans are causing the Earth to warm. The evidence for climate change is not a house of cards, where you take one piece out and the whole theory falls apart. It’s more like a mountain. Scrape a handful of pebbles off the top, but the mountain is still there.

However, if you take a step outside of the academic community, this convergence of evidence is more or less invisible. The majority of newspaper articles, from respected outlets like the New York Times or the Wall Street Journal, spend at least as much time arguing against this consensus as they do arguing for it. They present ideas such as “maybe it’s a natural cycle” or “CO2 has no effect on climate” that scientists disproved years ago. The media is stuck in the past. Some of them are only stuck in the 1980s, but others are stuck all the way back in 1800. Why is it like this?

Part of it comes from good, but misguided, intentions. When it comes to climate change, most journalists follow the rule of balance: presenting “two equal sides”, staying neutral, letting the reader form their own opinion. This works well when the so-called controversy is one of political or social nature, like tax levels or capital punishment. In these cases, there is no right answer, and people are usually split into two camps. But when the question at hand is one of science, there is a right answer – even if we haven’t found it yet – so some explanations are better than others, and some can be totally wrong. Would you let somebody form their own opinion on Newton’s Laws of Motion or the reality of photosynthesis? Sometimes scientists are split into two equal groups, but sometimes they’re split into three or four or even a dozen. How do you represent that as two equal sides? Sometimes, like we see with climate change, pretty much all the scientists are in agreement, and the two or three percent which aren’t don’t really publish, because they can’t back up their statements and nobody really takes them seriously. So framing these two groups as having equal weight in the scientific community is completely incorrect. It exaggerates the extreme minority, and suppresses everyone else. Being objective is not always the same as being neutral, and it’s particularly important to remember that when our future is at stake.

Another reason to frame climate science as controversial is that it makes for a much better story. Who really wants to read about scientists agreeing on everything? Journalists try to write stories that are exciting. Unfortunately, that goal can begin to overshadow accuracy.

There are also fewer journalists than there used to be, and almost no science journalists in the mainstream media – general reporters cover science issues instead. Furthermore, a few decades ago, journalists used to get a week or two to write a story. Now they often have less than a day, because speed and availability of news have become more important than quality.

However, perhaps the most important – and disturbing – explanation for this inaccurate framing is that the media has been very complicit in spreading the message of climate change deniers. They call themselves skeptics, but I don’t think that’s accurate. A true skeptic will only accept a claim given sufficient evidence. That’s a good thing, and all scientists should be skeptics. But it’s easy to see that these people will never accept human-caused climate change, no matter what the evidence. At the same time, they blindly accept any shred of information that seems to support their cause, without applying any skepticism at all. That’s denial, so let’s not compliment them by calling them skeptics.

Climate change deniers will use whatever they can get – whether or not it’s legitimate, whether or not it’s honest – as proof that climate change is natural, or nonexistent, or a global conspiracy. They’ll tell you that volcanoes emit more CO2 than humans, but volcanoes actually emit about 1% of what we do. They’ll say that global warming has stopped because 2008 was cooler than 2007. If climatologists organize a public lecture in an effort to communicate accurate scientific information, they’ll say that scientists are dogmatic, subscribe to censorship, and will not allow any other opinions to be considered.

Some of these questionable sources are organizations, like a dozen or so lobby groups that have been paid a lot of money by oil companies to say that global warming is fake. Some of them are individuals, like US Senator James Inhofe, who was the environment chair under George W. Bush, and says that “global warming is the greatest hoax ever imposed upon the American people.” Some of them have financial motivations, and some of them have ideological motivations, but their motivations don’t really matter – all that matters is that they are saying things that are inaccurate, and misleading, and just plain wrong.

There has been a recent, and very disturbing, new tactic of deniers. Instead of attacking the science, they’ve begun to attack the integrity of individual scientists. In November 2009, they stole thirteen years of emails from a top climate research group in the UK, and spread stories all over the media that said scientists were caught fudging their data and censoring critics. Since then, they’ve been cleared of these charges by eight independent investigations, but you wouldn’t know it by reading the newspaper. For months, nearly every media outlet in the developed world spread what was, essentially, libel, and the only one that has formally apologized for its inaccurate coverage is the BBC.

In the meantime, there has been tremendous personal impact on the scientists involved. Many of them have received death threats, and Phil Jones, the director of the research group, was nearly driven to suicide. Another scientist, who wishes to remain anonymous, had a dead animal dumped on his doorstep and now travels with bodyguards. The Republican Party, which prides itself on fiscal responsibility, is pushing for more and more investigations, because they just can’t accept that the scientists are innocent…and James Inhofe, the “global warming is a hoax” guy, attempted to criminally prosecute seventeen researchers, most of whom had done nothing but occasionally correspond with the scientists who had their emails stolen. It’s McCarthyism all over again.

So this is where we are. Where are we going?

The Intergovernmental Panel on Climate Change, or IPCC, which collects and summarizes all the scientific literature about climate change, said in 2007 that under a business-as-usual scenario, where we keep going the way we’re going, the world will warm somewhere around 4 degrees Celsius by 2100. Unfortunately, this report was out of date almost as soon as it was published, and has widely been criticized for being too conservative. The British Meteorological Office published an updated figure in 2009 that estimated we will reach 4 degrees by the 2070s.

I will still be alive then (I hope!). I will likely have kids and even grandkids by then. I’ve spent a lot of time researching climate change, and the prospect of a 4 degree rise is terrifying to me. At 4 degrees, we will have lost control of the climate – even if we stop emitting greenhouse gases, positive feedbacks in the climate system will make sure the warming continues. We will have committed somewhere between 40 and 70 percent of the world’s species to extinction. Prehistoric records indicate that we can expect 40 to 80 metres of eventual sea level rise – it will take thousands of years to get there, but many coastal cities will be swamped within the first century. Countries – maybe even developed countries – will be at war over food and water. All this…within my lifetime.

And look at our current response. We seem to be spending more time attacking the scientists who discovered the problem than we are negotiating policy to fix it. We should have started reducing our greenhouse gas emissions twenty years ago, but if we start now, and work really hard, we do have a shot at stopping the warming at a point where we stay in control. Technically, we can do it. It’s going to take an unprecedented amount of political will and international communication.

Everybody wants to know, “What can I do?” to fix the problem. Now, magazines everywhere are happy to tell you “10 easy ways to reduce your carbon footprint” – ride your bike, and compost, and buy organic spinach. That’s not really going to help. Say that enough people reduce their demand on fossil fuels: supply and demand dictates that the price will go down, and someone else will say, “Hey, gas is cheap!” and use more of it. Grassroots sentiment isn’t going to be enough. We need a price on carbon, whether it’s a carbon tax or cap-and-trade…but governments won’t do that until a critical mass of people demand it.

So what can you do? You can work on achieving that critical mass. Engage the apathetic. Educate people. Talk to them about climate change – it’s scary stuff, but suck it up. We’re all going to need to face it. Help them to understand and care about the problem. Don’t worry about the crazy people who shout about socialist conspiracies; they’re not worth your time. They’re very loud, but there aren’t really very many of them. And in the end, we all get one vote.

An Unmeasured Forcing

“It is remarkable and untenable that the second largest forcing that drives global climate change remains unmeasured,” writes Dr. James Hansen, the head of NASA’s climate change research team, and arguably the world’s top climatologist.

The word “forcing” refers to a factor, such as changes in the Sun’s output or in atmospheric composition, that exerts a warming or cooling influence on the Earth’s climate. The climate doesn’t magically change for no reason – it is always driven by something. Scientists measure these forcings in Watts per square metre – imagine a Christmas tree lightbulb over every square metre of the Earth’s surface, and you have 1 W/m2 of positive forcing.

Currently, the largest forcing on the Earth’s climate is that of increasing greenhouse gases from burning fossil fuels. These exert a positive, or warming, forcing – hence the term “global warming”. However, a portion of this positive forcing is being cancelled out by the second-largest forcing, which is also anthropogenic. Many forms of air pollution, collectively known as aerosols, exert a negative (cooling) forcing on the Earth’s climate. They do this in two ways: the direct albedo effect (scattering solar radiation so it never reaches the planet), and the indirect albedo effect (providing surfaces for clouds to form, which then scatter radiation themselves). A large positive forcing plus a medium negative forcing adds up to a moderate increase in global temperatures.

Unfortunately, a catch-22 exists with aerosols. As many aerosols are directly harmful to human health, the world is beginning to regulate them through legislation such as the American Clean Air Act. As this pollution decreases, its detrimental health effects will lessen, but so will its ability to partially cancel out global warming.

The problem is that we don’t know how much warming the aerosols are cancelling – that is, we don’t know the magnitude of the forcing. So, if all air pollution ceased tomorrow, the world could experience a small jump in net forcing, or a large jump. Global warming would suddenly become much worse, but we don’t know just how much.

The forcing from greenhouse gases is known with a high degree of accuracy – it’s just under 3 W/m2. However, all we know about aerosol forcing is that it’s somewhere around -1 or -2 W/m2 – an estimate is the best we can do. The reason for this dichotomy lies in the ease of measurement. Greenhouse gases last a long time (on the order of centuries) in the atmosphere, and mix through the air, moving towards a uniform concentration. An air sample from a remote area of the world, such as Antarctica or parts of Hawaii, will be uncontaminated by nearby cars and factories, and will give an accurate value of the global atmospheric carbon dioxide concentration (the same can be done for other greenhouse gases, such as methane). From these measurements, molecular physics can tell us how large the forcing is. Direct records of carbon dioxide concentrations have been kept since the late 1950s:

However, aerosols only stay in the troposphere for a few days, as precipitation washes them out of the air. For this reason, they don’t have time to disperse evenly, and measurements are not so simple. The only way to gain accurate measurements of their concentrations is with a satellite. NASA recently launched the Glory satellite for just this purpose. Unfortunately, it failed to reach orbit (an inherent risk for satellites), and given the current political climate in the United States, it seems overly optimistic to hope for funding for a new one any time soon. Luckily, if this project were carried out by the private sector, without the need for money-draining government review panels, James Hansen estimates that it could be achieved with a budget of around $100 million.
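
To see why the aerosol uncertainty matters, plug in the rough values quoted above – a back-of-the-envelope sketch, not a result from any study.

    # Net anthropogenic forcing from the figures in this post. The aerosol
    # range is the whole point: it is an estimate, not a measurement.
    greenhouse_forcing = 2.9                # W/m2, known fairly accurately
    aerosol_low, aerosol_high = -2.0, -1.0  # W/m2, poorly constrained

    print(greenhouse_forcing + aerosol_low)   # ~0.9 W/m2
    print(greenhouse_forcing + aerosol_high)  # ~1.9 W/m2, more than double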

An accurate value for aerosol forcing can only be achieved with accurate measurements of aerosol concentration. Knowing this forcing would be immensely helpful for climate researchers, as it impacts not only the amount of warming we can expect, but also how long it will take to play out, as the planet approaches thermal equilibrium. Armed with better knowledge of these details, policymakers will be able to plan better for the future, regarding both mitigation of and adaptation to climate change. Finally measuring the impact of aerosols, instead of just estimating it, could give our understanding of the climate system the biggest bang for its buck.

Ozone Depletion and Climate Change

“Global warming…doesn’t that have something to do with the ozone?” Well, no. Environmental issues are not all the same. It’s common for people to confuse climate change and ozone depletion, but they are separate issues – although they are indirectly connected in some interesting ways.

Ozone, which is made of three oxygen atoms stuck together (instead of two, which is what normal oxygen gas is made of), is vital to life on Earth. It forms a layer in the stratosphere, the second layer up in the atmosphere, that is very good at absorbing ultraviolet (UV) radiation from the Sun. UV radiation severely damages organisms if enough of it reaches the surface. The 3% or less that gets through the ozone layer already gives us sunburns and skin cancer, so you can imagine what the situation would be like if the layer wasn’t there at all.

In the middle of the 20th century, synthetic gases known as chlorofluorocarbons (CFCs) became popular for use in refrigerators and aerosol products, among other applications. They were non-toxic, and did not react easily with other substances, so they were used widely. However, their chemical stability allowed them to last long enough to drift into the stratosphere after they were emitted.

Once in the stratosphere, the CFCs were exposed to UV radiation, which was able to break them down, liberating free chlorine atoms (Cl) – a very reactive substance indeed. In fact, Cl acts as a catalyst in the decomposition of ozone, allowing two ozone molecules to become three ordinary oxygen molecules, losing their UV-absorbing power in the process. Since catalysts are not used up in a reaction, the same Cl radical can continue to destroy ozone until it reacts with something else in the atmosphere and is removed.
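
The textbook form of this catalytic cycle is sketched below; the free oxygen atom in the third step is supplied by the normal breakup of ozone under UV light, which is how two ozone molecules end up as three oxygen molecules.

    O3 + UV → O2 + O      (ozone photolysis supplies a free oxygen atom)
    Cl + O3 → ClO + O2    (chlorine attacks ozone)
    ClO + O → Cl + O2     (chlorine is regenerated, ready to react again)
    ------------------------------------------------
    Net: 2 O3 → 3 O2      (ozone destroyed; Cl survives to repeat the cycle)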

Over the poles, the stratosphere is cold enough for polar stratospheric clouds (PSCs) to form. These PSCs provided optimum conditions for the most reactive chlorine gas of all to form: ClO (chlorine monoxide). Now there wasn’t just a catalytic cycle of free Cl radicals depleting the ozone, there was also a cycle of ClO. It turns out that Antarctica was more favourable for ozone depletion than the Arctic, both because its temperatures were lower and because its system of wind currents prevented the ozone-depleting substances from drifting out of the area.

Before long, there was a hole in the ozone layer over Antarctica (due to the PSCs), and concentrations were declining in other locations too (due to the basic Cl reactions). The issue became a frontier for scientific research, and scientists Crutzen, Rowland, and Molina won the 1995 Nobel Prize in Chemistry for their work with atmospheric ozone.

In 1987, politicians worldwide decided to ban CFCs under the Montreal Protocol. This movement was largely successful, and the use of CFCs has become nearly negligible, especially in developed nations. They have been replaced with gases that safely decompose before they reach the stratosphere, so they don’t interfere with ozone. The regulations are working: the ozone hole in Antarctica has stabilized, and global stratospheric ozone concentrations have been on the rise since 1993.

In contrast, climate change is a product of greenhouse gases such as carbon dioxide. Unlike CFCs, most of them are not synthetic, and they are released from the burning of fossil fuels (coal, oil, and natural gas), not specific products such as refrigerators. Rather than destroying a natural process, like CFCs do, they strengthen one to the point of harm: the greenhouse effect. This phenomenon, which traps heat in the atmosphere, is absolutely vital, as the Earth would be too cold to support life without it. Increasing the concentrations of greenhouse gases with fossil fuels becomes too much of a good thing, though, as the greenhouse effect traps more heat, warming the planet up.

Just a few degrees Celsius of warming can cause major problems, as agricultural zones, wind and ocean currents, and precipitation patterns shift. The sea level rises, submerging coastal cities. Many species go extinct, as the climate changes faster than they can adapt. Basically, the definition of “normal” in which our civilization has developed and thrived is changing, and we can’t count on that stability any more.

Unlike the Montreal Protocol, efforts to reduce greenhouse gas emissions have more or less failed. Fossil fuels permeate every part of our lives, and until we shift the economy to run on clean energy instead, convincing governments to commit to reductions will be difficult at best. It remains to be seen whether or not we can successfully address this problem, like we did with ozone depletion.

Although these two issues are separate, they have some interesting connections. For example, PSCs form in cold areas of the stratosphere. That’s why the ozone hole is over Antarctica, and not somewhere else. Unfortunately, global warming is, paradoxically, cooling the stratosphere, as a stronger greenhouse effect means that less heat reaches the stratosphere. Therefore, as climate change progresses, it will make it easier for the ozone depletion reactions to occur, even though there are fewer CFCs.

Additionally, CFCs are themselves very strong greenhouse gases, but their use has been so drastically reduced that their radiative effects are of lesser concern to us. However, some of their replacements, HFCs, are greenhouse gases of similar strength. They don’t deplete the ozone, but, per molecule, they can be thousands of times stronger than carbon dioxide at trapping heat. Currently, their atmospheric concentrations are low enough that they contribute far less forcing than carbon dioxide, but it wouldn’t take a large increase in HFCs to put us in a bad situation, simply because they are so potent.

Finally, these two issues are similar in that ozone depletion provides a smaller-scale analogue for the kinds of political and economic changes we will have to make to address climate change:

  1. Unintended chemical side effects of our economy posed a serious threat to all species, including our own.
  2. Industry representatives and free-market fundamentalists fought tooth and nail against conclusive scientific findings, and the public became bewildered in a sea of misinformation.
  3. Governments worked together to find sensible alternatives and more or less solved the problem.

We’ve already seen the first two events happen with climate change. Will we see the third as well?

Extinction and Climate

Life on Earth does not enjoy change, and climate change is something it likes least of all. Every aspect of an organism’s life depends on climate, so if that variable changes, everything else changes too – the availability of food and water, the timing of migration or hibernation, even the ability of bodily systems to keep running.

Species can adapt to gradual changes in their environment through evolution, but climate change often moves too quickly for them to do so. It’s not the absolute temperature, then, but the rate of change that matters. Woolly mammoths and saber-toothed tigers thrived during the Ice Ages, but if the world were to shift back to that climate overnight, we would be in trouble.

Put simply, if climate change is large enough, quick enough, and on a global scale, it can be the perfect ingredient for a mass extinction. This is worrying, as we are currently on the cusp of a potentially devastating period of global warming, one that we are causing. Will our actions cause a mass extinction a few centuries down the line? We can’t tell the future of evolution, but we can look at the past for reference points.

There have been five major extinction events in the Earth’s history, which biologists refer to as “The Big Five”. The Ordovician-Silurian, Late Devonian, Permian-Triassic, Late Triassic, Cretaceous-Tertiary…they’re a bit of a mouthful, but all five happened before humans were around, and all five are associated with climate change. Let’s look at a few examples.

The most recent extinction event, the Cretaceous-Tertiary (K-T) extinction, is also the most well-known and extensively studied: it’s the event that killed the dinosaurs. Scientists are quite sure that the trigger for this extinction was an asteroid that crashed into the planet, leaving a crater near the present-day Yucatan Peninsula of Mexico. Devastation at the site would have been massive, but it was the indirect, climatic effects of the impact that killed species across the globe. Most prominently, dust and aerosols kicked up by the asteroid became trapped in the atmosphere, blocking and reflecting sunlight. As well as causing a dramatic, short-term cooling, the lack of sunlight reaching the Earth inhibited photosynthesis, so many plant species became extinct. This effect was carried up the food chain, as first herbivorous, then carnivorous, species became extinct. Dinosaurs, the dominant life form during the Cretaceous Period, completely died out, while insects, early mammals, and bird-like reptiles survived, as their small size and scavenging habits made it easier to find food.

However, life on Earth has been through worse than this apocalyptic scenario. The largest extinction in the Earth’s history, the Permian-Triassic extinction, occurred about 250 million years ago, right before the time of the dinosaurs. Up to 95% of all species on Earth were killed in this event, and life in the oceans was particularly hard-hit. It took 100 million years for the remaining species to recover from this extinction, nicknamed “The Great Dying”, and we are very lucky that life recovered at all.

So what caused the Permian-Triassic extinction? After the discovery of the K-T crater, many scientists assumed that impact events were a prerequisite for extinctions, but that probably isn’t the case. We can’t rule out the possibility that an asteroid aggravated existing conditions at the end of the Permian period. However, over the past few years, scientists have pieced together a plausible explanation for the Great Dying. It points to a trigger that is quite disturbing, given our current situation – global warming from greenhouse gases.

In the late Permian, a huge expanse of active volcanoes existed in what is now Siberia. They covered 4 million square kilometres, which is fifteen times the area of modern-day Britain (White, 2002). Over the years, these volcanoes pumped out massive quantities of carbon dioxide, increasing the average temperature of the planet. However, as the warming continued, a positive feedback kicked in: ice and permafrost melted, releasing methane that was previously safely frozen in. Methane is a far stronger greenhouse gas than carbon dioxide – over 100 years, it traps approximately 21 times more heat per molecule (IPCC AR4). Consequently, the warming became much more severe.

When the planet warms a lot in a relatively short period of time, a particularly nasty condition can develop in the oceans, known as anoxia. Since the polar regions warm more than the equator, the temperature difference between latitudes decreases. As global ocean circulation is driven by this temperature difference, ocean currents weaken significantly and the water becomes relatively stagnant. Without ocean turnover, oxygen doesn’t get mixed in – and it doesn’t help that warmer water can hold less oxygen to begin with. As a result of this oxygen depletion, bacteria in the ocean begin to produce hydrogen sulfide (H2S). That’s what makes rotten eggs smell bad, and it’s actually poisonous in large enough quantities. So if an organism wasn’t killed off by abrupt global warming, and was able to survive without much oxygen in the ocean (or didn’t live in the ocean at all), it would probably soon be poisoned by the hydrogen sulfide being formed in the oceans and eventually released into the atmosphere.

The Permian-Triassic extinction wasn’t the only time anoxia developed. It may have been a factor in the Late Triassic extinction, as well as smaller extinctions between the Big Five. Overall, it’s one reason why a warm planet tends to be less favourable to life than a cold one, as a 2008 study in the UK showed. The researchers examined 520 million years of data on fossils and temperature reconstructions, which encompasses almost the entire history of multicellular life on Earth. They found that high global temperatures were correlated with low levels of biodiversity (the number of species on Earth) and high levels of extinction, while cooler periods enjoyed high biodiversity and low extinction.

Our current situation is looking worse by the minute. Not only is the climate changing, but it’s changing in the direction least favourable to life. We don’t have volcanic activity anywhere near the scale of the Siberian Traps, but we have a source of carbon dioxide that could prove just as potent: ourselves. And worst of all, we could prevent much of the coming damage if we wanted to, but political will is disturbingly low.

How bad will it get? Only time, and our decisions, will tell. A significant number of the world’s species will probably become extinct. It’s conceivable that we could cause anoxia in the oceans, if we are both irresponsible and unlucky. It wouldn’t be too hard to melt most of the world’s ice, committing ourselves to an eventual sea level rise in the tens of metres. These long-range consequences would take centuries to develop, so none of us has to worry about experiencing them. Instead, they would fall to those who come after us, who would have had no part in causing – and failing to solve – the problem.

References:

Mayhew et al. (2008). A long-term association between global temperature and biodiversity, origination and extinction in the fossil record. Proceedings of the Royal Society B: Biological Sciences, 275: 47-53.

Twitchett (2006). The palaeoclimatology, palaeoecology, and palaeoenvironmental analysis of mass extinction events. Palaeogeography, Palaeoclimatology, Palaeoecology, 234(2-4): 190-213.

White (2002). Earth’s biggest “whodunnit”: unravelling the clues in the case of the end-Permian mass extinction. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 360: 2963-2985.

Benton and Twitchett (2003). How to kill (almost) all life: the end-Permian extinction event. Trends in Ecology & Evolution, 18(7): 358-365.

What’s the Warmest Year – and Does it Matter?

Cross-posted from NextGen Journal

Climate change is a worrying phenomenon, but watching it unfold can be fascinating. The beginning of a new year brings completed analyses of the previous year’s conditions. Perhaps the most eagerly awaited annual statistic is global temperature.

This year was no different – partway through 2010, scientists could tell that it had a good chance of being the warmest year on record. It turned out to be more or less tied for first place, as the top temperature analysis centres recently announced:

Why the small discrepancies in the rankings of 1998, 2005, and 2010? The answer lies mainly in the Arctic. Weather stations in the Arctic region are few and far between, as it’s difficult to maintain a permanent station on ice floes that move around and are melting away. Scientists, then, have two choices in their analyses: extrapolate Arctic temperature anomalies from the stations they do have, or leave the missing areas out entirely, which implicitly assumes that they’re warming at the global average rate. The first choice might give results that are off in either direction, but the second almost certainly underestimates warming, as it’s clear that climate change is affecting the Arctic much more, and much faster, than the global average. Currently, NASA is the only centre that extrapolates across the Arctic.
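The effect of that second choice is easy to demonstrate with a toy example. The area fractions and anomalies below are invented for illustration; real analyses use area-weighted gridded data:

    # Toy demonstration of the Arctic-coverage problem. Three zones
    # with hypothetical warming anomalies (deg C) and area fractions.
    zones = {
        "tropics":       (0.50, 0.4),
        "mid-latitudes": (0.45, 0.6),
        "arctic":        (0.05, 2.0),   # warming much faster
    }

    # Choice 1: include (extrapolated) Arctic data.
    with_arctic = sum(area * temp for area, temp in zones.values())

    # Choice 2: leave the Arctic out -- equivalent to assuming it
    # warms at the average rate of the covered area.
    covered = {k: v for k, v in zones.items() if k != "arctic"}
    covered_area = sum(area for area, _ in covered.values())
    without_arctic = sum(area * temp for area, temp in covered.values()) / covered_area

    print(f"with Arctic:    {with_arctic:.3f} deg C")    # 0.570
    print(f"without Arctic: {without_arctic:.3f} deg C") # 0.495, an underestimate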

But how useful is an annual measurement of global temperature? Not very, as it turns out. Short-term climate variability, most prominently El Nino and La Nina, impacts annual temperatures significantly. Furthermore, since this oscillation peaks in the winter, the thermal influence of an El Nino or La Nina event can fall entirely within one calendar year, or be split between two. The result is a graph that’s rather spiky:

A far more useful analysis involves plotting a 12-month running mean. Instead of compiling measurements only from January to December, they are also compiled from February to January, March to February, and so on. This yields twelve times as many data points, and prevents El Nino and La Nina events from being exaggerated:

This graph is better, but still not that useful. The natural spikiness of the El Nino cycle can, in the short term, obscure the underlying trend. Since an El Nino cycle takes between 3 and 7 years to complete, a 60-month (5-year) running mean lets the resulting ups and downs largely cancel out. Another influence on short-term temperature is the sunspot cycle, which operates on an 11-year period; a 132-month running mean smooths out that influence too. Both 60- and 132-month running means are shown in the graph below.
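Incidentally, running means like these take only a few lines to compute. A minimal sketch with pandas, using a synthetic monthly series in place of the real temperature record:

    # Compute 12-, 60-, and 132-month running means of a monthly
    # temperature anomaly series (synthetic data stands in for the
    # real record here).
    import numpy as np
    import pandas as pd

    months = pd.date_range("1880-01-01", periods=1700, freq="MS")
    rng = np.random.default_rng(0)
    # Gradual warming trend plus ENSO-like noise (both invented):
    anomaly = 0.0005 * np.arange(len(months)) + 0.1 * rng.standard_normal(len(months))
    series = pd.Series(anomaly, index=months)

    for window in (12, 60, 132):
        smoothed = series.rolling(window=window, center=True).mean()
        print(f"{window}-month mean, latest value: {smoothed.dropna().iloc[-1]:.3f}")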

A monthly statistic showing average global temperature over the previous 5 or 11 years may not be as exciting as an annual measurement of the previous year. But that’s the reality of climate change. It doesn’t make every month, or even every year, warmer than the last, and a short-term trend line means virtually nothing. In the climate system, trends are always obscured by noise, and human psychology leads us to pay far more attention to the noise. Nonetheless, the long-term warming trend since around 1975 is unmistakable when one looks at the data. A gradual, persistent change might not make the greatest headline, but that doesn’t mean it’s safe to ignore.

“It’s Just a Natural Cycle”

My second rebuttal for Skeptical Science. Thanks to all the folks who helped to review it! Further suggestions are welcome, as always. -Kate

“What if global warming is just a natural cycle?” This is perhaps one of the most common arguments raised by the average person, rather than by someone who makes a career out of denying climate change. Cyclical variations in climate are well known to the public; we all studied the ice ages in school. However, climate isn’t inherently cyclical.

A common misunderstanding characterizes the climate system as a pendulum: the planet will warm up to “cancel out” a previous period of cooling, spurred by some internal equilibrium. This view of the climate is incorrect. Internal variability moves energy between the ocean and the atmosphere, causing short-term warming and cooling of the surface in events such as El Nino and La Nina, and longer-term changes when similar cycles operate on decadal scales. However, internal forces do not cause climate change. Appreciable changes in climate result from changes in the energy balance of the Earth, which require “external” forcings, such as changes in solar output, albedo, and atmospheric greenhouse gases. These forcings can be cyclical, as they are in the ice ages, but they can also take entirely different forms.

For this reason, “it’s just a natural cycle” is a bit of a cop-out argument. The Earth doesn’t warm up because it feels like it. It warms up because something forces it to. Scientists keep track of natural forcings, but the observed warming of the planet over the second half of the 20th century can only be explained by adding in anthropogenic radiative forcings, namely increases in greenhouse gases such as carbon dioxide.
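The “forcing” in question is a quantity scientists can calculate. For carbon dioxide, a widely used simplified expression (Myhre et al., 1998) relates a concentration change to radiative forcing; here’s a minimal sketch, where the 390 ppm figure is an approximate value for around 2010:

    # Simplified radiative forcing for CO2 (Myhre et al., 1998):
    #   delta_F = 5.35 * ln(C / C0)  [watts per square metre]
    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        """Radiative forcing (W/m^2) relative to a baseline of
        c0_ppm; 280 ppm is roughly the preindustrial level."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    print(round(co2_forcing(390.0), 2))  # ~1.77 W/m^2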

Of course, it’s always possible that some natural cycle exists, unknown to scientists and their instruments, that is currently causing the planet to warm. There’s always a chance that we could be totally wrong. This omnipresent fact of science is called irreducible uncertainty, because it can never be entirely eliminated. However, it’s very unlikely that such a cycle exists.

Firstly, the hypothetical natural cycle would have to explain the observed “fingerprints” of greenhouse gas-induced warming. Even if, for the sake of argument, we were to discount the direct measurements showing an increased greenhouse effect, other lines of evidence point to anthropogenic causes. For example, the troposphere (the lowest layer of the atmosphere) is warming, while the layers above it, from the stratosphere up, are cooling, because heat is being trapped closer to the surface and less radiation escapes upward through them. This rules out cycles related to the Sun, as solar influences would warm the atmosphere in a far more uniform fashion. The only explanation that makes sense is greenhouse gases.

What about an internal cycle, perhaps involving volcanoes or the ocean, that releases massive amounts of greenhouse gases? This wouldn’t make sense either, not only because scientists keep track of volcanic and oceanic emissions of CO2 and know that they are small compared to anthropogenic emissions, but also because CO2 from fossil fuels has its own fingerprints. Fossil carbon is depleted in the carbon-13 isotope, which explains why the atmospheric ratio of carbon-13 to carbon-12 has been falling as anthropogenic carbon dioxide accumulates. Additionally, atmospheric oxygen (O2) is decreasing at the same rate that CO2 is increasing, because combustion consumes oxygen.
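The carbon-13 fingerprint follows from simple mass balance. A rough sketch with approximate values, deliberately ignoring the ocean and biosphere exchanges that buffer the real atmospheric change:

    # Two-source mixing sketch for the carbon-13 fingerprint.
    # delta-13C values (per mil) are approximate textbook figures.
    atm_carbon_gt   = 600.0   # preindustrial atmospheric carbon, GtC (approx.)
    atm_delta13c    = -6.5    # preindustrial atmospheric delta-13C, per mil
    fossil_added_gt = 200.0   # illustrative cumulative fossil carbon, GtC
    fossil_delta13c = -28.0   # typical fossil-fuel delta-13C, per mil

    mixed = (atm_carbon_gt * atm_delta13c + fossil_added_gt * fossil_delta13c) \
            / (atm_carbon_gt + fossil_added_gt)
    print(round(mixed, 2))  # about -11.9: more negative than -6.5,
                            # so the proportion of carbon-13 has fallen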

A natural cycle that fits all these fingerprints is nearly unfathomable. However, that’s not all the cycle would have to explain. It would also have to tell us why anthropogenic greenhouse gases are not having an effect. Either a century of basic physics and chemistry studying the radiative properties of greenhouse gases would have to be proven wrong, or the natural cycle would have to be unbelievably complex to prevent such dramatic anthropogenic emissions from warming the planet.

It is indeed possible that multidecadal climate variability, especially cycles originating in the Atlantic, could be contributing to recent warming, particularly in the Arctic. However, the amplitude of these cycles simply can’t explain the observed temperature change. Internal variability has always been superimposed on global surface temperature trends, but the magnitude – as well as the fingerprints – of the current warming clearly indicates that anthropogenic greenhouse gases are the dominant factor.

Despite all these lines of evidence, known climatic cycles are often trumpeted as the real cause, on the Internet and in the media. Many of these claims have been debunked on Skeptical Science, and every such cycle either isn’t in its warming phase, doesn’t fit the fingerprints, or both.

For example, we are warming far too fast to be simply coming out of the last ice age, and the Milankovitch cycles that drive glaciation show that we should, in fact, be very slowly heading into a new ice age (though anthropogenic warming is virtually certain to offset that influence).

The “1500-year cycle” that S. Fred Singer attributes warming to is, in fact, a change in distribution of thermal energy between the poles, not a net increase in global temperature, which is what we observe now.

The Little Ice Age following the Medieval Warm Period ended due to a slight increase in solar output (changes in both thermohaline circulation and volcanic activity also contributed), but that increase has since reversed, and global temperature and solar activity are now going in opposite directions. This also explains why the 11-year solar cycle could not be causing global warming.

ENSO (El Nino Southern Oscillation) and PDO (Pacific Decadal Oscillation) help to explain short-term variations, but have no long-term trend, warming or otherwise. Additionally, these cycles simply move thermal energy between the ocean and the atmosphere, and do not change the energy balance of the Earth.

As we can see, “it’s just a natural cycle” isn’t just a cop-out argument – it’s something that scientists have considered, studied, and ruled out long before you and I even knew what global warming was.