Who are the Skeptics?

Part 3 of a series of 5 for NextGen Journal
Adapted from part of an earlier post

As we discussed last time, there is a remarkable level of scientific consensus on the reality and severity of human-caused global warming. However, most members of the public are unaware of this consensus – a topic which we will focus on in the next installment. Anyone with an Internet connection or a newspaper subscription will be able to tell you that many scientists think global warming is natural or nonexistent. As we know, these scientists are in the vast minority, but they have enjoyed widespread media coverage. Let’s look at three of the most prominent skeptics, and examine what they’re saying.

S. Fred Singer is an atmospheric physicist and retired environmental science professor. He has rarely published in scientific journals since the 1960s, but he is very visible in the media. In recent years, he has claimed that the Earth has been cooling since 1998 (in 2006); that the Earth is warming, but naturally and unstoppably (in 2007); and that the warming is an artifact of the urban heat island effect (in 2009).

Richard Lindzen, also an atmospheric physicist, is far more active in the scientific community than Singer. However, most of his publications, including the prestigious IPCC report to which he contributed, conclude that climate change is real and caused by humans. He has published two papers arguing that climate change is not serious: a 2001 paper hypothesizing that clouds would provide a negative feedback to cancel out global warming, and a 2009 paper claiming that climate sensitivity (the amount of warming caused by a doubling of carbon dioxide) was very low. Both of these ideas were rebutted by the academic community, and Lindzen’s methodology was criticized. Lindzen has even publicly retracted his 2001 cloud claim. In his academic life, then, Lindzen appears to be a mainstream climate scientist – contributing to assessment reports, abandoning theories that are disproved, and publishing work that affirms the theory of anthropogenic climate change. When Lindzen talks to the media, however, his statements change. He has implied that the world is not warming by calling attention to the lack of warming in the Antarctic (in 2004) and the thickening of some parts of the Greenland ice sheet (in 2006), without explaining that both of these apparent contradictions are well understood by scientists and in no way disprove warming. He has also claimed that the observed warming is minimal and natural (in 2006).

Finally, Patrick Michaels is an ecological climatologist who occasionally publishes peer-reviewed studies, but none that support his more outlandish claims. In 2009 alone, Michaels said that the observed warming is below what computer models predicted, that natural variations in oceanic cycles such as El Niño explain most of the warming, and that human activity explains most of the warming but it’s nothing to worry about because technology will save us (cached copy, as the original was taken down).

While examining these arguments from skeptical scientists, something quickly becomes apparent: many of the arguments are contradictory. For example, how can the world be cooling if it is also warming naturally? Not only do the skeptics as a group seem unable to agree on a consistent explanation, but some of the individuals either change their minds every year or hold two contradictory theories at the same time. Additionally, none of these arguments are supported by the peer-reviewed literature. They are all elementary misconceptions that were debunked long ago. Multiple articles on this site could be devoted to rebutting such claims, but easy-to-read rebuttals for virtually every objection to human-caused climate change – including each of the claims from Singer, Lindzen and Michaels described above – are already available on Skeptical Science.

With a little bit of research, the claims of these skeptics quickly fall apart. Their arguments are so weak and inconsistent, and so rarely published in scientific venues, that it is hard to believe they are attempting to further our knowledge of science. However, their pattern of argument works well as a media strategy: most people will trust what a scientist says in the newspaper without researching his reputation or remembering his name. Over time, the public will come to remember dozens of so-called problems with the theory of anthropogenic climate change.

Is There Consensus?

Part 2 of a series of 5 for NextGen Journal

We hear the phrase “climate change consensus” tossed around all the time. But what does that even mean? And does it actually exist?

In Part 1 we discussed the concept of a scientific consensus: overwhelming agreement (but rarely unanimity) among experts. Of course, such a consensus could be wrong, but it wouldn’t be very sensible for the public to ignore it or bet against it. If 19 out of 20 doctors said you needed surgery to save your life, would you sit in the hospital bed and argue about their motives?

When it comes to climate change, the consensus view can be summarized as follows:

  1. Human emissions of greenhouse gases, mainly from the burning of fossil fuels, are a significant force on the global climate.
  2. The expected warming from this force is beginning to show up.

Often, people will write these two points in the opposite order: the Earth is warming, and it’s due to our actions. However, that’s not the order in which scientists discovered them. The academic community realized the Earth was going to warm decades before that warming became clear. Flipping the points around might imply “that the entirety of climate science is based upon a single correlation study”.

So, what do the scientists say? In fact, publishing climatologists – the people most specialized and knowledgeable with regard to climate change – are almost unanimous in their position: 96.2% say the Earth is warming, and 97.4% say humans are causing climate change. It’s hard to know why the second figure is higher than the first – perhaps one scientist in the study thought that humans are changing the climate, but that the effects of our actions hadn’t shown up yet (i.e., point 1 but not point 2).

A year later, others built on this study. They had a larger sample of climate scientists, 97-98% of whom agreed with the consensus position. Additionally, those who agreed had higher academic credibility than those who disagreed: they had published more papers (“expertise”) and been cited more times (“prominence”).

However, what matters is not so much what a scientist says as how they back it up. Having a Ph.D. doesn’t mean you get to stop supporting your claims. In the academic community, this is done in the peer-reviewed literature.

In 2004, a random sample of almost 1000 scientific studies on climate change was examined. 75% of the studies explicitly supported the consensus position, while the remaining 25% didn’t mention it – for example, some papers dealt with climate change millions of years ago, so today’s climate wasn’t relevant. Incredibly, not a single one disagreed with the consensus.

This still doesn’t imply unanimity – remember, it was a random sample, not the entire literature. A very few dissenting studies do get published each year, but they are such a tiny fraction of the total papers that it’s not surprising that none showed up in a sample of one thousand. Additionally, these papers generally fail to stand up to further scrutiny – their methods are often heavily critiqued by the academic community. See, for example, Lindzen and Choi, 2009 and its response.
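
As a quick back-of-the-envelope on that sampling point, here is the chance of catching zero dissenting papers in a random sample of one thousand, sketched in Python for a few illustrative dissent fractions (assumed values, purely for the sake of the example):

# Chance of zero dissenting papers in a random sample of 1000,
# assuming each paper independently dissents with probability p.
# The values of p are illustrative assumptions, not measurements.
for p in (0.0001, 0.0005, 0.001):
    prob_none = (1 - p) ** 1000
    print(f"dissent fraction {p:.2%}: chance of an all-consensus sample = {prob_none:.0%}")

Even a dissent rate of one paper in a thousand leaves better than a one-in-three chance of a perfectly clean sample.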

It’s clear that individual polls have limitations. They are restricted to a sample of scientists or papers, rather than the entire community. They don’t take into account which claims stood the test of time, and which were refuted. Luckily, the climate science community has another way to summarize the balance of evidence on global warming: the Intergovernmental Panel on Climate Change (IPCC). Since 1988, four assessment reports have been written by thousands of volunteer scientists worldwide. They examine the entire body of academic literature on climate change and create a summary, which is then painstakingly reviewed and scrutinized by others.

The latest report, published in 2007, is already quite out of date – due to the long review process, most of the data is from 2002 and earlier. However, it is still used by governments worldwide, so let’s look at some of its key findings:

  • “Warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice and rising global average sea level.”
  • “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations.”
  • “Anthropogenic warming over the last three decades has likely had a discernible influence at the global scale on observed changes in many physical and biological systems.”
  • “Altered frequencies and intensities of extreme weather, together with sea level rise, are expected to have mostly adverse effects on natural and human systems.”
  • “Anthropogenic warming could lead to some impacts that are abrupt or irreversible, depending upon the rate and magnitude of the climate change.”

The final place to look for scientific consensus is statements from scientific organizations, such as the National Academy of Sciences. Not a single scientific organization worldwide disputes the consensus view, and many have published statements explicitly supporting it. A full list is available here, but here are some samples:

Climate change and sustainable energy supply are crucial challenges for the future of humanity. It is essential that world leaders agree on the emission reductions needed to combat negative consequences of anthropogenic climate change[.]
Thirteen national academies of science

It is certain that increased greenhouse gas emissions from the burning of fossil fuels and from land use change lead to a warming of climate, and it is very likely that these greenhouse gases are the dominant cause of the global warming that has been taking place over the last 50 years.
Royal Society (UK)

The scientific evidence is clear: global climate change caused by human activities is occurring now, and it is a growing threat to society.
American Association for the Advancement of Science

[C]omprehensive scientific assessments of our current and potential future climates clearly indicate that climate change is real, largely attributable to emissions from human activities, and potentially a very serious problem.
American Chemical Society

Emissions of greenhouse gases from human activities are changing the atmosphere in ways that affect the Earth’s climate…The evidence is incontrovertible: Global warming is occurring. If no mitigating actions are taken, significant disruptions in the Earth’s physical and ecological systems, social systems, security and human health are likely to occur. We must reduce emissions of greenhouse gases beginning now.
American Physical Society

The Earth’s climate is now clearly out of balance and is warming. Many components of the climate system…are now changing at rates and in patterns that are not natural and are best explained by the increased atmospheric abundances of greenhouse gases and aerosols generated by human activity during the 20th century.
American Geophysical Union

It’s clear that a scientific consensus on climate change does exist. Since unanimity is virtually impossible in science, agreement over climate change can’t get much stronger than it is already.

Could all of these scientists, papers, reports, and organizations be wrong? Of course – nobody is infallible. Could that 3% of dissenting scientists triumph like Galileo? It’s possible.

But how much are you willing to risk on that chance?

Modularity

I’ve now taken a look at the code and structure of four different climate models: Model E, CESM, UVic ESCM, and the Met Office Unified Model (which contains all the Hadley models). I’m noticing all sorts of similarities and differences, many of which I didn’t expect.

For example, I didn’t anticipate any overlap in climate model components. I thought that every modelling group would build their own ocean, their own atmosphere, and so on, from scratch. In fact, what I think of as a “model” – a self-contained, independent piece of software – applies to components more accurately than it does to an Earth system model. The latter is more accurately described as a collection of models, each representing one piece of the climate system. Each modelling group has a different collection of models, but not every one of these models is unique to their lab.

Ocean models are a particularly good example. The Modular Ocean Model (MOM) is built by GFDL, but it’s also used in NASA’s Model E and the UVic Earth System Climate Model. Another popular ocean model is the Nucleus for European Modelling of the Ocean (NEMO, what a great acronym) which is used by the newer Hadley climate models, as well as the IPSL model from France (which is sitting on my desktop as my next project!)

Aside: Speaking of clever acronyms, I don’t know what the folks at NCAR were thinking when they created the Single Column Atmosphere Model. Really, how did they not see their mistake? And why haven’t Marc Morano et al latched onto this acronym and spread it all over the web by now?

In most cases, an Earth system model has a unique architecture to fit all the component models together – a different coupling process. However, with the rise of standard interfaces like the Earth System Modeling Framework, even couplers can be reused between modelling groups. For example, the Hadley Centre and IPSL both use the OASIS coupler.
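
To make the idea concrete, here is a toy sketch of component modularity in Python (real models are written in Fortran). All the names are invented for the example – this is not the actual ESMF or OASIS interface:

# A toy illustration of swappable model components behind a common
# interface -- invented names, not the real ESMF or OASIS APIs.

class Component:
    """Anything with a step() that reads and updates named fields."""
    def step(self, fields):
        raise NotImplementedError

class ToyOcean(Component):        # stands in for something like MOM or NEMO
    def step(self, fields):
        # drift sea surface temperature toward the overlying air (toy physics)
        fields["sst"] += 0.01 * (fields["air_temp"] - fields["sst"])
        return fields

class ToyAtmosphere(Component):
    def step(self, fields):
        # relax air temperature toward the sea surface temperature
        fields["air_temp"] += 0.1 * (fields["sst"] - fields["air_temp"])
        return fields

def couple(components, fields, n_steps):
    """The 'coupler': hand the shared fields to each component in turn."""
    for _ in range(n_steps):
        for c in components:
            fields = c.step(fields)
    return fields

print(couple([ToyAtmosphere(), ToyOcean()], {"sst": 10.0, "air_temp": 15.0}, 5))

Because each component only sees the shared fields, swapping in a different ocean that honours the same interface leaves the rest of the model untouched – which is exactly why MOM and NEMO can show up in so many different labs’ models.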

There are benefits and drawbacks to the rising overlap and “modularity” of Earth system models. One could argue that it makes the models less independent. If they all agree closely, how much of that agreement is due to their physical grounding in reality, and how much is due to the fact that they all use a lot of the same code? However, modularity is clearly a more efficient process for model development. It allows larger communities of scientists from each sub-discipline of Earth system modelling to form, and – in the case of MOM and NEMO – make two or three really good ocean models, instead of a dozen mediocre ones. Concentrating our effort, and reducing unnecessary duplication of code, makes modularity an attractive strategy, if an imperfect one.

The least modular of all the Earth system models I’ve looked at is Model E. The documentation mentions different components for the atmosphere, sea ice, and so on, but these components aren’t separated into subdirectories, and the lines between them are blurry. Nearly all the Fortran files sit in the same directory, “model”, and some of them deal with two or more components. For example, how would you categorize a file that calculates surface-atmosphere fluxes? Even where Model E uses code from other institutions, such as the MOM ocean model, it’s usually adapted and integrated into their own files, rather than kept in a separate directory.

The most modular Earth system model is probably the Met Office Unified Model. They don’t appear to have adapted NEMO, CICE (the sea ice model developed at Los Alamos), or OASIS at all – in fact, they’re not present in the code repository they gave us. I was a bit confused when I discovered that their “ocean” directory, left over from the years when they wrote their own ocean code, was now completely empty! Encapsulation to the point where a component model can be stored completely externally to the structural code was unexpected.

An interesting example of the challenges of modularity appears in sea ice. Do you create a separate, independent sea ice component, as CESM did? Do you consider it part of the ocean, like NEMO? Or do you lump lake ice in with sea ice and let the component float between the surface and the ocean, like Model E?

The real world isn’t modular. There are no clear boundaries between components on the physical Earth. But then, there’s only one physical Earth, whereas there are many virtual Earths in the form of climate models, and limited resources for developing the code in each component. In this spectrum of interconnection and encapsulation, is one end or the other our best bet? Or is there a healthy balance somewhere in the middle?

Why Trust Science?

Part 1 of a series of 5 for NextGen Journal.

What’s wrong with these statements?

  • I believe in global warming.
  • I don’t believe in global warming.
  • We should hear all sides of the climate change debate and decide for ourselves.

Don’t see it? How about these?

  • I believe in photosynthesis.
  • I don’t believe in Newton’s Laws of Motion.
  • We should hear all sides of the quantum mechanics debate and decide for ourselves.

Climate change is a scientific phenomenon, rooted in physics and chemistry. All I did was substitute in other scientific phenomena, and the statements suddenly sounded wacky and irrational.

Perhaps we have become desensitized by people conflating opinion with fact when it comes to climate change. However, the positions of politicians or media outlets do not make the climate system any less of a physical process. Unlike, say, ideology, there is a physical truth out there.

If there is a physical truth, there are also wrong answers and false explanations. In scientific issues, not every “belief” is equally valid.

Of course, the physical truth is elusive, and facts are not always clear-cut. Data requires interpretation and a lot of math. Uncertainty is omnipresent and must be quantified. These processes require training, as nobody is born with all the skills required to be a good scientist. Again, the complex nature of the physical world means that some voices are more important than others.

Does that mean we should blindly accept whatever a scientist says, just because they have a Ph.D.? Of course not. People aren’t perfect, and scientists are no exception.

However, the institution of science has a pretty good system to weed out incorrect or unsupported theories. It involves peer review, and critical thinking, and falsifiability. We can’t completely prove anything right – not one hundred percent – so scientists try really hard to prove a given theory wrong. If they can’t, their confidence in its accuracy goes up. Peter Watts describes this process in more colourful terms: “You put your model out there in the coliseum, and a bunch of guys in white coats kick the s**t out of it. If it’s still alive when the dust clears, your brainchild receives conditional acceptance. It does not get rejected. This time.”

Peer review is an imperfect process, but it’s far better than nothing. Combined with the technical skill and experience of scientists, it makes the words of the scientific community far more trustworthy than the words of a politician or a journalist. That doesn’t mean that science is always right. But, if you had to put your money on it, who would you bet on?

The issue is further complicated by the fact that scientists are rarely unanimous. Often, the issue in question is truly a mystery, and the disagreement is widespread. What causes El Niño conditions in the Pacific Ocean? Science can’t give us a clear answer yet.

However, sometimes disagreement is restricted to the extreme minority. This is called a consensus. It doesn’t imply unanimity, and it doesn’t mean that the issue is closed, but general confidence in a theory is so high that science accepts it and moves on. Even today, a few researchers will tell you that HIV doesn’t cause AIDS, or that secondhand smoke isn’t harmful to your health. But that doesn’t stop medical scientists from studying the finer details of such diseases, or governments from funding programs to help people quit smoking. Science isn’t a majority-rules democracy, but if virtually all scientists have the same position on an issue, they probably have some pretty good reasons.

If science is never certain, and almost never unanimous, what are we supposed to do? How do we choose who to trust? Trusting nobody but yourself would be a poor choice. Chances are, others are more qualified than you, and you don’t hold the entirety of human knowledge in your head. For policy-relevant science, ignoring the issue completely until one side is proven right could also be disastrous. Inaction itself is a policy choice, which we see in some governments’ responses to climate change.

Let’s bring the whole issue down to a more personal level. Imagine you were ill, and twenty well-respected doctors independently examined you and said that surgery was required to save your life. One doctor, however, said that your illness was all in your mind, that you were healthy as a horse. Should you wait in bed until the doctors all agreed? Should you go home to avoid surgery that might be unnecessary? Or should you pay attention to the relative size and credibility of each group, as well as the risks involved, and choose the course of action that would most likely save your life?

Working Away

The shape of my summer research is slowly becoming clearer. Basically, I’ll be writing a document comparing the architecture of different climate models. This, of course, involves getting access to the source code. Building on Steve’s list, here are my experiences:

NCAR, Community Earth System Model (CESM): Password-protected, but you can get access within an hour. After a quick registration, you’ll receive an automated email with a username and password. This login information gives you access to their Subversion repository. Registration links and further information are available here, under “Acquiring the CESM1.0 Release Code”.

University of Victoria, Earth System Climate Model (ESCM): Links to the source code can be found on this page, but they’re password-protected. You can request an account by sending an email – follow the link for more information.

Geophysical Fluid Dynamics Laboratory (GFDL), CM 2.1: Slightly more complicated. Create an account for their Gforge repository, which is an automated process. Then, request access to the MOM4P1 project – apparently CM 2.1 is included within that. The server supposedly grants project requests automatically, but the only emails I’ve received from it concern some kind of GFDL mailing list, and don’t mention the project request. I will wait and see.
Update (July 20): It looks like I got access to the project right after I requested it – I just never received an email!

Max Planck Institute (MPI), COSMOS: Code access involves signing a licence agreement, faxing it to Germany, and waiting for it to be approved and signed by MPI. The agreement is not very restrictive, though – it deals mainly with version control, documenting changes to the code, etc.

UK Met Office, Hadley Centre Coupled Model version 3 (HadCM3): Our lab already has a copy of the code for HadCM3, so I’m not really sure what the process is to get access, but apparently it involved a lot of government paperwork.

Institut Pierre Simon Laplace (IPSL), CM5: This one tripped me up for a while, largely because the user guide is difficult to find, and written in French. Google Translate helped me out there, but it also attempted to “translate” their command line samples! Make sure that you have ksh installed, too – it’s quick to fix, but I didn’t realize it right away. Some of the components for IPSLCM5 are open access, but others are password-protected. Follow the user guide’s instructions for who to email to request access.

Model E: This was the easiest of all. From the GISS website, you can access all the source code without any registration. They offer a frozen AR4 version, as well as nightly snapshots of the work-in-progress for AR5 (a frozen AR5 version is coming soon). There is also a wealth of documentation on the site, such as an installation guide and a description of the model.

I’ve taken a look at the structural code for Model E, which is mostly contained in the file MODELE.f. The code is very clear and well commented, and the online documentation helped me out too. After drawing a lot of complicated diagrams with arrows and lists, I feel that I have a decent understanding of the Model E architecture.

Reading code can become monotonous, though, and every now and then I feel like a little computer trouble to keep things interesting. For that reason, I’m continuing to chip away at building and running two models, Model E and CESM. See my previous post for how this process started.

<TECHNICAL COMPUTER STUFF> (Feel free to skip ahead…)

I was still having trouble viewing the Model E output (only one file worked in Panoply; the rest produced an empty map), so I emailed some of the lab’s contacts at NASA. They suggested I install CDAT, a process which nearly broke Ubuntu (haven’t we all been there?). Basically, because it’s an older program, it thought the newest version of Python was 2.5 – which it subsequently installed and set as the default in /usr/bin. Since I had Python 2.6 installed, and the two versions are apparently far from backwards-compatible, every program that depended on Python (i.e. almost everything on Ubuntu) stopped working. Our IT contact managed to set 2.6 back as the default, but I’m not about to try my hand at CDAT again…

I have moved forward very slightly on CESM. I’ve managed to build the model, but upon calling test.<machine name>.run, I get a rather odd error:

./Tools/ccsm_getenv: line 9: syntax error near unexpected token '('
./Tools/ccsm_getenv: line 9: 'foreach i (env_case.xml env_run.xml env_conf.xml env_build.xml env_mach_pes.xml)'

Now, I’m pretty new at shell scripting, but I can’t see the syntax error there. Then again, shell scripts are interpreted rather than compiled, so syntax errors only surface at run-time, when the interpreter reaches the offending line – and “foreach” is csh syntax, not bash, which makes me suspect the script is being run by the wrong shell.

A post by Michael Tobis, who had a similar error, suggested that the issue had to do with qsub. Unfortunately, that meant I had to actually use qsub – I had previously given up trying to configure Torque to run on a single machine rather than many. I gave the installation another go, and now I can get scripts into the queue, but they never start running – their status stays as “Q” even if I leave the computer alone for an hour. Since the machine has a dual-core processor, I can’t see why it couldn’t run both a server and a node at once, but it doesn’t seem to be working for me.

</TECHNICAL COMPUTER STUFF>

Before I started this job, climate models seemed analogous to Antarctica – a distant, mysterious, complex system that I wanted to visit, but didn’t know how to get to. In fact, they’re far more accessible than Antarctica. More on the scale of a complicated bus trip across town, perhaps?

They are not perfect pieces of software, and they’re not very user-friendly. However, all the struggles of installation pay off when you finally get some output, and open it up, and see realistic data representing the very same planet you’re sitting on! Even just reading the code for different models shows you many different ways to look at the same system – for example, is sea ice a realm of its own, or is it a subset of the ocean? In the real world the lines are blurry, but computation requires us to make clear divisions.

The code can be unintelligible (lndmaxjovrdmdni) or familiar (“The Stefan-Boltzmann constant! Finally I recognize something!”) or even entertaining (a seemingly random identification string, dozens of characters long, followed by the comment “if you edit this you will get what you deserve”). When you get tied up in the code, though, it’s easy to miss the bigger picture: the incredible fact that we can use the sterile, binary practice of computation to represent a system as messy and mysterious as the whole planet. Isn’t that something worth sitting and marveling over?

Quality, Transparency, and Rigour

The Intergovernmental Panel on Climate Change (IPCC) reports are likely the most cited documents on the subject of global warming. The organization, established by the United Nations, doesn’t do any original research – it simply summarizes the massive amount of scientific literature on the topic. Their reports, written and reviewed by volunteer scientists, and published approximately every six years, are a “one-stop shop” for credible information about climate change. When you have a question about climate science, it’s far easier to find the relevant section of the IPCC than it is to wade through thousands of results on Google Scholar.

The main problem with the IPCC, in my opinion, is that their reports are out of date as soon as they’re published, and then everyone has to wait another six years or so for the next version, which is subsequently out of date, and so on. Additionally, because there are so many authors, reviewers, and stakeholders involved in the IPCC, the reports come to reflect the lowest-common-denominator scientific understanding, rather than the median opinion of experts. In particular, government officials oversee the writing and reviewing of the Summary for Policymakers, to make sure that it’s relevant and clear. However, some governments are beginning to abuse their power in this process. The late Stephen Schneider, in his 2009 book Science as a Contact Sport, recounts his experiences with government representatives who absolutely refuse to allow certain conclusions to be published in the IPCC, regardless of their scientific backing.

The result is that the IPCC reports frequently underestimate the severity of climate change. For example, in the most recent report, the worst-case estimate of sea level rise by the end of this century was 0.59 m. Since then, scientists have revised this estimate to 1.9 m, but it won’t show up in the report until the next edition comes out around 2014.

Another example concerns Arctic sea ice: the worst-case scenario from the IPCC was an ice-free Arctic in the summer beginning around 2100. These estimates have come down so rapidly that there’s an outside chance the summer sea ice could be gone before the next IPCC report has a chance to correct it (presentation by Dr. David Barber, media coverage available here). It will more likely disappear around 2035, but that’s still a drastic change from what the IPCC said.

Despite this conservative stance, there are still some who think the IPCC is alarmist (this is usually paired with something about a New World Order and/or socialists using a carbon tax to take over the world). Naturally, the IPCC has become a favourite target of climate change deniers, who wish to obscure the reality of human-caused global warming. Last year, they claimed to have found all kinds of errors in the latest report, somehow proving that global warming wasn’t happening. In fact, most of these so-called “errors” were nothing of the sort, and the worse of the two real mistakes in the report involved a typo regarding the year by which certain glaciers were expected to disappear. Not bad, for a three-thousand-page document, but it created quite the media firestorm. Apparently scientists are expected to have 100% accuracy at all times, or else they are frauds.

Just a few weeks ago, the IPCC made some changes to their policies in response to these events. Their press release about the new policies featured the phrase “Boost Quality, Transparency and Rigour” in the title.

No, no, no. That’s not what the IPCC needs. These are very admirable goals, but they’re doing just fine as it is. Actions to “further minimize any possibility of errors in future reports” should not be their top priority. Further extending the review process will only further delay the publication of each report (making them even more out of date) and further enhance their lowest-common-denominator position. When you have an error rate on the order of 0.67 errors/1000 pages, should you spend your energy getting that all the way down to zero (a virtually impossible task) or on the real issues that need to be addressed?

I think the IPCC should adopt a continually-updating online version of their report. This would solve their chronic problem of being out of date, as well as help the organization adapt to the increasing role of the Internet in our world. Any future errors the deniers liked to yell about would be fixed immediately. Governments would be forming policies based on the best available evidence from today, not a decade ago. Everything would still be in one place, and version control would allow transparency to remain high.

The IPCC should also make it more clear when their estimates are too conservative. When a single sentence that didn’t even make it into the summary is shown to overestimate the problem, the climate science community ties itself up in knots trying to correct its tattered image. But prominent conclusions that underestimate the problem go unacknowledged for decades. If it were the other way around, can you imagine the field day deniers would have?

Luckily, the changes made to IPCC policy are not all aimed at appeasing the bullies. A long-overdue communications plan is in development: a rapid response team and Senior Communications Manager will develop formal strategies for public education and outreach. Hopefully, this will counteract the false claims and defamation the IPCC has been subject to since its creation.

Another new plan is to create an Executive Committee, composed of the Chair, Vice Chairs, Working Group Co-Chairs, and advisory members. This will “strengthen coordination and management of the IPCC” and allow for actions to be taken between reports, such as communication and responding to possible errors. A more structured administration will probably be helpful, given that the only people in the organization currently getting paid for their work are the office staff (even the Chair doesn’t make a cent). Coordinating overworked scientists who volunteer for a scientific undertaking that demands 100% accuracy can’t be an easy task.

Will the IPCC continue to be the best available source of credible information on climate change? Will its structure of endless review remain feasible in a world dominated by instant news? Should we continue to grant our governments control over the contents of scientific reports concerning an issue that they desperately want to avoid? Should we continue to play to the wants and needs of bullies? Or should we let scientists speak for themselves?

Tornadoes and Climate Change

Cross-posted from NextGen Journal

It has been a bad season for tornadoes in the United States. In fact, this April shattered the previous record for the most tornadoes ever. Even though the count isn’t finalized yet, nobody doubts that it will come out on top.

In a warming world, many questions are common, and quite reasonable. Is this a sign of climate change? Will we experience more, or stronger, tornadoes as the planet warms further?

In fact, these are very difficult questions to answer. First of all, attributing a specific weather event, or even a series of weather events, to a change in the climate is extremely difficult. Scientists can do statistical analysis to estimate the probability of the event with and without the extra energy available in a warming world, but this kind of study takes years. Even so, nobody can say for certain whether an event wasn’t just a fluke. The recent tornadoes very well might have been caused by climate change, but they also might have happened anyway.

Will tornadoes become more common in the future, as global warming progresses? Tornado formation is complicated, and forecasting tornadoes requires an awful lot of calculations. Many processes in the climate system are this way, so scientists simulate them using computer models, which can do detailed calculations at an increasingly impressive speed.

However, individual tornadoes are relatively small compared to other kinds of storms, such as hurricanes or regular rainstorms. A tornado is, in fact, smaller than a single grid cell in the highest-resolution climate models around today: a tornado might be a few hundred metres across, while even a high-resolution global model has grid cells tens of kilometres wide. Therefore, it’s just not possible to project individual tornadoes directly using mathematical models.

However, we can project the conditions necessary for tornadoes to form. Such conditions don’t always lead to a tornado, but they make one more likely. Two main factors are involved: high wind shear and high convective available potential energy (CAPE). Climate change is making the atmosphere warmer, and increasing specific humidity (but not relative humidity): both of these contribute to CAPE, so that factor will increase the likelihood of conditions favourable to tornadoes. However, climate change warms the poles faster than the equator, which will decrease the temperature difference between them and thereby lower wind shear. That will make tornadoes less likely (Diffenbaugh et al, 2008). Which factor will win out? Is there another factor involved that climate change could impact? Will we get more tornadoes in some areas and fewer in others? Will we get weaker tornadoes or stronger tornadoes? It’s very difficult to tell.
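
For a feel for what CAPE actually measures, here is a minimal sketch in Python: it integrates the buoyancy of a rising air parcel over the layers where the parcel is warmer than its environment. The temperature profiles are invented, purely for illustration:

# Minimal CAPE estimate: integrate parcel buoyancy with height.
# CAPE = g * sum( (T_parcel - T_env) / T_env * dz ) over buoyant layers.
# Both temperature profiles below are invented, purely for illustration.

g = 9.81                                               # m/s^2
dz = 500.0                                             # layer thickness, m
t_env    = [300.0, 296.0, 292.0, 288.0, 284.0, 280.0]  # environment temperature, K
t_parcel = [300.0, 297.0, 294.0, 290.0, 286.0, 281.0]  # rising parcel temperature, K

cape = sum(g * (tp - te) / te * dz
           for tp, te in zip(t_parcel, t_env)
           if tp > te)                                 # only buoyant layers count
print(f"CAPE ~ {cape:.0f} J/kg")

Warmer, moister surface air keeps the parcel warmer relative to its surroundings for longer, which is how climate change pumps up CAPE.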

In 2007, NASA scientists used a climate model to project changes in severe storms, including tornadoes. (Remember, even though an individual tornado can’t be represented in a model, the conditions likely to cause one can.) They predicted that the future will bring fewer storms overall, but that the ones that do form will be stronger. A plausible answer to the question, although not a very comforting one.

With uncertain knowledge, how should we approach this issue? Should we focus on the comforting possibility that the devastation in the United States might have nothing to do with our species’ actions? Or should we acknowledge that we might bear responsibility? Dr. Kevin Trenberth, a top climate scientist at the National Center for Atmospheric Research (NCAR), thinks that ignoring this possibility until it’s proven is a bad idea. “It’s irresponsible not to mention climate change,” he writes.

Beautiful Things

This is what the last few days have taught me: even if the code for climate models can seem dense and confusing, the output is absolutely amazing.

Late yesterday I discovered a page of plots and animations from the Canadian Centre for Climate Modelling and Analysis. The most recent coupled global model represented on that page is CGCM3, so I looked at those animations. I noticed something very interesting: the North Atlantic, independent of the emissions scenario, was projected to cool slightly, while the world around it warmed up. Here is an example, from the A1B scenario. (Don’t worry if the animation is already at the end – it will loop.)

It turns out that this slight cooling is due to the North Atlantic circulation slowing down, as is very likely to happen from large additions of freshwater that change the salinity and density of the ocean (IPCC AR4 WG1, FAQ 10.2). This freshwater could come from either increased precipitation due to climate change, or meltwater from the Arctic ending up in the North Atlantic. Of course, we hear about this all the time – the unlikely prospect of the Gulf Stream completely shutting down and Europe going into an ice age, as displayed in The Day After Tomorrow – but, until now, I hadn’t realized that even a slight slowing of the circulation could cool the North Atlantic, while Europe remained unaffected.
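
To see why freshwater has this effect, consider the simplest linearized equation of state for seawater: density decreases with temperature and increases with salinity. Here is a rough sketch in Python – the coefficients are typical textbook values, not taken from any particular model:

# Linearized equation of state for seawater:
#   rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
# Coefficients are typical textbook values, not from any particular model.

rho0, T0, S0 = 1027.0, 10.0, 35.0   # reference density (kg/m^3), temperature (C), salinity (psu)
alpha = 1.7e-4                      # thermal expansion coefficient, 1/K
beta  = 7.6e-4                      # haline contraction coefficient, 1/psu

def density(temp, salinity):
    return rho0 * (1 - alpha * (temp - T0) + beta * (salinity - S0))

# Freshening the surface by 1 psu makes the water measurably lighter,
# so it sinks less readily -- and the overturning circulation slows.
print(density(10.0, 35.0))   # reference water: 1027.0 kg/m^3
print(density(10.0, 34.0))   # fresher water:  ~1026.2 kg/m^3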

Then, in chapter 8 of the IPCC report, I read something that surprised me: climate models generate their own El Niños and La Niñas. Scientists don’t quite understand what triggers the circulation patterns leading to these phenomena, so how can they be in the models? It turns out that the modellers don’t have to parameterize the ENSO cycles at all: they have done such a good job of reproducing global circulation from first principles that ENSO arises by itself, even though we don’t know why. How cool is that? (Thanks to Jim Prall and Things Break for their help with this puzzle.)

Jim Prall also pointed me to an HD animation of output from the UK-Japan Climate Collaboration. I can’t seem to embed the QuickTime movie (WordPress strips out some of the necessary HTML tags) so you will have to click on the link to watch it. It’s pretty long – almost 17 minutes – as it represents an entire year of the world’s climate system, in one-hour time steps. It shows 1978-79, starting from observational data, but from there it simulates its own circulation.

I am struck by the beauty of this output – the swirling cyclonic precipitation, the steady prevailing westerlies and trade winds, the subtropical high pressure belt clear from the relative absence of cloud cover in these regions. You can see storms sprinkling across the Amazon Basin, monsoons pounding South Asia, and sea ice at both poles advancing and retreating with the seasons. Scientists didn’t explicitly tell their models to do any of this. It all appeared from first principles.

Take 17 minutes out of your day to watch it – it’s an amazing stress reliever, sort of like meditation. Or maybe that’s just me…

One more quick observation: most of you are probably familiar with the naming conventions of IPCC reports. The First Assessment Report was FAR, the second was SAR, and the third was TAR – at which point the acronyms would have started to repeat themselves, so the Fourth Assessment Report became AR4. They’ll have to follow this alternate convention until the Eighth Assessment Report, which could go back to being EAR. Maybe they’ll stick with AR8, but that would be substantially less entertaining.

Learning Experiences

I apologize for my brief hiatus – it’s been almost two weeks since I’ve posted. I have been very busy recently, but for a very exciting reason: I got a job as a summer student of Dr. Steve Easterbrook! You can read more about Steve and his research on his faculty page and blog.

This job required me to move cities for the summer, so my mind has been consumed with thoughts such as “Where am I and how do I get home from this grocery store?” rather than “What am I going to write a post about this week?” However, I have had a few days on the job now, and as Steve encourages all of his students to blog about their research, I will use this outlet to periodically organize my thoughts.

I will be doing some sort of research project about climate modelling this summer – we’re not yet sure exactly what, so I am starting by taking a look at the code for some GCMs. The NCAR Community Earth System Model is one of the easiest to access, as it is largely an open source project. I’ve only read through a small piece of their atmosphere component, but I’ve already seen more physics calculations in one place than ever before.

I quickly learned that trying to understand every line of the code is a silly goal, as much as I may want to. Instead, I’m trying to get a broader picture of what the programs do. It’s really neat to have my knowledge about different subjects converge so completely. Multi-dimensional arrays, which I have previously only used to program games of Sudoku and tic-tac-toe, are now being used to represent the entire globe. Electric potential, a property I last studied in the circuitry unit of high school physics, somehow impacts atmospheric chemistry. The polar regions, which I was previously fascinated with mainly for their wildlife, also present interesting mathematical boundary cases for a climate model.
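
A global temperature field is exactly this kind of array – one index per physical dimension. Here is a toy sketch in Python with NumPy (the grid sizes are arbitrary, and real model fields carry far more structure):

# A toy global temperature field as a multi-dimensional array.
# Grid sizes are arbitrary; real models carry many such fields.
import numpy as np

n_lev, n_lat, n_lon = 20, 46, 72       # 20 vertical levels on a 4 x 5 degree grid
temperature = 288.0 + np.random.randn(n_lev, n_lat, n_lon)   # kelvin, around 15 C

# A global mean has to weight each cell by its area, which shrinks
# toward the poles in proportion to cos(latitude).
lats = np.linspace(-90, 90, n_lat)
weights = np.cos(np.deg2rad(lats))
surface = temperature[-1]              # the lowest model level
global_mean = np.average(surface, axis=0, weights=weights).mean()
print(f"global mean surface temperature ~ {global_mean:.1f} K")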

It’s also interesting to see how the collaborative nature of CESM, written by many different authors and designed for many different purposes, impacts its code. Some of the modules have nearly a thousand lines of code, and some have only a few dozen – it all depends on the programming style of the various authors. The commenting ranges from extensive to nonexistent. Every now and then one of the files will be written in an older version of Fortran, where EVERYTHING IS IN UPPER CASE.

I am bewildered by most of the variable names. They seem to be collections of abbreviations I’m not familiar with. Some examples are “mxsedfac”, “lndmaxjovrdmdni”, “fxdd”, and “vsc_knm_atm”.

When we get a Linux machine set up (I have heard too many horror stories to attempt a dual-boot with Windows) I am hoping to get a basic CESM simulation running, as well as EdGCM (this could theoretically run on my laptop, but I prefer to bring that home with me each evening, and the simulation will probably take over a day).

I am also doing some background reading on the topic of climate modelling, including this book, which led me to the story of PHONIAC. The first weather prediction done on a computer (the ENIAC machine) was recreated as a smartphone application, and ran approximately 3 million times faster. Unfortunately, I can’t find anyone with a smartphone that supports Java (argh, Apple!), so I haven’t been able to try it out.

I hope everyone is having a good summer so far. A more traditional article about tornadoes will be coming at the end of the week.

Thoughts

My presentation went very well. The church group was full of kind, educated, and passionate people. It was nice to have an audience that wasn’t full of high school students who thought science was boring!

After the presentation, a woman in the group shared something with me that she found at a conference in Australia just before the Copenhagen summit. I liked it so much that I thought I’d share it here, with her permission.

If the earth
were only a few feet in
diameter, floating a few feet above
a field somewhere, people would come
from everywhere to marvel at it. People would
walk around it marvelling at its big pools of water,
its little pools and the water flowing between the pools.
People would marvel at the bumps on it, and the holes in it,
and they would marvel at the very thin layer of gas surrounding
it and the water suspended in the gas. The people would
marvel at all the creatures walking around the surface of the ball
and at the creatures in the water. The people would declare it
as sacred because it was the only one and they would protect
it so that it would not be hurt. The ball would be the
greatest wonder known and people around would come to
pray to it, to be healed, to gain knowledge, to know
beauty and to wonder how it could be. People
would love it and defend it with their lives
because they would somehow know that
their lives, their own roundness, could
be nothing without it. If the
Earth were only a few
feet in diameter.

-Joe Miller