Technology as Communication

The relationship between technology and climate change is complex and multi-faceted. It was technology, in the form of fossil fuel combustion, that got us into this problem. Many uninformed politicians hold out hope that technology will miraculously save us in the future, so that we can continue burning fossil fuels at our current rate. If we cling to that attitude, though, risky geoengineering technologies may eventually be required to keep the warming at a tolerable level.

At the same time, we should never throw our hands in the air and give up, because we can always prevent the warming from getting worse. 2 °C of warming would be bad, but 3 or 4 °C would be much worse, and 5 or 6 °C would be devastating. We already possess many low-carbon, or even zero-carbon, forms of energy that could begin to replace the fossil fuel economy. The only thing missing is political will, and the only reason it’s missing, in my opinion, is that not enough people understand the magnitude and urgency of the problem.

Here is where technology comes in again – for purposes of communication. We live in an age of information and global interconnection, so ideas can travel at an unprecedented rate. It’s one thing for scientists to write an article about climate change and distribute it online, but there are many other, more engaging, forms of communication that harness today’s software and graphic technologies. Let’s look at a few recent examples.

Data clearly shows that the world is warming, but spreadsheets of temperature measurements are a little dry for public consumption. Graphs are better, but still cater to people with very specific kinds of intelligence. Since not everyone likes math, the climate team at NASA compressed all of their data into a 26-second video that shows changes in surface temperature anomalies (deviations from the average) from 1880 to 2010. The sudden warming over the past few decades even catches me by surprise.

Take a look – red is warm and blue is cool:

A more interactive visual expression of data comes from Penn State University. In this Flash application, you can play around with the amount of warming, latitude range, and type of crop, and see how yields change both with and without adaptation (changing farming practices to suit the warmer climate). Try it out here. A similar approach, where the user has control over the data selection, has been adopted by NOAA’s Climate Services website. Scroll down to “Climate Dashboard”, and you can compare temperature, carbon dioxide levels, energy from the sun, sea level, and Arctic sea ice on any timescale from 1880 to the present.

Even static images can be effective expressions of data. Take a look at this infographic, which examines the social dimensions of climate change. It does a great job of showing the problem we face: public understanding depends on media coverage, which doesn’t accurately reflect the scientific consensus. Click for a larger version:

Global Warming - the debate

Finally, a new computer game called Fate of the World allows you to try your hand at solving climate change. It adopts the same data and projections used by scientists to demonstrate to users what we can expect in the coming century, and how that changes based on our actions. Changing our lightbulbs and riding our bikes isn’t going to be enough, and, as PC Gamer discovered, even pulling out all the stops – nuclear power, a smart grid, cap-and-trade – doesn’t get us home free. You can buy the game for about $10 here (PC only, a Mac version is coming in April). I haven’t tried this game, but it looks pretty interesting – sort of like Civilization. Here is the trailer:

Take a look at these non-traditional forms of communication. Pass them along, and make your own if you’re so inclined. We need all the help we can get.

What’s the Warmest Year – and Does it Matter?

Cross-posted from NextGenJournal

Climate change is a worrying phenomenon, but watching it unfold can be fascinating. The beginning of a new year brings completed analysis of what last year’s conditions were like. Perhaps the most eagerly awaited annual statistic is global temperature.

This year was no different – partway through 2010, scientists could tell that it had a good chance of being the warmest year on record. It turned out to be more or less tied for first, as top temperature analysis centres recently announced:

Why the small discrepancy in the order of 1998, 2005, and 2010? The answer lies mainly in the Arctic. Weather stations in the Arctic region are few and far between, as it’s difficult to maintain a permanent station on ice floes that move around and are melting away. Scientists, then, have two choices in their analyses: extrapolate Arctic temperature anomalies from the stations they do have, or just leave the missing areas out, which implicitly assumes that they’re warming at the global average rate. The first choice might lead to results that are off in either direction…but the second choice almost certainly underestimates warming, as it’s clear that climate change is affecting the Arctic much more and much faster than the global average. Currently, NASA is the only centre to extrapolate across the Arctic data gaps. A more detailed explanation is available here.
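To see why leaving the Arctic out biases the result low, here is a toy calculation (invented numbers, not real station data): omitting a region from an area-weighted average is mathematically the same as assuming that region matches the average of the regions you kept.

```python
# Toy sketch: a global average built from three zones, where the small
# Arctic zone is warming fastest. All numbers are hypothetical.

# zone -> (anomaly in °C, fraction of Earth's surface area)
zones = {"tropics": (0.4, 0.5), "mid-latitudes": (0.6, 0.45), "arctic": (2.0, 0.05)}

def global_mean(zone_data):
    """Area-weighted mean anomaly over the zones provided."""
    total_area = sum(area for _, area in zone_data.values())
    return sum(anom * area for anom, area in zone_data.values()) / total_area

with_arctic = global_mean(zones)  # extrapolation-style full coverage
without_arctic = global_mean(
    {k: v for k, v in zones.items() if k != "arctic"}  # leave-it-out approach
)

print(f"with Arctic:    {with_arctic:.3f} °C")
print(f"without Arctic: {without_arctic:.3f} °C")  # lower: warming underestimated
```

Dropping the fast-warming zone pulls the average down, which is the direction of bias described above.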

But how useful is an annual measurement of global temperature? Not very, as it turns out. Short-term climate variability, most prominently the El Niño and La Niña oscillation, impacts annual temperatures significantly. Furthermore, since this oscillation peaks in the winter, the thermal influence of an El Niño or La Niña event can fall entirely into one calendar year, or be split between two. The result is a graph that’s rather spiky:

A far more useful analysis involves plotting a 12-month running mean. Instead of measuring only from January to December, measurements are also compiled from February to January, March to February, and so on. This results in twelve times as many data points, and prevents El Niño and La Niña events from being exaggerated:

This graph is better, but still not that useful. The natural spikiness of the El Niño cycle can, in the short term, obscure the underlying trend. Since the El Niño cycle takes between 3 and 7 years to complete, a 60-month (5-year) running mean allows the resulting ups and downs to cancel each other out. Another short-term influence on temperature is the sunspot cycle, which repeats roughly every 11 years; a 132-month running mean smooths out that influence too. Both 60- and 132-month running means are shown below:

A monthly statistic showing the average global temperature over the past 5 or 11 years may not be as exciting as an annual measurement of the previous year. But that’s the reality of climate change. It doesn’t make every month, or even every year, warmer than the last, and a short-term trend line means virtually nothing. In the climate system, trends are always obscured by noise, and human psychology leads us to pay far more attention to the noise. Nonetheless, the long-term warming trend since around 1975 is unmistakable when one is presented with the data. A gradual, persistent change might not make the greatest headline, but that doesn’t make it any less important.

Storms of my Grandchildren

I hope everyone had a fun and relaxing Christmas. Here’s a book I’ve been meaning to review for a while.

The worst part of the recent book by NASA climatologist James Hansen is, undoubtedly, the subtitle. The truth about the coming climate catastrophe and our last chance to save humanity – really? That doesn’t sound like the measured, subdued style of Dr. Hansen. In my opinion, it simply alienates the very audience we’re trying to reach: moderate, concerned non-scientists.

The inside of the book is much better. While he couldn’t resist slipping in a good deal of hard science (and, in my opinion, these were the best parts), the real focus was on climate policy, and the relationship between science and policy. Hansen struggled with the prospect of becoming involved in policy discussions, but soon realized that he didn’t want his grandchildren, years from now, to look back at his work and say, “Opa understood what was happening, but he did not make it clear.”

Hansen is very good at distinguishing between his scientific work and his opinions on policy, and makes no secret of which he would rather spend time on. “I prefer to just do science,” he writes in the introduction. “It’s more pleasant, especially when you are having some success in your investigations. If I must serve as a witness, I intend to testify and then get back to the laboratory, where I am comfortable. That is what I intend to do when this book is finished.”

Hansen’s policy opinions centre on a fee-and-dividend system: a variant of a carbon tax in which the revenue is divided evenly among citizens and returned to them. His argument for a carbon tax, rather than cap-and-trade, is compelling, and certainly convinced me. He also advocates the expansion of nuclear power (particularly “fourth-generation” fast nuclear reactors), a moratorium on new coal-fired power plants, and drastically improved efficiency measures.

These recommendations are robust, backed up with lots of empirical data to argue why they would be our best bet to minimize climate change and secure a stable future for generations to come. Hansen is always careful to say when he is speaking as a scientist and when he is speaking as a citizen, and provides a fascinating discussion of the connection between these two roles. As Bill Blakemore from ABC television wrote in correspondence with Hansen, “All communication is biased. What makes the difference between a propagandist on one side and a professional journalist or scientist on the other is not that the journalist or scientist ‘set their biases aside’ but that they are open about them and constantly putting them to the test, ready to change them.”

Despite all this, I love it when Hansen puts on his scientist hat. The discussions of climate science in this book, particularly of paleoclimate, were gripping. He explains our current knowledge of the climatic circumstances surrounding the Permian-Triassic extinction and the Paleocene-Eocene Thermal Maximum (usually referred to as the PETM), and why neither event is a suitable analogue for current climate change: the rate at which today’s radiative forcing is being introduced is faster than anything we can see in the paleoclimatic record.

Be prepared for some pretty terrifying facts about our planet’s “methane hydrate gun”, and how it wasn’t even fully loaded when it went off in the PETM. Also discussed is the dependence of climate sensitivity on forcing: the graph of these two variables is roughly a parabola, as climate sensitivity increases both in Snowball Earth conditions and in runaway greenhouse conditions. An extensive discussion of the runaway greenhouse follows – a scenario in which the forcing occurs so quickly that negative feedbacks don’t have a chance to act before the positive water vapour feedback gets out of control, the oceans boil, and the planet becomes too hot for liquid water to exist. Alarmingly, Hansen argues that, if we’re irresponsible about fossil fuels, it is quite possible for current climate change to eventually reach this stage. For those who have less practice separating the scientific part of their brain from the emotional part, I suggest skipping this chapter.

I would recommend this book to everyone interested in climate change. James Hansen is such an important player in climate science, and has arguably contributed more to our knowledge of climate change than just about anyone. Whether it’s for the science, for the policy discussions, or for his try at science fiction in the last chapter, it’s well worth the cover price.

Thoughts from others who have read this book are welcome in the comments, as always.

The Real Story of Climategate

A year ago today, an unidentified hacker published a zipped folder in several locations online. In this folder were approximately one thousand emails and three thousand files which had been stolen from the backup server of the Climatic Research Unit in the UK, a top centre for global temperature analysis and climate change studies. As links to the folder were passed around on blogs and online communities, a small group of people sorted through the emails, picking out a handful of phrases that could be seen as controversial, and developing a narrative which they pushed to the media with all their combined strength. “A lot is happening behind the scenes,” one blog administrator wrote. “It is not being ignored. Much is being coordinated among major players and the media. Thank you very much. You will notice the beginnings of activity on other sites now. Here soon to follow.”

This was not the work of a computer-savvy teenager who liked to hack security systems for fun. Whoever the thief was, they knew what they were looking for. They knew how valuable the emails could be in the hands of the climate change denial movement.

Skepticism is a worthy quality in science, but denial is not. A skeptic will only accept a claim given sufficient evidence, but a denier will cling to their beliefs regardless of evidence. They will relentlessly attack arguments that contradict their cause, using talking points that are full of misconceptions and well-known to be false, while blindly accepting any argument that seems to support their point of view. A skeptic is willing to change their mind. A denier is not.

There are many examples of denial in our society, but perhaps the most powerful and pervasive is climate change denial. We’ve been hearing the movement’s arguments for years, ranging from illogic (“climate changed naturally in the past, so it must be natural now”) to misrepresentation (“global warming stopped in 1998”) to flat-out lies (“volcanoes emit more carbon dioxide than humans”). Of course, climate scientists thought of these objections and ruled them out long before you and I even knew what global warming was, so in recent years, the arguments of deniers were beginning to reach a dead end. The Copenhagen climate summit was approaching, and the public was beginning to understand the basic science of human-caused climate change, and even to realize that the vast majority of the scientific community was concerned about it. A new strategy for denial and delay was needed – ideally, one that made the public lose trust in researchers. Hence, the hack at CRU, and the beginning of a disturbing new campaign to smear the reputations of climate scientists.

The contents of the emails were spun in a brilliant exercise of selective quotation. Out of context, phrases can be twisted to mean any number of things – especially if they were written as private correspondence with colleagues, rather than with public communication in mind. Think about all the emails you have sent in the past decade. Chances are, if someone tried hard enough, they could make a few sentences you had written sound like evidence of malpractice, regardless of your real actions or intentions.

For example, a mathematical “trick” (a clever calculation) to efficiently analyse data was reframed as a conspiracy to “trick” (deceive) the public into believing the world was warming. Researchers discussed how to statistically isolate and “hide the decline” in problematic tree ring data that was no longer measuring what it used to, but this quote was immediately twisted into a claim that the decline was in global temperatures: the world is cooling and scientists are hiding it from us!

Other accusations were based not on selective misquotation but on a misunderstanding of the way science works. When the researchers discussed what they felt were substandard papers that should not be published, many champions of the stolen emails shouted accusations that scientists were censoring their critics, as if all studies, no matter how weak their arguments, had a fundamental right to be published. Another email, in which a researcher privately expressed a desire to punch a notorious climate change denier, was twisted into an accusation that the scientists threatened people who disagreed with them. How was it a threat if the action was never intended to materialize, and if the supposed target was never aware of it?

These serious and potentially damaging allegations, which, upon closer examination, are nothing more than grasping at straws, were not carefully examined and evaluated by journalists – they were repeated. Early media reports bordered on the hysterical. With headlines such as “The final nail in the coffin of anthropogenic global warming” and “The worst scientific scandal of our generation”, libelous claims and wild extrapolations were published mere days after the emails were distributed. How could journalists have possibly had time to carefully examine the contents of one thousand emails? It seems much more likely that they took the shortcut of repeating the narrative of the deniers without assessing its accuracy.

Even if, for the sake of argument, all science conducted by the CRU was fraudulent, our understanding of global warming would not change. The CRU runs a global temperature dataset, but so do at least six other universities and government agencies around the world, and their independent conclusions are virtually identical. The evidence for human-caused climate change is not a house of cards that will collapse as soon as one piece is taken away. It’s more like a mountain: scrape a couple of pebbles off the top, but the mountain is still there. For respected newspapers and media outlets to ignore the many independent lines of evidence for this phenomenon in favour of a more interesting and controversial story was blatantly irresponsible, and almost no retractions or apologies have been published since.

The worldwide media attention to this so-called scandal had a profound personal impact on the scientists involved. Many of them received death threats and hate mail for weeks on end. Dr. Phil Jones, the director of CRU, was nearly driven to suicide. Another scientist, who wishes to remain anonymous, had a dead animal dumped on his doorstep and now travels with bodyguards. Perhaps the most wide-reaching impact of the issue was the realization that private correspondence was no longer a safe environment. This fear only intensified when the top climate modelling centre in Canada was broken into, in an obvious attempt to find more material that could be used to smear the reputations of climate scientists. For an occupation that relies heavily on email for cross-national collaboration on datasets and studies, the pressure to write in a way that cannot be taken out of context – a near-impossible task – amounts to a stifling of science.

Before long, the investigations into the contents of the stolen emails were completed, and one by one, they came back clear. Six independent investigations reached basically the same conclusion: despite some reasonable concerns about data archival and sharing at CRU, the scientists had shown integrity and honesty. No science had been falsified, manipulated, exaggerated, or fudged. Despite all the media hullabaloo, “climategate” hadn’t actually changed anything.

Sadly, by the time the investigations were complete, the media hullabaloo had died down to a trickle. Climategate was old news, and although most newspapers published stories on the exonerations, they were generally brief, buried deep in the paper, and filled with quotes from PR spokespeople that insisted the investigations were “whitewashed”. In fact, Scott Mandia, a meteorology professor, found that media outlets devoted five to eleven times more stories to the accusations against the scientists than they devoted to the resulting exonerations of the scientists.

Six investigations weren’t enough, though, for some stubborn American politicians who couldn’t let go of the article of faith that Climategate was proof of a vast academic conspiracy. Senator James Inhofe planned a McCarthy-like criminal prosecution of seventeen researchers, most of whom had done nothing more than occasionally correspond with the CRU scientists. The Attorney General of Virginia, Ken Cuccinelli, repeatedly filed requests to investigate Dr. Michael Mann, a prominent paleoclimatic researcher, for fraud, simply because a twelve-year-old paper by Mann had some statistical weaknesses. Ironically, the Republican Party, which prides itself on fiscal responsibility and lower government spending, continues to advocate wasting massive sums of money conducting inquiries which have already been completed multiple times.

Where are the politicians condemning the limited resources spent on the as yet inconclusive investigations into who stole these emails, and why? Who outside the scientific community is demanding apologies from the hundreds of media outlets that spread libelous accusations without evidence? Why has the ongoing smear campaign against researchers studying what is arguably the most pressing issue of our time gone largely unnoticed, and been aided by complacent media coverage?

Fraud is a criminal charge, and should be treated as such. Climate scientists, just like anyone else, have the right to be presumed innocent until proven guilty. They shouldn’t have to endure this endless harassment of being publicly labelled as frauds without evidence. However, the injustice doesn’t end there. This hate campaign is a dangerous distraction from the consequences of global climate change, a problem that becomes more difficult to solve with every year we delay. The potential consequences are much more severe, and the time we have left to successfully address it is much shorter, than the vast majority of the public realizes. Unfortunately, powerful forces are at work to keep it that way. This little tussle about the integrity of a few researchers could have consequences millennia from now – if we let it.

Update: Many other climate bloggers are doing Climategate anniversary pieces. Two great ones I read today were Bart Verheggen’s article and the transcript of John Cook’s radio broadcast. Be sure to check them out!

Odds and Ends

I must thank Michael Tobis for two pieces of reading that his blog recently pointed me to. First, a fantastic article by Bill McKibben, which everyone should print out and stick to their fridge. Here’s a taste:

Read the comments on one of the representative websites: Global warming is a “fraud” or a “plot.” Scientists are liars out to line their pockets with government grants. Environmentalism is nothing but a money-spinning “scam.” These people aren’t reading the science and thinking, I have some questions about this. They’re convinced of a massive conspiracy.

The odd and troubling thing about this stance is not just that it prevents action. It’s also profoundly unconservative. If there was ever a radical project, monkeying with the climate would surely qualify. Had the Soviet Union built secret factories to pour carbon dioxide into the atmosphere and threatened to raise the sea level and subvert the Grain Belt, the prevailing conservative response would have been: Bomb them. Bomb them back to the Holocene—to the 10,000-year period of climatic stability now unraveling, the period that underwrote the rise of human civilization that conservatism has taken as its duty to protect. Conservatism has always stressed stability and continuity; since Burke, the watchwords have been tradition, authority, heritage. The globally averaged temperature of the planet has been 57 degrees, give or take, for most of human history; we know that works, that it allows the world we have enjoyed. Now, the finest minds, using the finest equipment, tell us that it’s headed toward 61 or 62 or 63 degrees unless we rapidly leave fossil fuel behind, and that, in the words of NASA scientists, this new world won’t be “similar to that on which civilization developed and to which life on earth is adapted.” Conservatives should be leading the desperate fight to preserve the earth we were born on.

Read the rest of the article here. Highly recommended to all.

The other link I wanted to share was a new publication entitled “Science and the Media”, just released by the American Academy of Arts and Sciences (not to be confused with the American Association for the Advancement of Science – why all the acronym duplication?).

With contributions from everyone from Donald Kennedy to Alan Alda, and essays with titles from “The Scientist as Citizen” to “Civic Scientific Literacy: The Role of the Media in the Electronic Era”, I’m virtually certain that I will enjoy this one (sorry, I can’t bring myself to say things like “certain” without caveats any more). The 109-page pdf is available free of charge and can be accessed from this page, which also includes information on ordering hard copies.

In other news, the La Niña conditions in the eastern Pacific (see anomaly map above) have bumped this year’s temperatures down a bit, so January-September 2010 is now tied for the warmest on record, rather than being a clear winner. This analysis is from NCDC, however, and I’m not sure how they deal with sparse data in the Arctic (for background, see this post – a summary of one of the most interesting papers I’ve read this year). Does anyone know if GISS has an up-to-date estimate for 2010 temperatures that we could compare it to? All I can find on their website are lines and lines of raw data, and I’m not really sure how to process it myself.

That’s all for today. Enjoy the week, everyone.

Global Surface Temperature Change

I really enjoyed reading “Global Surface Temperature Change”, by James Hansen and his team at GISS. Keep in mind that it’s still in the draft stages – they haven’t submitted it to a journal yet, but they certainly plan to, and it’s a very credible team of scientists that will almost certainly get it published.

The paper is mostly about the methods of global temperature analysis – more of a review paper than an account of a single experiment. The authors’ main point is that, even when working from the same data, the problems of temperature analysis can be addressed in different ways. The two main problems are:

  • “incomplete spatial and temporal coverage” (sparse data)
  • “non-climatic influences on measurement station environment” (e.g. the urban heat island effect).

The authors explain the methods they use and why, and explore the impacts that different methods have on their results.

GISS works with temperature anomalies rather than absolute temperatures, largely because anomalies are much smoother and more consistent, geographically. In 1987, the team determined that anomalies could be safely extrapolated for a radius of 1200 km from a station and still be accurate. GISS smooths the whole map out by extrapolating from every station and averaging the overlapping regions.
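The extrapolate-and-average idea can be sketched with a toy one-dimensional example (invented station positions and anomalies – not GISS’s actual algorithm): each grid point averages every station within 1200 km, so overlapping station “circles” blend together, and points with no station in range stay missing.

```python
# Toy 1-D sketch of anomaly extrapolation. Stations and values are invented.
stations = [(0, 0.5), (1000, 0.7), (3000, 1.2)]  # (position in km, anomaly in °C)

def gridpoint_anomaly(x, radius=1200):
    """Average the anomalies of all stations within `radius` km of point x."""
    nearby = [anom for pos, anom in stations if abs(pos - x) <= radius]
    return sum(nearby) / len(nearby) if nearby else None  # None = still missing

print(gridpoint_anomaly(500))   # within 1200 km of the first two stations: blended
print(gridpoint_anomaly(2500))  # only the third station reaches this point
print(gridpoint_anomaly(5000))  # no station within 1200 km: left as missing data
```

The real analysis works on a two-dimensional grid with great-circle distances, but the overlap-and-average logic is the same.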

Extrapolating is also very useful in areas with very few stations, such as the polar regions and parts of Africa. In this map, grey indicates missing data:



The Arctic is particularly problematic, not only because its data is so sparse, but also because it has the largest anomaly of any region in the world. If you have incomplete coverage of an area that is warming so dramatically, it won’t pull its full weight in the global trend, and your result will almost certainly be too low.

This difficulty with the Arctic is the reason that GISS says 2005 is the warmest year on record, while HadCRUT, the team in England, says that 1998 is. GISS extrapolates from the stations they have, and end up getting pretty good coverage of the Arctic:

They’re assuming that areas with missing data have the same anomaly as whatever temperature stations are within 1200 km, which, as they determined in 1987, is a pretty fair assumption.

However, HadCRUT doesn’t do this extrapolating thing. When they don’t have data for an area, they just leave it out:

This might sound safer, in a way, but this method also makes an assumption. It assumes that the area has the same anomaly as the global average. And as we all know, the Arctic is warming a lot more and a lot faster than the global average. So it’s quite possible that GISS is right on this one.

Another adjustment that NASA makes is for local, anthropogenic, non-climatic effects on temperature data. The most obvious of these is the urban heat island effect. As an area becomes more urban, it gets more pavement, less vegetation, and its albedo goes down – it absorbs more heat. This often makes cities substantially warmer than the surrounding rural areas, which can obviously contaminate the temperature record. However, there are ways of eliminating urban influences from the data so we can see what the real trend is.

The first step is determining which stations are considered urban. The obvious way to do this is through population, but that’s actually not very accurate. Think of somewhere like Africa, where, even with thousands of people living in a small area, urban influences such as concrete, absence of vegetation, and exhaust aren’t usually present. A much better indication is energy use, and a good proxy for energy use that’s easy to measure is lights at night-time.

So GISS put a bit of code into their analysis that singles out stations where nightlight brightness is greater than 32 µW/m²/sr/µm, and adjusts their trends to agree with rural stations within 1200 km. If there aren’t enough rural stations within that radius, the station is simply excluded from the analysis.
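As a rough sketch of that screening step (the 32 µW/m²/sr/µm cutoff is the one quoted above, but the data and field names here are invented for illustration):

```python
# Hypothetical sketch of the urban-station screening logic described above.
BRIGHTNESS_CUTOFF = 32  # nightlight brightness in µW/m²/sr/µm

# Invented example stations; "nearby_rural" counts rural stations within 1200 km.
stations = [
    {"name": "rural_a", "brightness": 0,  "nearby_rural": 5},
    {"name": "city_b",  "brightness": 80, "nearby_rural": 3},
    {"name": "city_c",  "brightness": 50, "nearby_rural": 0},
]

def classify(station):
    """Decide how a station enters the analysis."""
    if station["brightness"] <= BRIGHTNESS_CUTOFF:
        return "use as-is"                         # dark enough: treated as rural
    if station["nearby_rural"] > 0:
        return "adjust trend to rural neighbours"  # urban, but correctable
    return "exclude"                               # urban, nothing to adjust against

for s in stations:
    print(s["name"], "->", classify(s))
```

The point of the scheme is that urban stations are never simply trusted: they are either corrected against dark-sky neighbours or dropped.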

They ran an even more rigorous check for this paper, to see just how much urban influences were contaminating the long-term trend, and it was pretty interesting.

There were enough stations considered “pitch-dark” at night, where they couldn’t detect any light, to run a global analysis all by themselves. The trend that came out was <0.01 °C/century smaller than GISS’s normal calculation, an amount of error that they described as “immeasurably small”.

The result of all this temperature analysis is a graph, with one new point every year, that is “eagerly awaited by some members of the public and the media”:

However, this graph isn’t actually as useful as this one – the 12-month running mean:

“From a climate standpoint there is nothing special about the time of year at which the calendar begins”, so instead of only measuring January-December, you can also do February-January, March-February, and so on. This way, you get a data point every month instead of every year, and more data means more accuracy. It also solves problems with short-term influences, such as El Niño, La Niña, and volcanic eruptions, that the annual graph was having. These fleeting, but fairly substantial, influences can fall completely into one calendar year or be split between two – so their influence on global temperature could be overestimated or underestimated, depending on the starting month of the calendar. The 12-month running mean is much less misleading in this fashion.
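The running-mean construction can be written in a few lines of Python (the monthly values below are made up for illustration, not real anomalies); the same function with window=60 or window=132 would give the 5-year and 11-year means discussed earlier.

```python
def running_mean(values, window=12):
    """Average every consecutive `window`-month span: Jan-Dec, Feb-Jan, ..."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# 24 months of invented anomalies, with a six-month spike (an "El Niño") inside
monthly = [0.2] * 9 + [0.8] * 6 + [0.2] * 9
smoothed = running_mean(monthly)

print(len(monthly), len(smoothed))  # one smoothed point per month, not per year
print(max(monthly), round(max(smoothed), 2))  # the spike is damped by averaging
```

Every 12-month window that contains the spike also contains surrounding months, so the smoothed maximum is well below the raw maximum – exactly the damping of El Niño and La Niña described above.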

As it is, we just set a new record for the 12-month running mean, and unless La Niña really takes off, 2010 will likely set a new record for the annual graph as well. But the authors argue that we need to start moving away from the annual graph, because it isn’t as useful.

The authors also discuss public perception of climate change, and media coverage of the issue. They say, “Our comments here about communication of this climate science to the public are our opinion…[We offer it] because it seems inappropriate to ignore the vast range of claims appearing in the media and in hopes that open discussion of these matters may help people distinguish the reality of global change sooner than would otherwise be the case.”

They make the very good point that “Lay people’s perception tends to be strongly influenced by the latest local fluctuation”, and use this winter as a case study, where a strongly negative Arctic Oscillation index caused significantly cooler-than-normal conditions across the United States and Europe. Consequently, a lot of people, especially in the US, began to doubt the reality of global warming – even though, in the world as a whole, it was the second warmest winter on record:

The authors also talk about data sharing. GISS likes to make everything freely available to the public – temperature station data, computer code, everything. However, putting it out there immediately, so that anyone can help check for flaws, has “a practical disadvantage: it allows any data flaws to be interpreted and misrepresented as machinations.” Multiple times in the past few years, when there have been minor errors that didn’t actually change anything, GISS was widely accused of making these mistakes deliberately, to “intentionally exaggerate the magnitude of global warming”. They realized this wasn’t working, so they changed their system: before releasing the data to everyone, they first put it up on a private site so that only select scientists can examine it for flaws. And, of course, this “has resulted in the criticism that GISS now ‘hides’ their data”.

Personally, I find the range and prevalence of these accusations against scientists absolutely terrifying. Look at what has become mainstream:

Scientific fraud is a very serious allegation, and it’s one thing for citizens to make it without evidence, but it’s another thing altogether for the media to repeat such claims without first investigating their validity:

I have been disgusted by the media coverage of climate science, especially over the past year, especially in the United States, and I worry what this will mean for our ability to solve the problem.

However, there is still fantastic science going on that is absolutely fascinating and essential to our understanding of global climate change. This paper was a very interesting read, and it helped me to better understand a lot of aspects of global temperature analysis.

So What Happened with ClimateGate?

Remember back in December, when the news was buzzing each day about the stolen emails from top climate researchers? They were described as “the final nail in the coffin of anthropogenic global warming”, or worse. Apparently, the scientists had written things that severely compromised the underpinnings for the idea that human activity was causing the Earth to warm. We could now all stop worrying and forget about cap-and-trade.

But that wasn’t the end of the story. There were no fewer than four independent investigations into the contents of these emails – conducted by scientists, universities, and governments, not general reporters rushing off a story about an area of science with which they were unfamiliar, and trying to make it sound interesting and controversial in the process.

So what did these investigations find? Is the Earth still warming? Are humans still responsible? Can we trust the scientific process any more, or should we throw peer-review out the window and practice Blog Science instead?

Actually, all four of the investigations concluded that absolutely no science was compromised by the contents of the emails. The CRU scientists weren’t as good as they should have been about making data easily accessible to others, but that was the only real criticism. These scientists are not frauds, although they are accused of it on a daily basis.

Pennsylvania State University, over a series of two reports, investigated the actions of their employee, Dr. Michael Mann, who is arguably at the top of the field of paleoclimatology. They found that, contrary to most accounts in the mainstream media, he did not hide or manipulate any data to exaggerate global warming, delete any emails that might seem suspicious and be subject to Freedom of Information requests, or unjustly suppress skeptical papers from publication. After a second investigation, following up on the catch-all accusation of “seriously deviating from accepted practices within the academic community”, Penn State exonerated Mann. They criticized him for occasionally sharing unpublished manuscripts with his colleagues without first obtaining the express permission of the authors, but besides that minor (and somewhat unrelated) reprimand, they found absolutely nothing wrong.

The British House of Commons investigated the actions of CRU director Phil Jones, and came to a similar conclusion. They found that his “actions were in line with common practice in the climate science community”, that he was “not part of a systematic attempt to mislead” or “subvert the peer review process”, and that “the focus on CRU…has been largely misplaced”. They criticized CRU’s lack of openness with their data, but said that the responsibility should lie with the University of East Anglia, which CRU is a part of. So these scientists should really catch up to the climate research team at NASA, for example, which publishes all of their raw data, methodologies, and computer codes online, with impeccable archives.

The University of East Anglia conducted their own investigation into the actions of CRU as a whole. They found “no hint of tailoring results to a particular agenda”, and asserted that “allegations of deliberate misrepresentation and unjustified selection of data are not valid”. They also explored the lack of transparency in CRU, but were more sympathetic. “CRU accepts with hindsight”, they write, “that they should have devoted more attention in the past to archiving data and algorithms and recording exactly what they did. At the time the work was done, they had no idea that these data would assume the importance they have today and that the Unit would have to answer detailed inquiries on earlier work.” They also note that CRU should not have had to respond to Freedom of Information requests for data which they did not own (such as weather station records).

Just last week, the final investigation, headed by Sir Muir Russell on behalf of UEA, found that “their rigour and honesty as scientists are not in doubt.” Is this starting to seem a bit repetitive? To illustrate their point, over the course of two days, they independently reconstructed the global temperature record using publicly available data, and came to the same conclusion as CRU. Again, there was the criticism that CRU was not as open as it should have been. They also noted that an obscure cover figure for a 1999 World Meteorological Organization report, constructed by Phil Jones, did not include enough caveats about what was proxy data and what was instrumental data. However, the more formally published, and much more iconic, graphs in Mann 98 and the IPCC TAR, were fine.

There have been some great comments on the results of these investigations since they were released, especially by scientists. Here are some samples:

[The CRU researchers] are honest, hardworking scientists whose reputations have been unjustifiably smeared by allegations of unscrupulous behaviour…I hope that the media will devote as much attention to this comprehensive dismissal of the allegations as it did to promoting the hysteria surrounding the email theft in the first place. Will the Daily Telegraph now retract its claim that the emails revealed “the greatest scientific scandal of our age” and apologize unreservedly to Phil Jones? Will there now be a public inquiry about the erroneous, shallow and repetitive nonsense promulgated in the media over this affair? If there is a scandal to be reported at all, it is this: the media stoked a controversy without properly investigating the issues, choosing to inflate trivialities to the level of an international scandal, without regard for the facts or individuals affected. This was a shameful chapter in the history of news reporting. -Raymond Bradley, director of the Climate System Research Center at the University of Massachusetts

The call for greater transparency and openness among scientists and their institutions is necessary and welcomed, but certainly they aren’t the only ones who deserve that reminder. What institution on the planet would pass muster under such intense scrutiny? Certainly not the U.S. government agencies, which often deny or impede FOIA requests, or global corporations like BP, Massey Energy and Koch Industries, which seem to revel in hiding information from the public all the time. More transparency is needed everywhere, not just among scientists in lab coats. -Brendan DeMelle, freelance journalist, DeSmogBlog

[The Muir-Russell report] makes a number of recommendations for improvements in processes and practices at the CRU, and so can be taken as mildly critical, especially of CRU governance. But in so doing, it never really acknowledges the problems a small research unit (varying between 3.5 and 5 FTE staff over the last decade) would have in finding the resources and funding to be an early adopter in open data and public communication, while somehow managing to do cutting edge research in its area of expertise too. -Steve Easterbrook, computer science professor at the University of Toronto

I agree with these statements. I think that we are holding scientists in general, but especially climate scientists, to a far higher standard than any other group of people in the world. We need to relax a bit and realize that scientists make mistakes, and that innocent mistakes are not evidence of fraud that will bring a long-standing theory tumbling down. We need to realize that scientists are employees like any others, who don’t always follow ideal actions in every professional situation, especially when they are under intense pressure that includes death threats and accusations of criminal activity.

However, at the same time, we need to start holding other groups of people, especially journalists, to a higher standard. Why has the media been able to get away with perpetuating serious allegations without first investigating what really happened, and without publishing explicit retractions and apologies when the people whose reputations they smeared are found innocent? Why haven’t there been four official investigations into who stole these emails, and why?