
Archive for the ‘Research Blogging’ Category

After a long hiatus – much longer than I like to think about or admit to – I am finally back. I just finished the last semester of my undergraduate degree, which was by far the busiest few months I’ve ever experienced.

This was largely due to my honours thesis, on which I spent probably three times more effort than was warranted. I built a (not very good, but still interesting) model of ocean circulation and implemented it in Python. It turns out that (surprise, surprise) it’s really hard to get a numerical solution to the Navier-Stokes equations to converge. I now have an enormous amount of respect for ocean models like MOM, POP, and NEMO, which are extremely realistic as well as extremely stable. I also feel like I know the physics governing ocean circulation inside out, which will definitely be useful going forward.

Convocation is not until early June, so I am spending the month of May back in Toronto working with Steve Easterbrook. We are finally finishing up our project on the software architecture of climate models, and writing it up into a paper which we hope to submit early this summer. It’s great to be back in Toronto, and to have a chance to revisit all of the interesting places I found the first time around.

In August I will be returning to Australia to begin a PhD in Climate Science at the University of New South Wales, with Katrin Meissner and Matthew England as my supervisors. I am so, so excited about this. It was a big decision to make but ultimately I’m confident it was the right one, and I can’t wait to see what adventures Australia will bring.

Read Full Post »

I haven’t forgotten about this project! Read the introduction and ODE derivation if you haven’t already.

Last time I derived the following ODE for temperature T at time t:

\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}

where S and τ are constants, and F(t) is the net radiative forcing at time t. Eventually I will discuss each of these terms in detail; this post will focus on S.

At equilibrium, when dT/dt = 0, the ODE reduces to T(t) = S F(t). This suggests a physical interpretation for S: it measures the equilibrium change in temperature per unit forcing, a quantity known as climate sensitivity.

A great deal of research has been conducted with the aim of quantifying climate sensitivity, through paleoclimate analyses, modelling experiments, and instrumental data. Overall, these assessments show that climate sensitivity is on the order of 3 K per doubling of CO2 (divide by 5.35 ln 2 W/m2 to convert to warming per unit forcing).

The IPCC AR4 report (note that AR5 was not yet published at the time of my calculations) compared many different probability density functions (PDFs) of climate sensitivity, shown below. They follow the same general shape – a skewed distribution with a long tail to the right – and their 5-95% confidence intervals average around 1.5 to 7 K per doubling of CO2.

Box 10.2, Figure 1 of the IPCC AR4 WG1: Probability density functions of climate sensitivity (a), 5-95% confidence intervals (b).

These PDFs generally consist of discrete data points that are not publicly available. Consequently, sampling from any existing PDF would be difficult. Instead, I chose to create my own PDF of climate sensitivity, modelled as a log-normal distribution (the exponential of a normally distributed variable) with the same shape and bounds as the existing datasets.

The challenge was to find values for μ and σ, the mean and standard deviation of the corresponding normal distribution, such that for any z sampled from the log-normal distribution,

P(z \le 1.5) = 0.05 \:\: \textrm{and} \:\: P(z \le 7) = 0.95

Since the log-normal CDF involves erf, the error function, which cannot be evaluated analytically, this two-parameter problem must be solved numerically. I built a simple particle swarm optimizer to find the solution, which consistently yielded μ = 1.1757 and σ = 0.4683.
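As a quick sanity check on those numbers (my own check in Python, not the particle swarm code), the two conditions can be handed to a generic root-finder, assuming they pin the 5th and 95th percentiles of the distribution at 1.5 and 7 K:

```python
from math import erf, log, sqrt
from scipy.optimize import fsolve

def norm_cdf(x):
    # Standard normal CDF, written in terms of the error function erf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def residuals(params):
    mu, sigma = params
    # The log-normal CDF at z is the normal CDF evaluated at (ln z - mu) / sigma
    return [norm_cdf((log(1.5) - mu) / sigma) - 0.05,
            norm_cdf((log(7.0) - mu) / sigma) - 0.95]

mu, sigma = fsolve(residuals, x0=[1.0, 0.5])
print(mu, sigma)  # approximately 1.1757 and 0.4683
```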

The upper tail of a log-normal distribution is unbounded, so I truncated the distribution at 10 K, consistent with existing PDFs (see figure above). At the beginning of each simulation, climate sensitivity in my model is sampled from this distribution and held fixed for the entire run. A histogram of 10^6 sampled points, shown below, has the desired characteristics.

Histogram of 10^6 points sampled from the log-normal distribution used for climate sensitivity in the model.
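A minimal sketch of this sampling step (my illustration in Python; the model itself is written in Matlab), using rejection to enforce the 10 K cutoff:

```python
import numpy as np

rng = np.random.default_rng()

def sample_sensitivity(mu=1.1757, sigma=0.4683, cutoff=10.0):
    # Draw from the log-normal distribution, redrawing whenever the
    # sample exceeds the 10 K truncation point.
    while True:
        s = rng.lognormal(mean=mu, sigma=sigma)
        if s <= cutoff:
            return s
```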


Note that in order to be used in the ODE, the sampled points must then be converted to units of K m2/W (warming per unit forcing) by dividing by 5.35 ln 2 W/m2, the forcing from doubled CO2.

Read Full Post »

Now that the academic summer is over, I have left Australia and returned home to Canada. It is great to be with my friends and family again, but I really miss the ocean and the giant monster bats. Not to mention the lab: after four months as a proper scientist, it’s very hard to be an undergrad again.

While I continue to settle in, move to a new apartment, and recover from jet lag (which is way worse in this direction!), here are a few pieces of reading to tide you over:

Scott Johnson from Ars Technica wrote a fabulous piece about climate modelling, and the process by which scientists build and test new components. The article is accurate and compelling, and features interviews with two of my former supervisors (Steve Easterbrook and Andrew Weaver) and lots of other great communicators (Gavin Schmidt and Richard Alley, to name a few).

I have just started reading A Short History of Nearly Everything by Bill Bryson. So far, it is one of the best pieces of science writing I have ever read. As well as being funny and easy to understand, it makes me excited about areas of science I haven’t studied since high school.

Finally, my third and final paper from last summer in Victoria was published in the August edition of Journal of Climate. The full text (subscription required) is available here. It is a companion paper to our recent Climate of the Past study, and compares the projections of EMICs (Earth System Models of Intermediate Complexity) when forced with different RCP scenarios. In a nutshell, we found that even after anthropogenic emissions fall to zero, it takes a very long time for CO2 concentrations to recover, even longer for global temperatures to start falling, and longer still for sea level rise (caused by thermal expansion alone, i.e. neglecting the melting of ice sheets) to stabilize, let alone reverse.

Read Full Post »

Last time I introduced the concept of a simple climate model which uses stochastic techniques to simulate uncertainty in our knowledge of the climate system. Here I will derive the backbone of this model, an ODE describing the response of global temperature to net radiative forcing. This derivation is based on unpublished work by Nathan Urban – many thanks!

In reality, the climate system should be modelled not as a single ODE, but as a coupled system of hundreds of PDEs in four dimensions. Such a task is about as arduous as numerical science can get, but dozens of research groups around the world have built GCMs (General Circulation Models, or Global Climate Models, depending on who you talk to) which come quite close to this ideal.

Each GCM has taken hundreds of person-years to develop, and I only had eight weeks. So for the purposes of this project, I treat the Earth as a spatially uniform body with a single temperature. This is clearly a huge simplification but I decided it was necessary.

Let’s start by defining T1(t) to be the absolute temperature of this spatially uniform Earth at time t, and let its heat capacity be C. Therefore,

C \: T_1(t) = E

where E is the change in energy required to warm the Earth from 0 K to temperature T1. Taking the time derivative of both sides,

C \: \frac{dT_1}{dt} = \frac{dE}{dt}

Now, divide through by A, the surface area of the Earth:

c \: \frac{dT_1}{dt} = \frac{1}{A} \frac{dE}{dt}

where c = C/A is the heat capacity per unit area. Note that the right side of the equation, a change in energy per unit time per unit area, has units of W/m2. We can express this as the difference of incoming and outgoing radiative fluxes, I(t) and O(t) respectively:

c \: \frac{dT_1}{dt} = I(t) - O(t)

By the Stefan-Boltzmann Law,

c \: \frac{dT_1}{dt} = I(t) - \epsilon \sigma T_1(t)^4

where ϵ is the emissivity of the Earth and σ is the Stefan-Boltzmann constant.

To consider the effect of a change in temperature, suppose that T1(t) = T0 + T(t), where T0 is an initial equilibrium temperature and T(t) is a temperature anomaly. Substituting into the equation,

c \: \frac{d(T_0 + T(t))}{dt} = I(t) - \epsilon \sigma (T_0 + T(t))^4

Noting that T0 is a constant, and also factoring the right side,

c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + \tfrac{T(t)}{T_0})^4

Since the absolute temperature of the Earth is around 280 K, and we are interested in perturbations of around 5 K, we can assume that T(t)/T0 ≪ 1. So we can linearize (1 + T(t)/T0)^4 using a Taylor expansion about T(t) = 0:

c \: \frac{dT}{dt} = I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0} + O[(\tfrac{T(t)}{T_0})^2])

\approx I(t) - \epsilon \sigma T_0^4 (1 + 4 \tfrac{T(t)}{T_0})

= I(t) - \epsilon \sigma T_0^4 - 4 \epsilon \sigma T_0^3 T(t)

Next, let O0 = ϵσT0^4 be the initial outgoing flux. So,

c \: \frac{dT}{dt} = I(t) - O_0 - 4 \epsilon \sigma T_0^3 T(t)

Let F(t) = I(t) - O0 be the radiative forcing at time t. Making this substitution as well as dividing by c, we have

\frac{dT}{dt} = \frac{F(t) - 4 \epsilon \sigma T_0^3 T(t)}{c}

Dividing each term by 4ϵσT0^3 and rearranging the numerator,

\frac{dT}{dt} = - \frac{T(t) - \tfrac{1}{4 \epsilon \sigma T_0^3} F(t)}{\tfrac{c}{4 \epsilon \sigma T_0^3}}

Finally, let S = 1/(4ϵσT0^3) and τ = cS. Our final equation is

\frac{dT}{dt} = - \frac{T(t) - S F(t)}{\tau}

While S depends on the initial temperature T0, all of the model runs for this project begin in the preindustrial period when global temperature is approximately constant. Therefore, we can treat S as a parameter independent of initial conditions. As I will show in the next post, the uncertainty in S based on climate system dynamics far overwhelms any error we might introduce by disregarding T0.
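To make the behaviour of this equation concrete, here is a minimal sketch (my own Python, not the model's actual Matlab code) that integrates it with forward Euler, using illustrative values of S and τ:

```python
import numpy as np

S = 0.8    # climate sensitivity in K/(W/m^2) -- illustrative (~3 K per doubling of CO2)
tau = 8.0  # response timescale in years -- illustrative value
dt = 0.1   # time step in years

t = np.arange(0.0, 100.0, dt)
F = 3.7 * t / t[-1]      # hypothetical linear forcing ramp up to roughly doubled CO2
T = np.zeros_like(t)     # temperature anomaly, starting from equilibrium

for i in range(len(t) - 1):
    T[i + 1] = T[i] + dt * (-(T[i] - S * F[i]) / tau)
```

The structure is easy to read off: T(t) relaxes toward the moving equilibrium S F(t) with an e-folding timescale of τ.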

Read Full Post »

This winter I took a course in computational physics, which has probably been my favourite undergraduate course to date. Essentially it was an advanced numerical methods course, but from a very practical point of view. We got a lot of practice using numerical techniques to solve realistic problems, rather than just analysing error estimates and proving conditions of convergence. As a math student I found this refreshing, and incredibly useful for my research career.

We all had to complete a term project of our choice, and I decided to build a small climate model. I was particularly interested in the stochastic techniques taught in the course, and given that modern GCMs and EMICs are almost entirely deterministic, it was possible that I could contribute something original to the field.

The basic premise of my model is this: All anthropogenic forcings are deterministic, and chosen by the user. Everything else is determined stochastically: parameters such as climate sensitivity are sampled from probability distributions, whereas natural forcings are randomly generated but follow the same general pattern that exists in observations. The idea is to run this model with the same anthropogenic input hundreds of times and build up a probability distribution of future temperature trajectories. The spread in possible scenarios is entirely due to uncertainty in the natural processes involved.
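As a rough self-contained sketch of that premise (in Python rather than the model's Matlab, with made-up stand-ins for the forcing, the natural variability, and the temperature model):

```python
import numpy as np

rng = np.random.default_rng()

def run_once(anthro, S, tau=8.0, dt=0.1):
    # One realization: the prescribed anthropogenic forcing plus a crude random
    # stand-in for natural forcing, fed through a toy temperature equation.
    natural = 0.3 * rng.standard_normal(len(anthro))
    F = anthro + natural
    T = np.zeros_like(F)
    for i in range(len(F) - 1):
        T[i + 1] = T[i] + dt * (-(T[i] - S * F[i]) / tau)
    return T

anthro = np.linspace(0.0, 3.7, 1000)  # the same deterministic forcing every run
runs = np.array([run_once(anthro, S=rng.lognormal(1.1757, 0.4683) / (5.35 * np.log(2)))
                 for _ in range(500)])

# The spread across runs builds up a probability distribution of trajectories.
lo, med, hi = np.percentile(runs, [5, 50, 95], axis=0)
```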

This approach mimics the real world, because the only part of the climate system we have full control over is our own actions. Other influences on climate are out of our control, sometimes poorly understood, and often unpredictable. It is just begging to be modelled as a stochastic system. (Not that the climate is actually stochastic, of course; arguably nothing truly is, not even random number generators – unless quantum mechanics offers a counterexample? But that's a discussion for another time.)

A word of caution: I built this model in about eight weeks. As such, it is highly simplified and leaves out a lot of processes. You should never ever use it for real climate projections. This project is purely an exercise in numerical methods, and an exploration of the possible role of stochastic techniques in climate modelling.

Over the coming weeks, I will write a series of posts that explains each component of my simple stochastic climate model in detail. I will show the results from some sample simulations, and discuss how one might apply these stochastic techniques to existing GCMs. I also plan to make the code available to anyone who’s interested – it’s written in Matlab, although I might translate it to a free language like Python, partly because I need an excuse to finally learn Python.

I am very excited to finally share this project with you all! Check back soon for the next installment.

Read Full Post »

You may have already heard that carbon dioxide concentrations have surpassed 400 ppm. The most famous monitoring station, Mauna Loa Observatory in Hawaii, reached this value on May 9th. Due to the seasonal cycle, CO2 levels began to decline almost immediately thereafter, but next year they will easily blow past 400 ppm.

Of course, this milestone is largely arbitrary. There’s nothing inherently special about 400 ppm. But it’s a good reminder that while we were arguing about taxation, CO2 levels continued to quietly tick up and up.


In happier news, John Cook and others have just published the most exhaustive survey of the peer-reviewed climate literature to date. Read the paper here (open access), and a detailed but accessible summary here. Unsurprisingly, they found the same 97% consensus that has come up over and over again.

Cook et al read the abstracts of nearly 12 000 papers published between 1991 and 2011 – every single hit from the ISI Web of Science with the keywords “global climate change” or “global warming”. Several different people categorized each abstract, and the authors were contacted whenever possible to categorize their own papers. Cross-checking the ratings in several independent ways like this makes the results more reliable.

Around two-thirds of the studies, particularly the more recent ones, didn’t mention the cause of climate change. This is unsurprising, since human-caused warming has been common knowledge in the field for years. Similarly, seismology papers don’t usually mention that plate tectonics causes earthquakes, particularly in the abstracts, where space is limited.

Among the papers which did express a position, 97.1% said climate change was human-caused. Again, unsurprising to anyone working in the field, but it’s news to many members of the public. The study has been widely covered in the mainstream media – everywhere from The Guardian to The Australian – and even President Obama’s Twitter feed.


Congratulations are also due to Andrew Weaver, my supervisor from last summer, who has just been elected to the British Columbia provincial legislature. He is not only the first-ever Green Party MLA in BC’s history, but also (as far as I know) the first-ever climate scientist to hold public office.

Governments the world over are sorely in need of officials who actually understand the problem of climate change. Nobody fits this description better than Andrew, and I think he is going to be great. The large margin by which he won also indicates that public support for climate action is perhaps higher than we thought.


Finally, my second publication came out this week in Climate of the Past. It describes an EMIC intercomparison project the UVic lab conducted for the next IPCC report, which I helped out with while I was there. The project was so large that we split the results into two papers (the second of which is in press in Journal of Climate). This paper covers the historical experiments – comparing model results from 850-2005 to observations and proxy reconstructions – as well as some idealized experiments designed to measure metrics such as climate sensitivity, transient climate response, and carbon cycle feedbacks.

Read Full Post »

It seems that every post I write begins with an apology for not writing more. I’ve spent the past few months writing another set of exams (only one more year to go), building and documenting two simple climate models for term projects (much more on that later), and moving to Australia!

This (Northern Hemisphere) summer I have a job at the Climate Change Research Centre at the University of New South Wales in Sydney, which has a close partnership with the UVic Climate Lab (where I worked last summer). I am working with Dr. Katrin Meissner, who primarily studies ocean, carbon cycle, and paleoclimate modelling. We have lots of plans for exciting projects to work on over the next four months.

Australia is an interesting place. Given that it’s nearly 20 hours away by plane, it has a remarkably similar culture to Canada. The weather is much warmer, though (yesterday it dropped down to 15 C and everyone was complaining about the cold) and the food is fantastic. The birds are more colourful (Rainbow Lorikeets are so common that some consider them pests) and the bats are as big as ravens. Best of all, there is an ocean. I think I am going to like it here.

Read Full Post »
