We just published in Nature Geoscience

It turns out that when you submit a paper to a journal like Nature Geoscience “just in case, we have nothing to lose, they’ll probably reject it straight away”…sometimes you are unexpectedly successful.

Read it here!

More detailed post to come…

My Cartoon Supervisors

My supervisors are so distinguished that they now exist in cartoon form! If that’s not the mark of a successful science communicator, I’m not sure what is.

Here is Katrin:

And here is Matt:

A former supervisor of mine also got a cartoon:

There are 97 cartoons like this over at Skeptical Science, a site which is quickly becoming a force of nature. This campaign reached millions of people through Twitter alone, and was even retweeted by President Obama.

Milestones

You may have already heard that carbon dioxide concentrations have surpassed 400 ppm. The most famous monitoring station, Mauna Loa Observatory in Hawaii, reached this value on May 9th. Due to the seasonal cycle, CO2 levels began to decline almost immediately thereafter, but next year they will easily blow past 400 ppm.

Of course, this milestone is largely arbitrary. There’s nothing inherently special about 400 ppm. But it’s a good reminder that while we were arguing about taxation, CO2 levels continued to quietly tick up and up.


In happier news, John Cook and others have just published the most exhaustive survey of the peer-reviewed climate literature to date. Read the paper here (open access), and a detailed but accessible summary here. Unsurprisingly, they found the same 97% consensus that has come up over and over again.

Cook et al read the abstracts of nearly 12 000 papers published between 1991 and 2011 – every single hit from the ISI Web of Science with the keywords “global climate change” or “global warming”. Several different people categorized each abstract, and the authors were contacted whenever possible to categorize their own papers. Using several different methods like this makes the results more reliable.

Around two-thirds of the studies, particularly the more recent ones, didn’t mention the cause of climate change. This is unsurprising, since human-caused warming has been common knowledge in the field for years. Similarly, seismology papers don’t usually mention that plate tectonics cause earthquakes, particularly in the abstracts where space is limited.

Among the papers which did express a position, 97.1% said climate change was human-caused. Again, unsurprising to anyone working in the field, but it’s news to many members of the public. The study has been widely covered in the mainstream media – everywhere from The Guardian to The Australian – and even President Obama’s Twitter feed.


Congratulations are also due to Andrew Weaver, my supervisor from last summer, who has just been elected to the British Columbia provincial legislature. He is not only the first-ever Green Party MLA in BC’s history, but also (as far as I know) the first-ever climate scientist to hold public office.

Governments the world over are sorely in need of officials who actually understand the problem of climate change. Nobody fits this description better than Andrew, and I think he is going to be great. The large margin by which he won also indicates that public support for climate action is perhaps higher than we thought.


Finally, my second publication came out this week in Climate of the Past. It describes an EMIC intercomparison project the UVic lab conducted for the next IPCC report, which I helped out with while I was there. The project was so large that we split the results into two papers (the second of which is in press in Journal of Climate). This paper covers the historical experiments – comparing model results from 850-2005 to observations and proxy reconstructions – as well as some idealized experiments designed to measure metrics such as climate sensitivity, transient climate response, and carbon cycle feedbacks.

From the Other Side of the World…

It seems that every post I write begins with an apology for not writing more. I’ve spent the past few months writing another set of exams (only one more year to go), building and documenting two simple climate models for term projects (much more on that later), and moving to Australia!

This (Northern Hemisphere) summer I have a job at the Climate Change Research Centre at the University of New South Wales in Sydney, which has a close partnership with the UVic Climate Lab (where I worked last summer). I am working with Dr. Katrin Meissner, who primarily studies ocean, carbon cycle, and paleoclimate modelling. We have lots of plans for exciting projects to work on over the next four months.

Australia is an interesting place. For somewhere nearly 20 hours away by plane, it has a remarkably similar culture to Canada. The weather is much warmer, though (yesterday it dropped down to 15 °C and everyone was complaining about the cold), and the food is fantastic. The birds are more colourful (Rainbow Lorikeets are so common that some consider them pests) and the bats are as big as ravens. Best of all, there is an ocean. I think I am going to like it here.

Climate Change and Atlantic Circulation

Today my very first scientific publication is appearing in Geophysical Research Letters. During my summer at UVic, I helped out with a model intercomparison project regarding the effect of climate change on Atlantic circulation, and was listed as a coauthor on the resulting paper. I suppose I am a proper scientist now, rather than just a scientist larva.

The Atlantic meridional overturning circulation (AMOC for short) is an integral part of the global ocean conveyor belt. In the North Atlantic, a massive amount of water near the surface, cooling down on its way to the poles, becomes dense enough to sink. From there it goes on a thousand-year journey around the world – inching its way along the bottom of the ocean, looping around Antarctica – before finally warming up enough to rise back to the surface. A whole multitude of currents depend on the AMOC, most famously the Gulf Stream, which keeps Europe pleasantly warm.

Some have hypothesized that climate change might shut down the AMOC: the extra heat and freshwater (from melting ice) coming into the North Atlantic could conceivably lower the density of surface water enough to stop it sinking. This happened as the world was coming out of the last ice age, in an event known as the Younger Dryas: a huge ice sheet over North America suddenly gave way, drained into the North Atlantic, and shut down the AMOC. Europe, cut off from the Gulf Stream and at the mercy of the ice-albedo feedback, experienced another thousand years of glacial conditions.

A shutdown today would not lead to another ice age, but it could cause some serious regional cooling over Europe, among other impacts that we don’t fully understand. Today, though, there’s a lot less ice to start with. Could the AMOC still shut down? If not, how much will it weaken due to climate change? So far, scientists have answered these two questions with “probably not” and “something like 25%” respectively. In this study, we analysed 30 climate models (25 complex CMIP5 models, and 5 smaller, less complex EMICs) and came up with basically the same answer. It’s important to note that none of the models include dynamic ice sheets (computational glacial dynamics is a headache and a half), which might affect our results.

The models ran the four standard RCP experiments from 2006 to 2100. Not every model completed every RCP, and some extended their simulations to 2300 or 3000. In total, there were over 30 000 model years of data. We measured the “strength” of the AMOC using the standard unit Sv (Sverdrups), where 1 Sv is 1 million cubic metres of water per second.

Only two models simulated an AMOC collapse, and only at the tail end of the most extreme scenario (RCP8.5, which quite frankly gives me a stomachache). Bern3D, an EMIC from Switzerland, showed a MOC strength of essentially zero by the year 3000; CNRM-CM5, a GCM from France, stabilized near zero by 2300. In general, the models showed only a moderate weakening of the AMOC by 2100, with best estimates ranging from a 22% drop for RCP2.6 to a 40% drop for RCP8.5 (with respect to preindustrial conditions).

Are these somewhat-reassuring results trustworthy? Or is the Atlantic circulation in today’s climate models intrinsically too stable? Our model intercomparison also addressed that question, using a neat little scalar metric known as Fov: the net amount of freshwater travelling from the AMOC to the South Atlantic.

The current thinking in physical oceanography is that the AMOC is more or less binary – it’s either “on” or “off”. When AMOC strength is below a certain level (let’s call it A), its only stable state is “off”, and the strength will converge to zero as the currents shut down. When AMOC strength is above some other level (let’s call it B), its only stable state is “on”, and if you were to artificially shut it off, it would bounce right back up to its original level. However, when AMOC strength is between A and B, both conditions can be stable, so whether it’s on or off depends on where it started. This phenomenon is known as hysteresis, and is found in many systems in nature.

This figure was not part of the paper. I made it just now in MS Paint.

Here’s the key part: when AMOC strength is less than A or greater than B, Fov is positive and the system is monostable. When AMOC strength is between A and B, Fov is negative and the system is bistable. The physical justification for Fov is its association with the salt advection feedback, the sign of which is opposite Fov: positive Fov means the salt advection feedback is negative (i.e. stabilizing the current state, so monostable); a negative Fov means the salt advection feedback is positive (i.e. reinforcing changes in either direction, so bistable).
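None of this hinges on ocean-specific details; the on/off behaviour can be illustrated with a toy one-dimensional system (purely illustrative, with made-up constants, not anything from the paper): a variable with two stable fixed points ends up in a different state depending on which side of the unstable threshold it starts on.

```python
# Toy bistable system (illustrative only): dq/dt = q(1-q)(q-0.3) has stable
# fixed points at q=0 ("off") and q=1 ("on"), separated by an unstable
# threshold at q=0.3. Forward-Euler integration from either side of the
# threshold converges to a different stable state.

def integrate(q, dt=0.1, steps=2000):
    for _ in range(steps):
        q += dt * q * (1.0 - q) * (q - 0.3)
    return q

q_off = integrate(0.25)  # starts just below the threshold -> decays to "off"
q_on = integrate(0.35)   # starts just above the threshold -> recovers to "on"
```

Two starting points only 0.1 apart end up in completely different states, which is exactly why knowing whether the real AMOC sits in the bistable regime matters so much.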

Most observational estimates (largely ocean reanalyses) have Fov as slightly negative. If models’ AMOCs really were too stable, their Fov values should be positive. In our intercomparison, we found both positives and negatives – the models were kind of all over the place with respect to Fov. So maybe some models are overly stable, but certainly not all of them, or even the majority.

As part of this project, I got to write a new section of code for the UVic model, which calculated Fov each timestep and included the annual mean in the model output. Software development on a large, established project with many contributors can be tricky, and the process involved a great deal of head-scratching, but it was a lot of fun. Programming is so satisfying.
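The code I actually wrote belongs to the UVic model (and is in Fortran), but the quantity itself is simple enough to sketch. Assuming velocity and salinity on a depth-by-longitude section at the Atlantic’s southern boundary (all names and numbers below are hypothetical, and this is an illustration of the idea, not the model’s code), Fov is the freshwater transport carried by the overturning part of the flow:

```python
import numpy as np

S0 = 35.0  # reference salinity (psu) -- an assumed round value

def fov(v, s, dx, dz):
    """Sketch of the overturning freshwater transport Fov across a zonal
    section. v: meridional velocity (m/s) and s: salinity (psu), both on a
    (depth, longitude) grid; dx, dz: grid spacing in metres."""
    V = (v * dx).sum(axis=1)       # zonally integrated transport per layer (m^2/s)
    V = V - V.mean()               # remove the net flow through the section
    s_zonal = s.mean(axis=1)       # zonal-mean salinity per layer
    return -(V * s_zonal * dz).sum() / S0  # m^3/s; divide by 1e6 for Sv

# Sanity check: with uniform salinity the overturning moves water around
# but carries no net freshwater, so Fov vanishes.
rng = np.random.default_rng(0)
v = rng.standard_normal((19, 100))
s = np.full((19, 100), S0)
print(abs(fov(v, s, dx=2.0e5, dz=200.0)))  # effectively zero
```

The sign convention matches the discussion above: the salinity structure of the overturning flow determines whether Fov comes out positive (monostable) or negative (bistable).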

Beyond that, my main contribution to the project was creating the figures and calculating the multi-model statistics, which got a bit unwieldy as the model count approached 30, but we made it work. I am now extremely well-versed in IDL graphics keywords, which I’m sure will come in handy again. Unfortunately I don’t think I can reproduce any figures here, as the paper’s not open-access.

I was pretty paranoid while coding and doing calculations, though – I kept worrying that I would make a mistake, never catch it, and have it dredged out by contrarians a decade later (“Kate-gate”, they would call it). As a climate scientist, I suppose that comes with the job these days. But I can live with it, because this stuff is just so darned interesting.

Permafrost Projections

During my summer at UVic, two PhD students at the lab (Andrew MacDougall and Chris Avis) as well as my supervisor (Andrew Weaver) wrote a paper modelling the permafrost carbon feedback, which was recently published in Nature Geoscience. I read a draft version of this paper several months ago, and am very excited to finally share it here.

Studying the permafrost carbon feedback is at once exciting (because it has been left out of climate models for so long) and terrifying (because it has the potential to be a real game-changer). There is about twice as much carbon frozen into permafrost as there is floating around in the entire atmosphere. As high CO2 levels cause the world to warm, some of the permafrost will thaw and release this carbon as more CO2 – causing more warming, and so on. Previous climate model simulations involving permafrost have measured the CO2 released during thaw, but haven’t actually applied it to the atmosphere and allowed it to change the climate. This UVic study is the first to close that feedback loop (in climate model speak we call this “fully coupled”).

The permafrost part of the land component was already in place – it was developed for Chris’s PhD thesis, and implemented in a previous paper. It involved converting the existing single-layer soil model to a multi-layer model in which some layers can remain frozen year-round. Also, instead of the four RCP scenarios, the authors used DEPs (Diagnosed Emission Pathways): exactly the same as the RCPs, except that CO2 emissions, rather than concentrations, are given to the model as input. This was necessary so that extra emissions from permafrost thaw could feed back on the CO2 concentrations the model calculates.

As a result, permafrost added an extra 44, 104, 185, and 279 ppm of CO2 to the atmosphere for DEP 2.6, 4.5, 6.0, and 8.5 respectively. However, the extra warming by 2100 was about the same for each DEP, with central estimates around 0.25 °C. Interestingly, the logarithmic effect of CO2 on climate (adding 10 ppm to the atmosphere causes more warming when the background concentration is 300 ppm than when it is 400 ppm) managed to cancel out the increasing amounts of permafrost thaw. By 2300, the central estimates of extra warming were more variable, and ranged from 0.13 to 1.69 °C when full uncertainty ranges were taken into account. Altering climate sensitivity (by means of an artificial feedback), in particular, had a large effect.
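The logarithmic effect is easy to see with a back-of-the-envelope calculation, using the usual approximation that equilibrium warming scales with log₂ of the concentration ratio. The sensitivity of 3 °C per doubling here is an illustrative round number, not a value from the paper:

```python
import math

S = 3.0  # illustrative climate sensitivity: deg C per doubling of CO2

def warming(c_old, c_new):
    """Equilibrium warming from raising CO2 from c_old to c_new (ppm),
    assuming warming is logarithmic in concentration."""
    return S * math.log2(c_new / c_old)

# The same 10 ppm causes less warming on a higher background concentration:
print(round(warming(300, 310), 3))  # ~0.142 deg C
print(round(warming(400, 410), 3))  # ~0.107 deg C
```

So each extra slug of permafrost CO2 arrives on a higher background and buys slightly less warming, which is how the four DEPs ended up with similar extra warming by 2100 despite very different amounts of thaw.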

As a result of the thawing permafrost, the land switched from a carbon sink (net CO2 absorber) to a carbon source (net CO2 emitter) decades earlier than it would have otherwise – before 2100 for every DEP. The ocean kept absorbing carbon, but in some scenarios the carbon source of the land outweighed the carbon sink of the ocean. That is, even without human emissions, the land was emitting more CO2 than the ocean could soak up. Concentrations kept climbing indefinitely, even if human emissions suddenly dropped to zero. This is the part of the paper that made me want to hide under my desk.

This scenario wasn’t too hard to reach, either – if climate sensitivity was greater than 3°C warming per doubling of CO2 (about a 50% chance, as 3°C is the median estimate by scientists today), and people followed DEP 8.5 to at least 2013 before stopping all emissions (a very intense scenario, but I wouldn’t underestimate our ability to dig up fossil fuels and burn them really fast), permafrost thaw ensured that CO2 concentrations kept rising on their own in a self-sustaining loop. The scenarios didn’t run past 2300, but I’m sure that if you left it long enough the ocean would eventually win and CO2 would start to fall. The ocean always wins in the end, but things can be pretty nasty until then.

As if that weren’t enough, the paper goes on to list a whole bunch of reasons why their values are likely underestimates. For example, they assumed that all emissions from permafrost were CO2, rather than the much stronger CH4 which is easily produced in oxygen-depleted soil; the UVic model is also known to underestimate Arctic amplification of climate change (how much faster the Arctic warms than the rest of the planet). Most of the uncertainties – and there are many – are in the direction we don’t want, suggesting that the problem will be worse than what we see in the model.

This paper went in my mental “oh shit” folder, because it made me realize that we are starting to lose control over the climate system. No matter what path we follow – even if we manage slightly negative emissions, i.e. artificially removing CO2 from the atmosphere – this model suggests we’ve got an extra 0.25°C in the pipeline due to permafrost. It doesn’t sound like much, but add that to the 0.8°C we’ve already seen, and take technological inertia into account (it’s simply not feasible to stop all emissions overnight), and we’re coming perilously close to the big nonlinearity (i.e. tipping point) that many argue is between 1.5 and 2°C. Take political inertia into account (most governments are nowhere near even creating a plan to reduce emissions), and we’ve long passed it.

Just because we’re probably going to miss the first tipping point, though, doesn’t mean we should throw up our hands and give up. 2°C is bad, but 5°C is awful, and 10°C is unthinkable. The situation can always get worse if we let it, and how irresponsible would it be if we did?

Ten Things I Learned in the Climate Lab

  1. Scientists do not blindly trust their own models of global warming. In fact, nobody is more aware of a model’s specific weaknesses than the developers themselves. Most of our time is spent comparing model output to observations, searching for discrepancies, and hunting down bugs.
  2. If 1.5 C global warming above preindustrial temperatures really does represent the threshold for “dangerous climate change” (rather than 2 C, as some have argued), then we’re in trouble. Stabilizing global temperatures at this level isn’t just climatically difficult, it’s also mathematically difficult. Given current global temperatures, and their current rate of change, it’s nearly impossible to smoothly extend the curve to stabilize at 1.5 C without overshooting.
  3. Sometimes computers do weird things. Some bugs appear for the most illogical reasons (last week, the act of declaring a variable altered every single metric of the model output). Other bugs show up once, then disappear before you can track down the source, and you’re never able to reproduce them. It’s not uncommon to fix a problem without ever understanding why the problem occurred in the first place.
  4. For anyone working with climate model output, one of the best tools to have in your arsenal is the combination of IDL and NetCDF. Hardly an hour of work goes by when I don’t use one or both of these programming tools in some way.
  5. Developing model code for the first time is a lot like moving to a new city. At first you wander around aimlessly, clutching your map and hesitantly asking for directions. Then you begin to recognize street names and orient yourself around landmarks. Eventually you’re considered a resident of the city, as your little house is there on the map with your name on it. You feel inordinately proud of the fact that you managed to build that house without burning the entire city down in the process.
  6. The RCP 8.5 scenario is really, really scary. Looking at the output from that experiment is enough to give me a stomachache. Let’s just not let that scenario happen, okay?
  7. It’s entirely possible to get up in the morning and just decide to be enthusiastic about your work. You don’t have to pretend, or lie to yourself – all you do is consciously choose to revel in the interesting discoveries, and to view your setbacks as challenges rather than chores. It works really well, and everything is easier and more fun as a result.
  8. Climate models are fabulous experimental subjects. If you run the UVic model twice with the same code, data, options, and initial state, you get exactly the same results. (I’m not sure if this holds for more complex GCMs which include elements of random weather variation.) For this reason, if you change one factor, you can be sure that the model is reacting only to that factor. Control runs are completely free of external influences, and deconstructing confounding variables is only a matter of CPU time. Most experimental scientists don’t have this element of perfection in their subjects – it makes me feel very lucky.
  9. The permafrost is in big trouble, and scientists are remarkably calm about it.
  10. Tasks that seem impossible at first glance are often second nature by the end of the day. No bug lasts forever, and no problem goes unsolved if you exert enough effort.

Cumulative Emissions and Climate Models

As my summer research continues, I’m learning a lot about previous experiments that used the UVic ESCM (Earth System Climate Model), as well as beginning to run my own. Over the past few years, the UVic model has played an integral role in a fascinating little niche of climate research: the importance of cumulative carbon emissions.

So far, global warming mitigation policies have focused on choosing an emissions pathway: making a graph of desired CO2 emissions vs. time, where emissions slowly reduce to safer levels. However, it turns out that the exact pathway we take doesn’t actually matter. All that matters is the area under the curve: the total amount of CO2 we emit, or “cumulative emissions” (Zickfeld et al, 2009). So if society decides to limit global warming to 2°C (a common target), there is a certain amount of total CO2 that the entire world is allowed to emit. We can use it all up in the first ten years and then emit nothing, or we can spread it out – either way, it will lead to the same amount of warming.

If you delve a little deeper into the science, it turns out that temperature change is directly proportional to cumulative emissions (Matthews et al, 2009). In other words, if you draw a graph of the total amount of warming vs. total CO2 emitted, it will be a straight line.

This is counter-intuitive, because the intermediate processes are definitely not straight lines. Firstly, the graph of warming vs. CO2 concentrations is logarithmic: as carbon dioxide builds up in the atmosphere, each extra molecule added has less and less effect on the climate.

However, as carbon dioxide builds up and the climate warms, carbon sinks (which suck up some of our emissions) become less effective. For example, warmer ocean water can’t hold as much CO2, and trees subjected to heat stress often die and stop photosynthesizing. Processes that absorb CO2 become less effective, so more of our emissions actually stay in the air. Consequently, the graph of CO2 concentrations vs. CO2 emissions is exponential.

These two relationships, warming vs. concentrations and concentrations vs. emissions, more or less cancel each other out, making total warming vs. total emissions linear. It doesn’t matter how much CO2 was in the air to begin with, or how fast the allowable emissions get used up. Once society decides how much warming is acceptable, all we need to do is nail down the proportionality constant (the slope of the straight line) in order to find out how much carbon we have to work with. Then, that number can be handed to economists, who will figure out the best way to spread out those emissions while causing minimal financial disruption.
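Under these assumptions the cancellation is exact: if warming is logarithmic in concentration, T = λ·log₂(C/C₀), and concentration is exponential in cumulative emissions, C = C₀·e^(kE), then T = (λk/ln 2)·E – a straight line. A quick numerical check (λ and k are made-up illustrative constants, not fitted values):

```python
import math

lam = 3.0   # warming per doubling of CO2 (deg C), illustrative
k = 0.6     # concentration growth rate per trillion tonnes emitted, illustrative
C0 = 280.0  # preindustrial CO2 (ppm)

def temp(E):
    C = C0 * math.exp(k * E)        # exponential: concentration vs emissions
    return lam * math.log2(C / C0)  # logarithmic: warming vs concentration

# Equal steps in cumulative emissions give equal steps in warming:
steps = [temp(E) for E in (0.5, 1.0, 1.5, 2.0)]
diffs = [round(b - a, 6) for a, b in zip(steps, steps[1:])]
print(diffs)  # all identical -> linear
```

The composition of the two curves is linear no matter what values λ, k, and C₀ take, which is the mathematical heart of the cumulative-emissions result.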

Finding that slope is a little tricky, though. Best estimates, using models as well as observations, generally fall between 1.5°C and 2°C for every trillion tonnes of carbon emitted (Matthews et al, 2009; Allen et al, 2009; Zickfeld et al, 2009). Keep in mind that we’ve already emitted about 0.6 trillion tonnes of carbon (University of Oxford). Following a theme commonly seen in climate research, the uncertainty is larger on the high end of these slope estimates than on the low end. So if the real slope is actually lower than our best estimate, it’s probably only a little bit lower; if it’s actually higher than our best estimate, it could be much higher, and the problem could be much worse than we thought.
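To make the slope concrete (using only the numbers quoted above, nothing new): with a 2°C target, a slope between 1.5 and 2°C per trillion tonnes of carbon, and 0.6 trillion tonnes already emitted, the remaining budget is a one-line calculation.

```python
target = 2.0   # deg C of allowable warming
emitted = 0.6  # trillion tonnes of carbon already emitted

for slope in (1.5, 2.0):  # deg C per trillion tonnes (the quoted range)
    total = target / slope  # total allowable cumulative emissions (TtC)
    print(f"slope {slope}: {total - emitted:.2f} TtC remaining")
```

Depending on where the true slope falls, that leaves somewhere between roughly 0.4 and 0.73 trillion tonnes of carbon – and the asymmetric uncertainty means the low end of that budget is the more dangerous surprise.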

Also, this approach ignores other human-caused influences on global temperature, most prominently sulfate aerosols (which cause cooling) and greenhouse gases other than carbon dioxide (which cause warming). Right now, these two influences basically cancel, which is convenient for scientists because it means we can ignore both of them. Typically, we assume that they will continue to cancel far into the future, which might not be the case – there’s a good chance that developing countries like China and India will reduce their emissions of sulfate aerosols, allowing the non-CO2 greenhouse gases to dominate and cause warming. If this happened, we couldn’t even lump the extra greenhouse gases into the allowable CO2 emissions, because the warming they cause does depend on the exact pathway. For example, methane has such a short atmospheric lifetime that “cumulative methane emissions” is a useless measurement, and certainly isn’t directly proportional to temperature change.

This summer, one of my main projects at UVic is to compare the slope of temperature change vs. cumulative CO2 emissions across different models. As part of the international EMIC intercomparison project that the lab is coordinating, different modelling groups have sent us their measurements of allowable cumulative emissions for 1.5°C, 2°C, 3°C, and 4°C global warming. Right now (quite literally, as I write this) I’m running the same experiments on the UVic model. It’s very exciting to watch the results trickle in. Perhaps my excitement towards the most menial part of climate modelling, watching as the simulation chugs along, is a sign that I’m on the right career path.

Summer Research

I recently started working for the summer, with Andrew Weaver’s research group at the University of Victoria. If you’re studying climate modelling in Canada, this is the place to be. They are a fairly small group, but continually churn out world-class research.

Many of the projects here use the group’s climate model, the UVic ESCM (Earth System Climate Model). I am working with the ESCM this summer, and have previously read most of the code, so I feel pretty well acquainted with it.

The climate models that most people are familiar with are the really complex ones. GCMs (General Circulation Models or Global Climate Models, depending on who you talk to) use high resolution, a large number of physical processes, and relatively few parameterizations to emulate the climate system as realistically as possible. These are the models that take weeks to run on the world’s most powerful supercomputers.

EMICs (Earth System Models of Intermediate Complexity) are a step down in complexity. They run at a lower resolution than GCMs and have more parameterizations. Individual storms and wind patterns (and sometimes ocean currents as well) typically are not resolved – instead, the model predicts the statistics of these phenomena. Often, at least one component (such as sea ice) is two-dimensional.

The UVic ESCM is one of the most complex EMICs – it really sits somewhere between a GCM and an EMIC. It has a moderately high resolution, with a grid of 3.6° longitude by 1.8° latitude (ten thousand squares in all), and 19 vertical layers in the ocean. Its ocean, land, and sea ice components would all belong in a GCM. It even has a sediment component, which simulates processes that most GCMs ignore.

The only reason that the UVic model is considered an EMIC is because of its atmosphere component. This part of the model is two-dimensional and parameterizes most processes. For example, clouds aren’t explicitly simulated – instead, as soon as the relative humidity of a region reaches 85%, the atmospheric moisture falls out as rain (or snow). You would never see this kind of atmosphere in a GCM, and it might seem strange for scientists to deliberately build an unrealistic model. However, this simplified atmosphere gives the UVic ESCM a huge advantage over GCMs: speed.
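As a cartoon of how that kind of parameterization works (a sketch of the idea only – the variable names are mine, and the UVic model’s actual scheme lives in Fortran and is more involved), any moisture above the threshold simply rains out:

```python
RH_MAX = 0.85  # relative humidity threshold above which moisture precipitates

def precipitate(q, q_sat):
    """Toy single-layer moisture scheme. q: specific humidity, q_sat: its
    saturation value at the current temperature. Anything above 85%
    relative humidity falls out; returns (new humidity, precipitation)."""
    rain = max(0.0, q - RH_MAX * q_sat)
    return q - rain, rain

q, rain = precipitate(q=0.018, q_sat=0.020)    # RH = 90% -> rains down to 85%
q2, rain2 = precipitate(q=0.010, q_sat=0.020)  # RH = 50% -> no rain
```

A scheme this simple is obviously a caricature of real cloud physics, but that simplicity is exactly where the model’s speed comes from.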

For example, today I tested out the model with an example simulation. It ran on a Linux cluster with 32 cores, which I accessed remotely from a regular desktop. It took about 7 minutes of real time to simulate each year and record annual averages for several dozen variables. In comparison, many GCMs take an entire day of real time to simulate a year, even while running on a machine with thousands of cores. In a GCM, most of that computational work comes from the atmospheric component, which requires short time steps. Consequently, cutting down on complexity in the atmosphere gives the best return on model efficiency.
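That speed difference compounds quickly over long runs. Some quick arithmetic with the numbers above (7 minutes of real time per model year):

```python
minutes_per_year = 7  # wall-clock time per simulated year on the cluster

century = 100 * minutes_per_year / 60             # hours for a 100-year run
millennium = 1000 * minutes_per_year / (60 * 24)  # days for a 1000-year run
print(f"century: ~{century:.0f} h, millennium: ~{millennium:.1f} days")
```

A century in about half a day, and a millennium in under a week; at a GCM’s pace of a day per model year, that same century would take months of wall-clock time.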

Because the UVic model is so fast, it’s suitable for very long runs. Simulating a century is an “overnight job”, and several millennia is no big deal (especially if you run it on WestGrid). As a result, long-term processes have come to dominate the research in this lab: carbon cycle feedbacks, sensitivity studies, circulation in the North Atlantic. It simply isn’t feasible to simulate these millennial-scale processes on a GCM – so, by sacrificing complexity, we’re able to open up brand new areas of research. Perfectly emulating the real world isn’t actually the goal of most climate modelling.

Of course, the UVic ESCM is imperfect. Like all models, it has its quirks – an absolute surface temperature that’s a bit too low, projections of ocean heat uptake that are a bit too high. It doesn’t give reliable projections of regional climate, so you can only really use globally or hemispherically averaged quantities. It’s not very good at decadal-scale projection. However, other models are suitable for these short-term and small-scale simulations: the same GCMs that suffer when it comes to speed. In this way, climate models perform “division of labour”. By developing many different models of varying complexity, we can make better use of the limited computer power available to us.

I have several projects lined up for the summer, and right now I’m reading a lot of papers to familiarize myself with the relevant sub-fields. There have been some really cool discoveries in the past few years that I wasn’t aware of. I have lots of ideas for posts to write about these papers, as well as the projects I’m involved in, so check back often!

Wrapping Up

My summer job as a research student of Steve Easterbrook is nearing an end. All of a sudden, I only have a few days left, and the weather is (thankfully) cooling down as autumn approaches. It feels like just a few weeks ago that this summer was beginning!

Over the past three months, I examined seven different GCMs from Canada, the United States, and Europe. Based on the source code, documentation, and correspondence with scientists, I uncovered the underlying architecture of each model. This was represented in a set of diagrams. You can view full-sized versions here:

The component bubbles are to scale (based on the size of the code base) within each model, but not between models. The size and complexity of each GCM varies greatly, as can be seen below. UVic is by far the least complex model – it is arguably closer to an EMIC than a full GCM.

I came across many insights while comparing GCM architectures – how modular the components are, how extensively the coupler is used, and how complexity is distributed between components. I wrote some of these observations up into the poster I presented last week to the computer science department. My references can be seen here.

A big thanks to the scientists who answered questions about their work developing GCMs: Gavin Schmidt (Model E); Michael Eby (UVic); Tim Johns (HadGEM3); Arnaud Caubel, Marie-Alice Foujols, and Anne Cozic (IPSL); and Gary Strand (CESM). Additionally, Michael Eby from the University of Victoria was instrumental in improving the diagram design.

Although the summer is nearly over, our research certainly isn’t. I have started writing a more in-depth paper that Steve and I plan to develop during the year. We are also hoping to present our work at the upcoming AGU Fall Meeting, if our abstract gets accepted. Beyond this project, we are also looking at a potential experiment to run on CESM.

I guess I am sort of a scientist now. The line between “student” and “scientist” is blurry. I am taking classes, but also writing papers. Where does one end and the other begin? Regardless of where I am on the spectrum, I think I’m moving in the right direction. If this is what Doing Science means – investigating whatever little path interests me – I’m certainly enjoying it.