How does the Weddell Polynya affect Antarctic ice shelves?

The Weddell Polynya is a large hole in the sea ice of the Weddell Sea, near Antarctica. It occurs only very rarely in observations, but is extremely common in ocean models, many of which simulate a near-permanent polynya. My new paper, published today in the Journal of Climate, finds that the Weddell Polynya increases melting beneath the nearby Filchner-Ronne Ice Shelf. This means it’s important to fix the polynya problems in ocean models if we want to use them to study ice shelves.

The Southern Ocean surrounding Antarctica is cold at the surface – often so cold that it freezes to form sea ice – but warmer below. The deep ocean is about 1°C, which might not sound warm to you, but to Antarctic oceanographers this is positively balmy. If regions of the Southern Ocean start to convect, with strong top-to-bottom mixing, the warm deep water will come to the surface and melt the sea ice.

In observations, this doesn’t happen very often, and it only seems to happen in one region: the Weddell Sea, in the Atlantic sector of the Southern Ocean. Satellites spotted a large polynya (about the size of the UK) for three winters in a row, from 1974-1976. But then the Weddell Polynya disappeared until 2017, when a much smaller polynya (about a tenth of the size) showed up for a few months in the spring. We haven’t seen it since.


The Weddell Polynya in the winter of 1975. (Holland et al., 2001)


The Weddell Polynya in the spring of 2017. (NSIDC)

By contrast, models of the Southern Ocean simulate Weddell Polynyas very enthusiastically. In many ocean models, it’s a near-permanent feature of the Weddell Sea, and is often much larger than the observed polynya from the 1970s. This can happen very easily if the model’s surface waters are slightly too salty, which makes them dense enough to sink, triggering top-to-bottom convection. We also think it might have something to do with imperfect vertical mixing schemes.
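To give a sense of how little salt this takes, here’s a quick back-of-envelope check using the Python gsw package (my choice of tool, and the numbers are just illustrative of cold Weddell Sea surface water, not output from any particular model):

```python
# How much does a small surface salinity bias change density? A quick check
# with the TEOS-10 gsw package; the values are illustrative, not model output.
import gsw

CT = -1.8                              # Conservative Temperature (deg C), near freezing
sigma_obs = gsw.sigma0(34.5, CT)       # potential density anomaly (kg/m^3)
sigma_biased = gsw.sigma0(34.6, CT)    # the same water, but 0.1 g/kg too salty

print(f"Density increase from a 0.1 g/kg salt bias: "
      f"{sigma_biased - sigma_obs:.3f} kg/m^3")
# Roughly 0.08 kg/m^3, which is the same order as the weak density contrast
# between the cold surface layer and the warm deep water below. That's why a
# small salty bias can be enough to tip a model into open-ocean convection.
```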

It’s a rite of passage for Southern Ocean modellers that sooner or later you will work with a model that forms massive polynyas, all the time, and you can’t make them go away. I spent months and months on this during my PhD, and eventually I gave up and did “surface salinity restoring” to prevent the salty bias from forming. Basically, I killed it with freshwater. If you throw enough freshwater at this problem, the problem will go away.

So when the little Weddell Polynya of 2017 showed up, I was paying attention. And when the worldwide oceanography community jumped on the idea and started publishing lots of papers about the Weddell Polynya, I was paying attention. But soon I noticed that there was an important question nobody was trying to answer: what does the Weddell Polynya mean for Antarctic ice shelves?

Ice shelves are the floating edges of the Antarctic Ice Sheet. They’re in direct contact with the ocean, and they slow down the flow of the glaciers behind them. Ice shelves are what stand between us and massive sea level rise, so we should give them our respect. But ocean modellers have largely neglected them until now, because ice shelf cavities – the pockets of ocean between the ice shelf and the seafloor – are quite difficult to model. This is changing as supercomputers improve and high resolution becomes more affordable. More and more ocean models are adding ice shelf cavities to their simulations, and calculating melt rates at the ice-ocean interface. So if it turns out that the Weddell Polynya contaminates these ice shelf cavities, it would be even more important to fix the models’ polynya biases. It would also be interesting from an observational perspective, especially if the polynya shows up again soon.

At the time I started wondering about the Weddell Polynya and ice shelves, I was conveniently already setting up a new model of the Weddell Sea, which includes ice shelves. This model doesn’t produce Weddell Polynyas spontaneously, and for that I am eternally grateful. But I found a way to create “idealised” polynyas in the model, by choosing particular regions and forcing the model to convect there, whether or not it wanted to. This way I had control over where the polynyas occurred, how large they were, and how long they stayed open. I could run simulations with polynyas, compare them to a simulation with no polynyas, and see how the ice shelf cavities were affected.
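The paper spells out the actual implementation, but to make the idea concrete, here’s a toy sketch of what “forcing the model to convect” in a chosen region could look like: take every water column inside a hypothetical polynya mask and replace its temperature and salinity with their depth-averaged values, i.e. impose complete top-to-bottom mixing. Everything below (arrays, mask, numbers) is made up for illustration and is not the model code:

```python
# Toy sketch (not the actual model code) of forcing convection in a chosen
# region: inside a polynya mask, replace each column's temperature and
# salinity with their thickness-weighted vertical means, i.e. impose complete
# top-to-bottom mixing. All arrays and the mask itself are hypothetical.
import numpy as np

nz, ny, nx = 50, 100, 100
temp = np.random.uniform(-1.8, 1.0, (nz, ny, nx))   # stand-in 3D temperature
salt = np.random.uniform(34.3, 34.7, (nz, ny, nx))  # stand-in 3D salinity
dz = np.full(nz, 40.0)                              # layer thicknesses (m)

polynya_mask = np.zeros((ny, nx), dtype=bool)
polynya_mask[40:60, 40:60] = True                   # chosen "polynya" region

def mix_columns(field, dz, mask):
    """Set masked columns to their thickness-weighted vertical mean."""
    col_mean = np.average(field, axis=0, weights=dz)  # shape (ny, nx)
    mixed = field.copy()
    mixed[:, mask] = col_mean[mask]                   # broadcast down the column
    return mixed

temp = mix_columns(temp, dz, polynya_mask)
salt = mix_columns(salt, dz, polynya_mask)
```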

I found that Weddell Polynyas do increase melt rates beneath nearby ice shelves. This happens because the polynyas cause density changes in the ocean, which allows more warm, salty deep water to flow onto the Antarctic continental shelf. The melt rates increase the larger the polynya gets, and the longer it stays open. This is bad news for Southern Ocean models with massive, permanent polynyas.

First I looked at the Filchner-Ronne Ice Shelf (FRIS), the second-largest ice shelf in Antarctica, and the focus of my Weddell Sea research these days. On the continental shelf in front of FRIS, the sea ice formation is so strong that the warm signal from the Weddell Polynya gets wiped out. The water ends up at the surface freezing point anyway, and the extra heat is lost to the atmosphere. But the salty signal is still there, and these salinity changes cause the ocean currents beneath FRIS to speed up. Stronger circulation means stronger ice shelf melting, in this case by up to 30% for the largest Weddell Polynyas.

For smaller ice shelves in the Eastern Weddell Sea, the nearby sea ice formation is weaker. So both the warm signal and the salty signal from the Weddell Polynya are preserved, and the ice shelf cavities are flooded with warmer, saltier water. Melting beneath these ice shelves increases by up to 80%.

The modelled changes are smaller for Weddell Polynyas which match observations, in terms of size as well as duration. So if the Weddell Polynya of the 1970s affected the FRIS cavity, it probably wasn’t by very much. And the effect of the little 2017 polynya was probably so small that we’ll never detect it.

However, these results should send a message to Southern Ocean modellers: you really need to fix your polynya problem if you want to model ice shelf cavities. I’m sorry.


With a Little Help from the Elephant Seals

A problem which has plagued oceanography since the very beginning is a lack of observations. We envy atmospheric scientists with their surface stations and satellite data that monitor virtually the entire atmosphere in real time. Until very recently, all that oceanographers had to work with were measurements taken by ships. This data was very sparse in space and time, and was biased towards certain ship tracks and seasons.

A lack of observations makes life difficult for ocean modellers, because there is very little to compare the simulations to. You can’t have confidence in a model if you have no way of knowing how well it’s performing, and you can’t make many improvements to a model without an understanding of its shortcomings.

Our knowledge of the ocean took a giant leap forward in 2000, when a program called Argo began. “Argo floats” are smallish instruments floating around in the ocean that control their own buoyancy, rising and sinking between the surface and about 2000 m depth. They use a CTD sensor to measure Conductivity (from which you can easily calculate salinity), Temperature, and Depth. Every 10 days they surface and send these measurements to a satellite. Argo floats are battery-powered and last for about 4 years before losing power. After this point they are sacrificed to the ocean, because collecting them would be too expensive.
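If you’re curious what the conductivity-to-salinity step looks like in practice, here’s a minimal example using the Python gsw package, which implements the standard TEOS-10 routines (the library choice and the numbers are mine, not anything specified by the Argo program):

```python
# Practical Salinity from a CTD's raw measurements, using the TEOS-10 gsw
# package (pip install gsw). The numbers are illustrative, not real Argo data.
import gsw

C = 42.914   # conductivity (mS/cm), roughly that of standard seawater
t = 15.0     # in-situ temperature (deg C, ITS-90)
p = 0.0      # sea pressure (dbar); 0 at the surface

SP = gsw.SP_from_C(C, t, p)
print(f"Practical Salinity: {SP:.3f}")   # close to 35 for these values
```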

This is what an Argo float looks like while it’s being deployed:

With at least 27 countries helping with deployment, the number of active Argo floats is steadily rising. At the time of this writing, there were 3748 in operation, with good coverage everywhere except in the polar oceans:

The result of this program is a massive amount of high-quality, high-resolution data for temperature and salinity in the surface and intermediate ocean. A resource like this is invaluable for oceanographers, analogous to the global network of weather stations used by atmospheric scientists. It allows us to better understand the current state of the ocean, to monitor trends in temperature and salinity as climate change continues, and to assess the skill of ocean models.

But it’s still not good enough. There are two major shortcomings to Argo floats. First, they can’t withstand the extreme pressure in the deep ocean, so they don’t sink below about 2000 m depth. Since the average depth of the world’s oceans is around 4000 m, the Argo program is only sampling the upper half. Fortunately, a new program called Deep Argo has developed floats which can withstand pressures down to 6000 m depth, covering all but the deepest ocean trenches. Last June, two prototypes were successfully deployed off the coast of New Zealand, and the data collected so far is looking good. If all future Argo floats were of the Deep Argo variety, in five or ten years we would know as much about the deep ocean’s temperature and salinity structure as we currently know about the surface. To oceanographers, particularly those studying bottom water formation and transport, there is almost nothing more exciting than this prospect.

The other major problem with Argo floats is that they can’t handle sea ice. Even if they manage to get underneath the ice by drifting in sideways, the next time they rise to the surface they will bash into the underside of the ice, get stuck, and stay there until their battery dies. This is a major problem for scientists like me who study the Southern Ocean (surrounding Antarctica), which is largely covered with sea ice for much of the year. This ocean will be incredibly important for sea level rise, because the easiest way to destabilise the Antarctic Ice Sheet is to warm up the ocean and melt the ice shelves (the edges of the ice sheet which extend over the ocean) from below. But we can’t monitor this process using Argo data, because there is a big gap in observations over the region. There’s always the manual option – sending in scientists to take measurements – but this is very expensive, and nobody wants to go there in the winter.

Instead, oceanographers have recently teamed up with biologists to try another method of data collection, which is just really excellent:

They are turning seals into Argo floats that can navigate sea ice.

Southern elephant seals swim incredible distances in the Southern Ocean, and often dive as far as 2000 m below the surface. Scientists are utilising the seals’ natural talents to fill in the gaps in the Argo network, so far with great success. Each seal is tranquilized while a miniature CTD is glued to the fur on its head, after which it is released back into the wild. As the seal swims around, the sensors take measurements and communicate with satellites just like regular Argo floats. The next time the seal sheds its coat (once per year), the CTD falls off and the seal gets on with its life, probably wondering what that whole thing was about.

This project is relatively new and it will be a few years before it’s possible to identify trends in the data. It’s also not clear whether or not the seals tend to swim right underneath the ice shelves, where observations would be most useful. But if this dataset gains popularity among oceanographers, and seals become officially integrated into the Argo network…

…then we will be the coolest scientists of all.

The PETM

Lately I have been reading a lot about the Paleocene-Eocene Thermal Maximum, or PETM, which is my favourite paleoclimatic event (is it weird to have a favourite?). This episode of rapid global warming 55 million years ago is particularly relevant to our situation today, because it was clearly caused by greenhouse gases. Unfortunately, the rest of the story is far less clear.

Paleocene mammals

The PETM happened about 10 million years after the extinction that killed the dinosaurs. The Age of Mammals was well underway, although humans wouldn’t appear in any form for roughly another 50 million years. To start with, the world was already several degrees warmer than it is today. Sea levels would have been higher, and there were probably no polar ice caps.

Then, over several thousand years, the world warmed by between 5 and 8°C. It seems to have happened in a few bursts, against a background of slower temperature increase. Even the deep ocean, usually a very stable thermal environment, warmed by at least 5°C. It took around a hundred thousand years for the climate system to recover.

Such rapid global warming hasn’t been seen since, although it’s possible (probable?) that human-caused warming will surpass this rate, if it hasn’t already. It is particularly troubling to realize that our species has never before experienced an event like the one we’re causing today. The climate has changed before, but humans generally weren’t there to see it.

The PETM is marked in the geological record by a sudden jump in the amount of “light” carbon in the climate system. Carbon comes in different isotopes, two of which are most important for climate analysis: carbon with 7 neutrons (13C), and carbon with 6 neutrons (12C). Different carbon cycle processes sequester these forms of carbon in different amounts. Biological processes like photosynthesis preferentially take 12C out of the air in the form of CO2, while geological processes like subduction of the Earth’s crust take anything that’s part of the rock. When the carbon comes back up, the ratios of 12C to 13C are preserved: emissions from the burning of fossil fuels, for example, are relatively “light” because they originated from the tissues of living organisms; emissions from volcanoes are more or less “normal” because they came from molten crust that was once the ocean floor.

In order to explain the isotopic signature of the PETM, you need to add to the climate system either a massive amount of carbon that’s somewhat enriched in light carbon, or a smaller amount of carbon that’s extremely enriched in light carbon, or (most likely) something in the middle. The carbon came in the form of CO2, or possibly CH4 that soon oxidized to form CO2. That, in turn, almost certainly caused the warming.

There was a lot of warming, though, so there must have been a great deal of carbon. We don’t know exactly how much, because the warming power of CO2 depends on how much is already present in the atmosphere, and estimates for initial CO2 concentration during the PETM vary wildly. However, the carbon injection was probably something like 5 trillion tonnes. This is comparable to the amount of carbon we could emit today from burning all our fossil fuel reserves. That’s a heck of a lot of carbon, and what nobody can figure out is where it all came from.
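To see how this trade-off works, here’s a rough isotope mass balance. The reservoir size, excursion, and end-member values are illustrative assumptions on my part (ballpark figures, not numbers from any particular paper), but they show why a very “light” source needs far less carbon than a heavier one:

```python
# Rough isotope mass balance: how much carbon must be added to shift the
# delta-13C of the whole ocean-atmosphere-biosphere reservoir by a given
# amount? Reservoir size, excursion, and end-member values are illustrative
# assumptions, not numbers from any specific study.

M_res = 50_000.0    # exchangeable carbon reservoir (Gt C), assumed
d_res = 0.0         # its initial delta-13C (permil), assumed
excursion = -3.0    # roughly PETM-sized carbon isotope excursion (permil)

def carbon_needed(d_source):
    """Mass (Gt C) of a source with delta-13C = d_source needed for the excursion."""
    d_final = d_res + excursion
    # mass balance: (M_res*d_res + M_add*d_source) / (M_res + M_add) = d_final
    return M_res * (d_final - d_res) / (d_source - d_final)

for name, d in [("methane hydrates", -60.0), ("organic carbon", -25.0),
                ("volcanic CO2", -6.0)]:
    print(f"{name:16s} (delta-13C {d:+.0f} permil): ~{carbon_needed(d):,.0f} Gt C")
# Very light sources (hydrates) need only a couple of thousand Gt C; heavier
# sources need several times more, and volcanic CO2 alone is implausibly large.
```

With these particular assumptions, the “something in the middle” answer lands in the same few-trillion-tonne range mentioned above.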

Arguably the most popular hypothesis is methane hydrates. On continental shelves, methane gas (CH4) is frozen into the ocean floor. Microscopic cages of water contain a single molecule of methane each, but when the water melts the methane is released and bubbles up to the surface. Today there are about 10 trillion tonnes of carbon stored in methane hydrates. In the PETM the levels were lower, but nobody is sure by how much.

The characteristics of methane hydrates seem appealing as an explanation for the PETM. They are very enriched in 12C, meaning a smaller amount would be needed to cause the isotopic shift. They discharge rapidly and build back up slowly, mirroring the sudden onset and slow recovery of the PETM. The main problem with the methane hydrate hypothesis is that there might not have been enough of them to account for the warming observed in the fossil record.

However, remember that in order to release their carbon, methane hydrates must first warm up enough to melt. So some other agent could have started the warming, which then triggered the methane release and the sudden bursts of warming. There is no geological evidence for any particular source – everything is speculative, except for the fact that something spat out all this CO2.

Magnified foraminifera

Don’t forget that where there is greenhouse warming, there is ocean acidification. The ocean is great at soaking up greenhouse gases, but this comes at a cost to organisms that build shells out of calcium carbonate (CaCO3, the same chemical that makes up chalk). CO2 in the water forms carbonic acid, which starts to dissolve their shells. Likely for this reason, the PETM caused a mass extinction of benthic foraminifera (foraminifera = microscopic organisms with CaCO3 shells; benthic = lives on the ocean floor).

Other groups of animals seemed to do okay, though. There was a lot of rearranging of habitats – species would disappear in one area but flourish somewhere else – but no mass extinction like the one that killed the dinosaurs. The fossil record can be deceptive in this manner, though, because it only preserves a small number of species. By sheer probability, the most abundant and widespread organisms are most likely to appear in the fossil record. There could be many organisms that were less common, or lived in restricted areas, that went extinct without leaving any signs that they ever existed.

Climate modellers really like the PETM, because it’s a historical example of exactly the kind of situation we’re trying to understand using computers. If you add a few trillion tonnes of carbon to the atmosphere in a relatively short period of time, how much does the world warm and what happens to its inhabitants? The PETM ran this experiment for us in the real world, and can give us some idea of what to expect in the centuries to come. If only it had left more data behind for us to discover.


Climate Change and Atlantic Circulation

Today my very first scientific publication is appearing in Geophysical Research Letters. During my summer at UVic, I helped out with a model intercomparison project regarding the effect of climate change on Atlantic circulation, and was listed as a coauthor on the resulting paper. I suppose I am a proper scientist now, rather than just a scientist larva.

The Atlantic meridional overturning circulation (AMOC for short) is an integral part of the global ocean conveyor belt. In the North Atlantic, a massive amount of water near the surface, cooling down on its way to the poles, becomes dense enough to sink. From there it goes on a thousand-year journey around the world – inching its way along the bottom of the ocean, looping around Antarctica – before finally warming up enough to rise back to the surface. A whole multitude of currents depend on the AMOC, most famously the Gulf Stream, which keeps Europe pleasantly warm.

Some have hypothesized that climate change might shut down the AMOC: the extra heat and freshwater (from melting ice) coming into the North Atlantic could conceivably lower the density of surface water enough to stop it sinking. This happened as the world was coming out of the last ice age, in an event known as the Younger Dryas: a huge ice sheet over North America suddenly gave way, drained into the North Atlantic, and shut down the AMOC. Europe, cut off from the Gulf Stream and at the mercy of the ice-albedo feedback, experienced another thousand years of glacial conditions.

A shutdown today would not lead to another ice age, but it could cause some serious regional cooling over Europe, among other impacts that we don’t fully understand. Today, though, there’s a lot less ice to start with. Could the AMOC still shut down? If not, how much will it weaken due to climate change? So far, scientists have answered these two questions with “probably not” and “something like 25%” respectively. In this study, we analysed 30 climate models (25 complex CMIP5 models, and 5 smaller, less complex EMICs) and came up with basically the same answer. It’s important to note that none of the models include dynamic ice sheets (computational glacial dynamics is a headache and a half), which might affect our results.

Models ran the four standard RCP experiments from 2006-2100. Not every model completed every RCP, and some extended their simulations to 2300 or 3000. In total, there were over 30 000 model years of data. We measured the “strength” of the AMOC using the standard unit Sv (Sverdrups), where each Sv is 1 million cubic metres of water per second.
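For anyone who hasn’t computed this before, here’s a rough sketch of how AMOC strength is typically diagnosed from model output: zonally integrate the meridional velocity across the Atlantic, cumulatively integrate in depth to get the overturning streamfunction, and take its maximum. The arrays below are random stand-ins, not data from any of the models we analysed:

```python
# Sketch of the usual AMOC-strength diagnostic: zonally integrate the
# meridional velocity across the Atlantic, cumulatively integrate in depth to
# get the overturning streamfunction, and report its maximum in Sverdrups.
# All fields here are random stand-ins with made-up grid spacing.
import numpy as np

nz, ny, nx = 40, 90, 60
v = np.random.uniform(-0.05, 0.05, (nz, ny, nx))  # meridional velocity (m/s)
dx = np.full((ny, nx), 80e3)                      # zonal grid spacing (m)
dz = np.full(nz, 100.0)                           # layer thickness (m)

v_zonal = (v * dx).sum(axis=-1)                   # zonal integral, (nz, ny), m^2/s
psi = np.cumsum(v_zonal * dz[:, None], axis=0)    # streamfunction, (nz, ny), m^3/s

amoc_strength_sv = psi.max() / 1e6                # 1 Sv = 10^6 m^3/s
print(f"AMOC strength: {amoc_strength_sv:.1f} Sv")
```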

Only two models simulated an AMOC collapse, and only at the tail end of the most extreme scenario (RCP8.5, which quite frankly gives me a stomachache). Bern3D, an EMIC from Switzerland, showed a MOC strength of essentially zero by the year 3000; CNRM-CM5, a GCM from France, stabilized near zero by 2300. In general, the models showed only a moderate weakening of the AMOC by 2100, with best estimates ranging from a 22% drop for RCP2.6 to a 40% drop for RCP8.5 (with respect to preindustrial conditions).

Are these somewhat-reassuring results trustworthy? Or is the Atlantic circulation in today’s climate models intrinsically too stable? Our model intercomparison also addressed that question, using a neat little scalar metric known as Fov: the net amount of freshwater carried by the overturning circulation across the southern boundary of the Atlantic.

The current thinking in physical oceanography is that the AMOC is more or less binary – it’s either “on” or “off”. When AMOC strength is below a certain level (let’s call it A), its only stable state is “off”, and the strength will converge to zero as the currents shut down. When AMOC strength is above some other level (let’s call it B), its only stable state is “on”, and if you were to artificially shut it off, it would bounce right back up to its original level. However, when AMOC strength is between A and B, both conditions can be stable, so whether it’s on or off depends on where it started. This phenomenon is known as hysteresis, and is found in many systems in nature.
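If you want to see hysteresis in action without any ocean physics at all, here’s a toy example: a simple bistable equation whose equilibrium jumps at two different thresholds depending on whether you sweep the forcing up or down. It’s purely illustrative and has nothing to do with the actual AMOC equations:

```python
# Toy demonstration of hysteresis (not ocean physics). The state x obeys
# dx/dt = x - x**3 + F, which is bistable for a range of forcing F; sweeping
# F up and then back down makes the equilibrium jump at different thresholds.
import numpy as np

def equilibrate(F, x0, dt=0.01, steps=5000):
    """Integrate dx/dt = x - x^3 + F to (near) equilibrium, starting from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + F)
    return x

forcings = np.linspace(-1.0, 1.0, 41)
x = -1.0                                                 # start in the "off" state
up = [x := equilibrate(F, x) for F in forcings]          # forcing increasing
down = [x := equilibrate(F, x) for F in forcings[::-1]]  # forcing decreasing

# For intermediate F, the up-sweep and down-sweep settle on different branches:
for F, xu, xd in zip(forcings, up, down[::-1]):
    print(f"F={F:+.2f}  up-sweep x={xu:+.2f}  down-sweep x={xd:+.2f}")
```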

This figure was not part of the paper. I made it just now in MS Paint.

Here’s the key part: when AMOC strength is less than A or greater than B, Fov is positive and the system is monostable. When AMOC strength is between A and B, Fov is negative and the system is bistable. The physical justification for Fov is its association with the salt advection feedback, the sign of which is opposite Fov: positive Fov means the salt advection feedback is negative (i.e. stabilizing the current state, so monostable); a negative Fov means the salt advection feedback is positive (i.e. reinforcing changes in either direction, so bistable).

Most observational estimates (largely ocean reanalyses) have Fov as slightly negative. If models’ AMOCs really were too stable, their Fov values should be positive. In our intercomparison, we found both positives and negatives – the models were kind of all over the place with respect to Fov. So maybe some models are overly stable, but certainly not all of them, or even the majority.

As part of this project, I got to write a new section of code for the UVic model, which calculated Fov each timestep and included the annual mean in the model output. Software development on a large, established project with many contributors can be tricky, and the process involved a great deal of head-scratching, but it was a lot of fun. Programming is so satisfying.
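The UVic code itself is Fortran and belongs to the model, so I won’t reproduce it here, but here’s a rough Python sketch of the Fov diagnostic as it’s usually defined in the literature: the zonally integrated baroclinic velocity times the zonal-mean salinity anomaly, integrated over depth at the Atlantic’s southern boundary. The arrays are made up:

```python
# Rough sketch of the Fov diagnostic as it's commonly defined in the
# literature (freshwater transport by the overturning across a section near
# 34S), NOT the actual Fortran added to the UVic model. The input arrays are
# stand-ins for a single zonal section through the South Atlantic.
import numpy as np

nz, nx = 40, 60
v = np.random.uniform(-0.05, 0.05, (nz, nx))   # meridional velocity (m/s)
S = np.random.uniform(34.0, 36.0, (nz, nx))    # salinity (psu)
dx = np.full(nx, 80e3)                         # zonal spacing (m)
dz = np.full(nz, 100.0)                        # layer thickness (m)
S0 = 35.0                                      # reference salinity

width = dx.sum()                               # section width (m)
V = (v * dx).sum(axis=1)                       # zonal integral of v, (nz,), m^2/s

# Remove the section-mean (barotropic) velocity so the profile carries no net
# volume transport, leaving only the overturning component
v_bar = (V * dz).sum() / (width * dz.sum())    # section-mean velocity (m/s)
V_star = V - v_bar * width                     # baroclinic part, (nz,), m^2/s

S_mean = (S * dx).sum(axis=1) / width          # zonal-mean salinity at each depth

Fov = -(1.0 / S0) * (V_star * (S_mean - S0) * dz).sum()   # m^3/s
print(f"Fov = {Fov / 1e6:.3f} Sv")
# With the usual sign convention, positive Fov means the overturning carries
# freshwater into the Atlantic (the monostable case above); negative Fov means
# the reverse (the bistable case).
```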

Beyond that, my main contribution to the project was creating the figures and calculating the multi-model statistics, which got a bit unwieldy as the model count approached 30, but we made it work. I am now extremely well-versed in IDL graphics keywords, which I’m sure will come in handy again. Unfortunately I don’t think I can reproduce any figures here, as the paper’s not open-access.

I was pretty paranoid while coding and doing calculations, though – I kept worrying that I would make a mistake, never catch it, and have it dredged out by contrarians a decade later (“Kate-gate”, they would call it). As a climate scientist, I suppose that comes with the job these days. But I can live with it, because this stuff is just so darned interesting.

More on Phytoplankton

On the heels of my last post about iron fertilization of the ocean, I found another interesting paper on the topic. This one, written by Long Cao and Ken Caldeira in 2010, was much less hopeful.

Instead of a small-scale field test, Cao and Caldeira decided to model iron fertilization using the ocean GCM from Lawrence Livermore National Laboratory. To account for uncertainties, they chose to calculate an upper bound on iron fertilization rather than a most likely scenario. That is, they maxed out phytoplankton growth until something else became the limiting factor – in this case, phosphates. On every single cell of the sea surface, the model phytoplankton were programmed to grow until phosphate concentrations were zero.

A 2008-2100 simulation implementing this method was forced with CO2 emissions data from the A2 scenario. An otherwise identical A2 simulation did not include the ocean fertilization, to act as a control. Geoengineering modelling is strange that way, because there are multiple definitions of “control run”: a non-geoengineered climate that is allowed to warm unabated, as well as preindustrial conditions (the usual definition in climate modelling).

Without any geoengineering, atmospheric CO2 reached 965 ppm by 2100. With the maximum amount of iron fertilization possible, these levels only fell to 833 ppm. The mitigation of ocean acidification was also quite modest: the sea surface pH in 2100 was 7.74 without geoengineering, and 7.80 with. Given the potential side effects of iron fertilization, is such a small improvement worth the trouble?
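One thing to keep in mind when reading those pH numbers: pH is a logarithmic scale, so a 0.06 difference isn’t quite as negligible as it first looks.

```python
# pH is logarithmic, so the 0.06 difference in surface pH corresponds to about
# 15% more hydrogen ions in the non-geoengineered run than in the fertilized one.
ph_control, ph_fertilized = 7.74, 7.80
ratio = 10 ** (ph_fertilized - ph_control)
print(f"[H+] control / [H+] fertilized = {ratio:.2f}")   # about 1.15
```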

Unfortunately, the ocean acidification doesn’t end there. Although the problem was lessened somewhat at the surface, deeper layers in the ocean actually became more acidic. There was less CO2 being gradually mixed in from the atmosphere, but another source of dissolved carbon appeared: as the phytoplankton died and sank, they decomposed a little bit and released enough CO2 to cause a net decrease in pH compared to the control run.

In the diagram below, compare the first row (A2 control run) to the second (A2 with iron fertilization). The more red the contours are, the more acidic that layer of the ocean is with respect to preindustrial conditions. The third row contains data from another simulation in which emissions were allowed to increase just enough to offset sequestration by phytoplankton, leading to the same CO2 concentrations as the control run. The general pattern – iron fertilization reduces some acidity at the surface, but increases it at depth – is clear.

depth vs. latitude at 2100 (left); depth vs. time (right)

The more I read about geoengineering, the more I realize how poor the associated cost-benefit ratios might be. The oft-repeated assertion is true: the easiest way to prevent further climate change is, by a long shot, to simply reduce our emissions.

Feeding the Phytoplankton

While many forms of geoengineering involve counteracting global warming with induced cooling, others move closer to the source of the problem and target the CO2 increase. By artificially boosting the strength of natural carbon sinks, it might be possible to suck CO2 emissions right out of the air. Currently around 30% of human emissions are absorbed by these sinks; if we could push that fraction above 100%, atmospheric CO2 concentrations would decline.

One of the most prominent proposals for carbon sink enhancement involves enlisting phytoplankton, photosynthetic organisms in the ocean which take the carbon out of carbon dioxide and use it to build their bodies. When nutrients are abundant, phytoplankton populations explode and create massive blue or green blooms visible from space. Very few animals enjoy eating these organisms, so they just float there for a while. Then they run out of nutrients, die, and sink to the bottom of the ocean, taking the carbon with them.

Phytoplankton blooms are a massive carbon sink, but they still can’t keep up with human emissions. This is because CO2 is not the limiting factor for their growth. In many parts of the ocean, the limiting factor is actually iron. So this geoengineering proposal, often known as “iron fertilization”, involves dumping iron compounds into the ocean and letting the phytoplankton go to work.

A recent study from Germany (see also the Nature news article) tested out this proposal on a small scale. Their field tests took place in the Southern Ocean, which surrounds Antarctica, where a strong circumpolar current kept the iron contained. After adding several tonnes of iron sulphate, the research ship tracked the phytoplankton as they bloomed, died, and sank.

Measurements showed that at least half of the phytoplankton sank below 1 km after they died, and “a substantial portion is likely to have reached the sea floor”. At this depth, which is below the mixed layer of the ocean, the water won’t be exposed to the atmosphere for centuries. The carbon from the phytoplankton’s bodies is safely stored away, without the danger of CO2 leakage that carbon capture and storage presents. Unlike in previous studies, the researchers were able to show that iron fertilization could be effective.

However, there are other potential side effects of large-scale iron fertilization. We don’t know what the impacts of so much iron might be on other marine life. Coating the sea surface with phytoplankton would block light from entering the mixed layer, decreasing photosynthesis in aquatic plants and possibly leading to oxygen depletion or “dead zones”. It’s also possible that toxic species of algae would get a hold of the nutrients and create poisonous blooms. On the other hand, the negative impacts of ocean acidification from high levels of CO2 would be lessened, a problem which is not addressed by solar radiation-based forms of geoengineering.

Evidently, the safest way to fix the global warming problem is to stop burning fossil fuels. Most scientists agree that geoengineering should be a last resort, an emergency measure to pull out if the Greenland ice sheet is about to go, rather than an excuse for nations to continue burning coal. And some scientists, myself included, fully expect that geoengineering will be necessary one day, so we might as well figure out the safest approach.