Tuesday, January 6, 2009

RealClimate FAQ on climate models: Part II

FAQ on climate models: Part II. By Gavin Schmidt
Real Climate, Jan 06, 2009 @ 8:09 AM

[This is a continuation of a previous post including interesting questions from the comments.]

What are parameterisations?

Some physics in the real world that is necessary for a climate model to work is only known empirically. Or perhaps the theory only really applies at scales much smaller than the model grid size. This physics needs to be 'parameterised' i.e. a formulation is used that captures the phenomenology of the process and its sensitivity to change but without going into all of the very small scale details. These parameterisations are approximations to the phenomena that we wish to model, but which work at the scales the models actually resolve. A simple example is the radiation code - instead of using a line-by-line code which would resolve the absorption at over 10,000 individual wavelengths, a GCM generally uses a broad-band approximation (with 30 to 50 bands) which gives very close to the same results as a full calculation. Another example is the formula for the evaporation from the ocean as a function of the large-scale humidity, temperature and wind-speed. This is really a highly turbulent phenomenon, but there are good approximations that give the net evaporation as a function of the large scale ('bulk') conditions. In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are 'tuned' to reproduce the observed processes as much as possible.
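As an illustration of the 'bulk' approach mentioned above, here is a minimal sketch (in Python, purely for illustration) of the standard bulk-aerodynamic form for ocean evaporation, E = ρ C_E U (q_sat(SST) − q_air). The transfer coefficient C_E is exactly the kind of empirically constrained number that gets tuned; the specific values and the saturation formula below are assumptions for the example, not any particular GCM's code.

```python
import math

def saturation_specific_humidity(t_celsius, pressure_hpa=1013.25):
    """Approximate saturation specific humidity (kg/kg) using a
    Magnus-type formula for saturation vapour pressure."""
    e_sat = 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))  # hPa
    return 0.622 * e_sat / (pressure_hpa - 0.378 * e_sat)

def bulk_evaporation(sst_celsius, q_air, wind_speed, rho_air=1.2, c_e=1.3e-3):
    """Bulk-aerodynamic evaporation flux (kg m-2 s-1) from large-scale
    ('bulk') variables: sea surface temperature, near-surface specific
    humidity and wind speed. c_e is a tunable transfer coefficient."""
    q_sat = saturation_specific_humidity(sst_celsius)
    return rho_air * c_e * wind_speed * (q_sat - q_air)

# Example: 25 C ocean under fairly dry air and an 8 m/s wind
print(bulk_evaporation(25.0, 0.012, 8.0))
```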


How are the parameterisations evaluated?

In at least two ways: at the process scale, and at the emergent phenomena scale. For instance, taking one of the two examples mentioned above, the radiation code can be tested against field measurements at specific times and places where the composition of the atmosphere is known, alongside a line-by-line code. It would need to capture the variations seen over time (the daily cycle, weather, cloudiness etc.). This is a test at the level of the actual process being parameterised and is a necessary component in all parameterisations. The more important tests occur when we examine how the parameterisation impacts larger-scale or emergent phenomena. Does changing the evaporation improve the patterns of precipitation? The match of the specific humidity field to observations? etc. This can be an exhaustive set of tests, but again they are mostly necessary. Note that most 'tunings' are done at the process level. Only those that can't be constrained using direct observations of the phenomena are available for tuning to get better large scale climate features. As mentioned in the previous post, there are only a handful of such parameters that get used in practice.
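To make the process-level test concrete, here is a schematic sketch: run the fast parameterisation and a detailed reference calculation (e.g. a line-by-line radiation code) on the same set of observed atmospheric profiles and summarise the differences. The function names are hypothetical stand-ins, not real model interfaces.

```python
import numpy as np

def process_level_test(profiles, fast_scheme, reference_scheme):
    """Compare a parameterisation against a detailed reference calculation
    over many input cases and report summary error statistics."""
    fast = np.array([fast_scheme(p) for p in profiles])
    ref = np.array([reference_scheme(p) for p in profiles])
    return {
        "bias": float(np.mean(fast - ref)),
        "rmse": float(np.sqrt(np.mean((fast - ref) ** 2))),
    }
```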


Are clouds included in models? How are they parameterised?

Models do indeed include clouds, and do allow changes in clouds as a response to forcings. There are certainly questions about how realistic those clouds are and whether they have the right sensitivity - but all models do have them! In general, models suggest that they are a positive feedback - i.e. there is a relative increase in high clouds (which warm more than they cool) compared to low clouds (which cool more than they warm) - but this is quite variable among models and not very well constrained from data.

Cloud parameterisations are amongst the most complex in the models. The large differences in mechanisms for cloud formation (tropical convection, mid-latitude storms, marine stratus decks) require multiple cases to be looked at and many sensitivities to be explored (to vertical motion, humidity, stratification etc.). Clouds also have important micro-physics that determine their properties (such as cloud particle size and phase) and interact strongly with aerosols. Standard GCMs have most of this physics included, and some are even going so far as to embed cloud resolving models in each grid box. These models are supposed to do away with much of the parameterisation (though they too need some, smaller-scale, ones), but at the cost of greatly increased complexity and computation time. Something like this is probably the way of the future.
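As one concrete (and heavily simplified) example of what a large-scale cloud parameterisation can look like, many schemes diagnose a cloud fraction that grows as the grid box approaches saturation, with the critical relative humidity acting as a tunable parameter of the kind discussed earlier. The sketch below follows a Sundqvist-style form; it is an illustration of the general approach rather than the scheme used in any particular GCM.

```python
import math

def cloud_fraction(rel_humidity, rh_crit=0.8):
    """Diagnostic large-scale cloud fraction from grid-mean relative
    humidity (0-1). No cloud below the critical value, fully cloudy at
    saturation; rh_crit is a tunable parameter."""
    if rel_humidity <= rh_crit:
        return 0.0
    if rel_humidity >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - rel_humidity) / (1.0 - rh_crit))

for rh in (0.7, 0.85, 0.95, 1.0):
    print(rh, round(cloud_fraction(rh), 2))
```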


What is being done to address the considerable uncertainty associated with cloud and aerosol forcings?

As alluded to above, cloud parameterisations are becoming much more detailed and are being matched to an ever larger amount of observations. However, there are still problems in getting sufficient data to constrain the models. For instance, it's only recently that separate diagnostics for cloud liquid water and cloud ice have become available. We still aren't able to distinguish different kinds of aerosols from satellites (though maybe by this time next year).

However, none of this is to say that clouds are a done deal - they certainly aren't. In both cloud and aerosol modelling the current approach is to get as wide a spectrum of approaches as possible and to discern what is and what is not robust among those results. Hopefully soon we will start converging on the approaches that are the most realistic, but we are not there yet.

Forcings over time are a slightly different issue, and there it is likely that substantial uncertainties will remain because of the difficulty in reconstructing the true emission data for periods more than a few decades back. That involves making pretty unconstrained estimates of the efficiency of 1930s technology (for instance) and 19th Century deforestation rates. Educated guesses are possible, but independent constraints (such as particulates in ice cores) are partial at best.


Do models assume a constant relative humidity?

No. Relative humidity is a diagnostic of the models' temperature and water distribution and will vary according to the dynamics, convection etc. However, many processes that remove water from the atmosphere (i.e. cloud formation and rainfall) have a clear functional dependence on the relative humidity rather than the total amount of water (i.e. clouds form when air parcels are saturated at their local temperature, not when humidity reaches X g/m3). This leads to the phenomenon, observed in the models and the real world, that long-term mean relative humidity is pretty stable. In models it varies by a couple of percent over temperature changes that lead to specific humidity (the total amount of water) changing by much larger amounts. Thus a good estimate of the model relative humidity response is that it is roughly constant, similar to the situation seen in observations. But this is a derived result, not an assumption. You can see for yourself here (select Relative Humidity (%) from the diagnostics).
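To see why a roughly constant relative humidity still implies large changes in the total amount of water vapour, a back-of-the-envelope calculation using a Magnus-type approximation for saturation vapour pressure is enough (a sketch for illustration, not model code):

```python
import math

def e_sat(t_celsius):
    """Saturation vapour pressure (hPa), Magnus-type approximation."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

t0, warming, rh = 15.0, 3.0, 0.7  # baseline temperature (C), warming (K), fixed RH
q0 = rh * e_sat(t0)               # proportional to specific humidity
q1 = rh * e_sat(t0 + warming)
print("water vapour increase: %.0f%%" % (100 * (q1 / q0 - 1)))
# about 6-7% per degree of warming, even though relative humidity is unchanged
```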


What are boundary conditions?

These are the basic data input into the models that define the land/ocean mask, the height of the mountains, river routing and the orbit of the Earth. For standard models additional inputs are the distribution of vegetation types and their properties, soil properties, and mountain glacier, lake, and wetland distributions. In more sophisticated models some of what were boundary conditions in simpler models have now become prognostic variables. For instance, dynamic vegetation models predict the vegetation types as a function of climate. Other examples in a simple atmospheric model might be the distribution of ozone or the level of carbon dioxide. In more complex models that calculate atmospheric chemistry or the carbon cycle, the boundary conditions would instead be the emissions of ozone precursors or anthropogenic CO2. Variations in these boundary conditions (for whatever reason) will change the climate simulation and can be considered forcings in the most general sense (see the next few questions).
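A hypothetical illustration of this distinction (the names below are made up for the example, not taken from any real model's input files): what counts as a boundary condition depends on what a given configuration computes for itself.

```python
# In a simple atmospheric model, ozone and CO2 are prescribed inputs...
simple_atmosphere = {
    "boundary_conditions": [
        "land_ocean_mask", "orography", "river_routing", "orbital_parameters",
        "vegetation_distribution", "ozone_distribution", "co2_concentration",
    ],
    "prognostic_variables": ["temperature", "winds", "humidity", "clouds"],
}

# ...while in a model with interactive chemistry and a carbon cycle the
# boundary conditions are instead the emissions, and ozone and CO2 become
# prognostic variables calculated by the model itself.
coupled_chemistry_carbon = {
    "boundary_conditions": [
        "land_ocean_mask", "orography", "river_routing", "orbital_parameters",
        "ozone_precursor_emissions", "anthropogenic_co2_emissions",
    ],
    "prognostic_variables": ["temperature", "winds", "humidity", "clouds",
                             "ozone_distribution", "co2_concentration"],
}
```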


Does the climate change if the boundary conditions are stable?

The answer to this question depends very much on perspective. On the longest timescales a climate model with constant boundary conditions is stable - that is, the mean properties and their statistical distribution don't vary. However, the spectrum of variability can be wide, and so there are variations from one decade to the next, and from one century to the next, that are the result of internal variations in (for instance) the ocean circulation. While the long term stability is easy to demonstrate in climate models, it can't be unambiguously determined whether this is true in the real world, since boundary conditions are always changing (albeit slowly most of the time).


Does the climate change if boundary conditions change?

Yes. If any of the factors that influence the simulation change, there will be a response in the climate. It might be large or small, but it will always be detectable if you run the model for long enough. For example, making the Rockies smaller (as they were a few million years ago) changes the planetary wave patterns and the temperature patterns downstream. Changing the ozone distribution changes temperatures, the height of the tropopause and stratospheric winds. Changing the land-ocean mask (because of sea level rise or tectonic changes for instance) changes ocean circulation, patterns of atmospheric convection and heat transports.


What is a forcing then?

The most straightforward definition is simply that a forcing is a change in any of the boundary conditions. Note however that this definition is not absolute with respect to any particular bit of physics. Take ozone for instance. In a standard atmospheric model, the ozone distribution is fixed and any change in that fixed distribution (because of stratospheric ozone depletion, tropospheric pollution, or changes over a solar cycle) would be a forcing causing the climate to change. In a model that calculates atmospheric chemistry, the ozone distribution is a function of the emissions of chemical precursors, the solar UV input and the climate itself. In such a model, ozone changes are a response (possibly leading to a feedback) to other imposed changes. Thus it doesn't make sense to ask whether ozone changes are or aren't a forcing without discussing what kind of model you are talking about.

There is however a default model setup in which many forcings are considered. This is not always stated explicitly and leads to (somewhat semantic) confusion even among specialists. This setup consists of an atmospheric model with a simple mixed-layer ocean model, but without chemistry, aerosol, vegetation or dynamic ice sheet modules. Not coincidentally, this corresponds to the state-of-the-art of climate models around 1980, when the first comparisons of different forcings started to be done. It persists in the literature all the way through to the latest IPCC report (figure xx). However, there is a good reason for this, and that is the observation that different forcings that have equal 'radiative' impacts have very similar responses. This allows many different forcings to be compared in magnitude and added up.

The 'radiative forcing' is calculated (roughly) as the net change in radiative fluxes (both short wave and long wave) at the top of the atmosphere when a component of the default model set-up is changed. Increased solar irradiance is an easy radiative forcing to calculate, as is the value for well-mixed greenhouse gases. The direct effect of aerosols (the change in reflectance and absorption) is also easy (though uncertain due to the distributional uncertainty), while the indirect effect of aerosols on clouds is a little trickier. However, some forcings in the general sense defined above don't have an easy-to-calculate 'radiative forcing' at all. What is the radiative impact of opening the Isthmus of Panama? Or of the collapse of Lake Agassiz? Yet both of these examples have large impacts on the models' climate. Some other forcings have a very small global radiative forcing and yet lead to large impacts (orbital changes for instance) through components of the climate that aren't included in the default set-up. This isn't a problem for actually modelling the effects, but it does make comparing them to other forcings without doing the calculations a little more tricky.
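As a concrete example of how one of the 'easy' radiative forcings is estimated in practice, the widely quoted simplified expression for well-mixed CO2 (Myhre et al., 1998) relates the top-of-atmosphere flux change to the concentration ratio. The snippet below is a back-of-the-envelope illustration, not a substitute for the full radiative transfer calculation:

```python
import math

def co2_radiative_forcing(c_new_ppm, c_ref_ppm=280.0):
    """Approximate radiative forcing (W/m2) for a change in CO2,
    using the simplified expression dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

print(co2_radiative_forcing(560.0))  # doubling pre-industrial CO2: ~3.7 W/m2
```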


What are the differences between climate models and weather models?

Conceptually they are very similar, but in practice they are used very differently. Weather models use as much data as is available to start off close to the current weather situation and then use their knowledge of physics to step forward in time. This has good skill for a few days and some skill for a little longer. Because they are run for short periods of time only, they tend to have much higher resolution and more detailed physics than climate models (but note that the Hadley Centre, for instance, uses the same model for climate and weather purposes). Weather models develop in ways that improve the short term predictions, though the impact for long term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models. There are many current attempts to improve the short-term predictability in climate models in line with the best weather models, though it is unclear what impact that will have on projections.


How are solar variations represented in the models?

This varies a lot because of uncertainties in the past record and complexities in the responses. But given a particular estimate of solar activity there are a number of modelled responses. First, the total amount of solar radiation (TSI) can be varied - this changes the total amount of energy coming into the system and is very easy to implement. Second, the variations over the solar cycle at different frequencies (from the UV to the near infra-red) don't all have the same amplitude - UV changes are about 10 times as large as those in the total irradiance. Since UV is mostly absorbed by ozone in the stratosphere, including these changes increases the magnitude of the solar cycle variability in the stratosphere. Furthermore, the change in UV has an impact on the production of ozone itself (even down into the troposphere). This can be calculated with chemistry-climate models, and is increasingly being used in climate model scenarios (see here for instance).
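For a feel for how 'easy to implement' the TSI part is, the globally averaged forcing from a change in total irradiance is roughly the TSI change divided by 4 (the ratio of the Earth's surface area to its sunlit cross-section), scaled by the fraction of sunlight not reflected. A quick sketch, for illustration only:

```python
def solar_forcing(delta_tsi_wm2, planetary_albedo=0.3):
    """Approximate global-mean radiative forcing (W/m2) from a change in
    total solar irradiance (TSI): divide by 4 for the sphere/disc
    geometry and remove the reflected fraction."""
    return delta_tsi_wm2 * (1.0 - planetary_albedo) / 4.0

# A typical solar-cycle variation of ~1 W/m2 in TSI (about 0.1% of the total)
print(solar_forcing(1.0))  # roughly 0.18 W/m2
```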

There are also other hypothesised impacts of solar activity on climate, most notably the impact of galactic cosmic rays (which are modulated by the solar magnetic activity on solar cycle timescales) on atmospheric ionisation, which in turn has been linked to aerosol formation, and in turn linked to cloud amounts. Most of these links are based on untested theories and somewhat dubious correlations; however, as was recognised many years ago (Dickinson, 1975), this is a plausible idea. Implementing it in climate models is however a challenge. It requires models to include a full treatment of aerosol creation, growth, accretion and cloud nucleation. There are many other processes that affect aerosols, and GCR-related ionisation is only a small part of that. Additionally there is a huge amount of uncertainty in aerosol-cloud effects (the 'aerosol indirect effect'). Preliminary work seems to indicate that the GCR-aerosol-cloud link is very small (i.e. the other effects dominate), but this is still in the early stages of research. Should this prove to be significant, climate models will likely incorporate this directly (using embedded aerosol codes), or will parameterise the effects based on calculated cloud variations from more detailed models. What models can't do (except perhaps as a sensitivity study) is take purported global scale correlations and just 'stick them in' - cloud processes and effects are so tightly wound up in the model dynamics and radiation, and have so much spatial and temporal structure, that this couldn't be done in a way that made physical sense. For instance, part of the observed correlation could be due to the other solar effects, so how could they be separated out? (And that's even assuming that the correlations actually hold up over time, which doesn't seem to be the case.)


What do you mean when you say a model has “skill”?

'Skill' is a relative concept. A model is said to have skill if it gives more information than a naive heuristic. Thus for weather forecasts, a prediction is described as skilful if it works better than just assuming that each day is the same as the last ('persistence'). It should be noted that 'persistence' is itself much more skilful than climatology (the historical average for that day) for about a week. For climate models, there is a much larger range of tests available and there isn't necessarily an analogue for 'persistence' in all cases. For a simulation of a previous time period (say the mid-Holocene), skill is determined relative to an assumption of 'no change from the present'. Thus if a model predicts a shift northwards of the tropical rain bands (as was observed), that would be skilful. This can be quantified, and different models can exhibit more or less skill with respect to that metric. For the 20th Century, models show skill for the long-term changes in global and continental-scale temperatures - but only if natural and anthropogenic forcings are used - compared to an expectation of no change. Standard climate models don't show skill at the interannual timescales which depend heavily on El Niño and other relatively unpredictable internal variations (note that initialised climate model projections that use historical ocean conditions may show some skill, but this is still a very experimental endeavour).
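One common way to quantify this is a skill score of the form 1 − MSE(forecast)/MSE(reference), where the reference is the naive heuristic (persistence, climatology, or 'no change'); positive values mean the forecast beats the baseline. A minimal sketch, with made-up numbers purely for illustration:

```python
import numpy as np

def skill_score(forecast, observations, reference):
    """Skill relative to a naive reference forecast: 1 is perfect,
    0 is no better than the reference, negative is worse."""
    forecast, observations, reference = map(np.asarray, (forecast, observations, reference))
    mse_forecast = np.mean((forecast - observations) ** 2)
    mse_reference = np.mean((reference - observations) ** 2)
    return 1.0 - mse_forecast / mse_reference

obs = [0.2, 0.5, 0.4, 0.9]        # what actually happened
model = [0.3, 0.4, 0.5, 0.8]      # a model prediction
no_change = [0.0, 0.0, 0.0, 0.0]  # the naive 'no change' baseline
print(skill_score(model, obs, no_change))  # > 0, so the model shows skill
```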


How much can we learn from paleoclimate?

Lots! The main issue is that for the modern instrumental period the changes in many aspects of climate have not been very large - either compared with what is projected for the 21st Century, or with what we see in the past climate record. Thus we can't rely on the modern observations to properly assess the sensitivity of the climate to future changes. For instance, we don't have any good observations of changes in the ocean's thermohaline circulation over recent decades because a) the measurements are difficult, and b) there is a lot of noise. However, in periods in the past, say around 8,200 years ago, or during the last ice age, there is lots of evidence that this circulation was greatly reduced, possibly as a function of surface freshwater forcing from large lake collapses or from the ice sheets. If those forcings and the response can be quantified, they provide good targets against which the models' sensitivity can be tested. The periods of possibly the most interest for testing sensitivities associated with uncertainties in future projections are the mid-Holocene (for tropical rainfall, sea ice), the 8.2kyr event (for the ocean thermohaline circulation), the last two millennia (for decadal/multi-decadal variability), the last interglacial (for ice sheets/sea level) etc. There are plenty of other examples, and of course, there is a lot of intrinsic interest in paleoclimate that is not related to climate models at all!

As before, if there are additional questions you'd like answered, put them in the comments and we'll collate the interesting ones for the next FAQ.
