Different Models, Different Votes

From a pragmatic standpoint, in view of the increasing attention and activity in the area of adaptation, simple ensemble means and ranges have the desirable property of being easy to interpret, so that non-experts handling multi-model projections can straightforwardly appreciate what they are given. The need remains, though, to alert users to some shortcomings of these multi-model ensembles. They have been called "ensembles of opportunity" for important reasons: they are not intended as a systematic exploration of uncertainties, there may be dependencies among the models and systematic errors common to all of them, and there is no easy way to rank models or to pick and choose better and worse ones. A more general aspect of model projections also invites careful consideration. An important characteristic that sets climate model projections apart from other kinds of numerical forecasts (e.g., daily weather or seasonal forecasts) is the lack of validation, since the projections usually consist of multi-decadal mean changes at some point far in the future and are conditional on emissions scenarios that may not be realized exactly as hypothesized.

Nevertheless, GCMs remain our best guess at future changes, especially regional changes, and the existence of coordinated experiments by many modeling groups willing to make their output available in public archives facilitates a cautious approach to model uncertainty, even if some sources of uncertainty remain elusive.

Since 2000, when the first coordinated experiments aimed at comparing coupled models made results available under the CMIP flag (the simulations to be made available for the next IPCC report will be labeled CMIP5), formal statistical approaches to combining multi-model ensembles started to be developed and to appear in the literature (e.g., Raisanen and Palmer 2001; Giorgi and Mearns 2002, 2003; Tebaldi et al. 2004, 2005; Smith et al. 2009). Most of these approaches use a Bayesian paradigm in order to provide probabilistic projections of quantities like temperature and precipitation change. Departing from a simple count of observed frequencies in the ensemble, these methods formally posit an initial best guess, i.e., a prior distribution, for the quantities of interest (be those current and future climate variables or the different models' weights) and use the data collected (observations and model simulations) to reshape it into so-called posterior distributions. This is done through Bayes' theorem, by writing down the likelihood of the data as a function of the unknown quantities and combining it with the prior distribution of those quantities. The Bayesian paradigm offers a natural means of incorporating expert judgment, which is formalized in the prior probabilities (for example, scientists may be asked to specify ranges, and the distribution of probability within them, for unknown quantities like climate sensitivity or model reliability). If no such information is available, prior distributions are chosen uniform over a large interval, or otherwise very diffuse, like Gaussians with very large variance.
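The prior-to-posterior updating described above can be illustrated with a deliberately simple sketch. It is not any of the published methods cited here: we treat each model's projected change as a noisy observation of an unknown "true" change, place a very diffuse Gaussian prior on it, and apply the closed-form conjugate Normal-Normal update. The projections and the inter-model noise variance are made-up numbers for illustration only.

```python
import numpy as np

# Hypothetical projections of regional temperature change (degC)
# from a five-member multi-model ensemble.
model_projections = np.array([2.1, 2.8, 3.4, 2.5, 3.0])

prior_mean, prior_var = 0.0, 100.0   # very diffuse Gaussian prior
obs_var = 0.5 ** 2                   # assumed inter-model noise variance

# Conjugate Normal-Normal update: precisions add, and the posterior
# mean is a precision-weighted average of prior and data.
n = len(model_projections)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var +
                        model_projections.sum() / obs_var)

print(f"posterior mean = {post_mean:.2f} degC, "
      f"posterior sd = {np.sqrt(post_var):.2f} degC")
```

Because the prior is nearly flat, the posterior mean lands close to the ensemble average; a sharper prior (reflecting expert judgment) would pull the estimate toward the prior mean and narrow the posterior spread.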

For some of these methods the final result may not be significantly different from an empirical histogram of models' individual projections, but the formal nature of the derived probabilities may be considered of value if incorporated in quantitative risk assessment exercises, for example.

It is fair to say that these methodological developments are in their infancy; each study accounts for some aspects of the peculiar nature of this problem, but each also makes approximations. The results are method-dependent, and different statistical approaches have been shown to deliver different estimates of the probabilities of interest (Christensen et al. 2007; Tebaldi and Knutti 2007). If one accepts the statistical assumptions of a given method, however, the propagation of the uncertainties to impact models is rigorously achieved. For example, in Tebaldi and Lobell (2008) a formal quantification of the uncertainty in temperature and precipitation projections at regional scales through a Bayesian hierarchical model was used as input to a statistical model of crop yield changes for several staple crops, in order to derive probabilistic projections of changes in yields accounting for several sources of uncertainty (climate change, the relation between climate change and crop change, the CO2 fertilization effect). We present the analysis in more detail in Section 3.3.2.
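The propagation of climate uncertainty into an impact model can be sketched by Monte Carlo sampling. This is a toy version of the general idea, not the Tebaldi and Lobell (2008) analysis: every distribution and coefficient below is hypothetical. We draw climate changes from assumed posterior distributions, draw uncertain yield-response coefficients, and push each joint draw through a simple linear yield model to obtain a distribution of yield changes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of Monte Carlo draws

# Draws from hypothetical posterior distributions of regional climate change.
dT = rng.normal(2.5, 0.6, n)    # temperature change (degC)
dP = rng.normal(-5.0, 8.0, n)   # precipitation change (%)

# Hypothetical linear yield response with uncertain coefficients.
beta_T = rng.normal(-6.0, 2.0, n)   # % yield change per degC
beta_P = rng.normal(0.5, 0.2, n)    # % yield change per % precipitation

# Each draw combines climate and response uncertainty.
dYield = beta_T * dT + beta_P * dP  # % yield change

lo, med, hi = np.percentile(dYield, [5, 50, 95])
print(f"median yield change {med:.1f}%, 90% interval [{lo:.1f}, {hi:.1f}]%")
```

The resulting percentiles summarize the combined effect of both uncertainty sources; adding, say, an uncertain CO2 fertilization term would be one more sampled factor in the same loop-free computation.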

As discussed earlier, sometimes a simple descriptive analysis of ensemble model data is more interpretable. Sometimes the very nature of the climatic variables of interest poses obstacles to a formal statistical synthesis across models. Quantities like growing season length, or indices of climate extremes, are not as easily represented by statistical likelihood models as mean temperature or precipitation at large regional scales, for which a Normal distribution works in most cases. For these quantities, and for the time being, we may be better served by measures of model consensus and variability, such as model spread, means, and medians. We give an example of this kind of analysis in Section 3.3.1.
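The descriptive summaries mentioned above amount to a few lines of computation. In this sketch the projected changes in growing season length are invented for illustration; the summaries (mean, median, spread, and a simple sign-agreement measure of model consensus) are the kind of quantities one might report in place of a formal probabilistic synthesis.

```python
import numpy as np

# Hypothetical changes in growing season length (days) projected
# by a ten-member multi-model ensemble.
ens = np.array([4, 7, 12, 5, 9, 15, 6, 8, 11, 3])

summary = {
    "mean": ens.mean(),                      # ensemble average change
    "median": np.median(ens),                # robust central estimate
    "spread": ens.max() - ens.min(),         # full inter-model range
    # Fraction of models agreeing with the median on the sign of change:
    "sign_agreement": np.mean(np.sign(ens) == np.sign(np.median(ens))),
}
print(summary)
```

Here all models agree on the sign of the change, so the consensus measure is 1.0 even though the magnitudes differ by a factor of five; reporting both pieces of information is precisely the point of this kind of analysis.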

Renewable Energy 101

Renewable energy is energy generated from sunlight, rain, tides, geothermal heat, and wind. These sources are naturally and constantly replenished, which is why they are deemed renewable. The use of renewable energy sources is important when considering the sustainability of the world's existing energy consumption. While there is currently an abundance of non-renewable energy sources, such as nuclear fuels, these sources are being depleted. In addition to being a finite supply, non-renewable energy sources release emissions into the air, which have an adverse effect on the environment.
