Were the links between oceanic sea surface temperatures and continental rainfall directly linear, seasonal prediction would be straightforward. In practice most empirical models make the underlying assumption of linearity, and this provides a reasonable starting point. Nevertheless, rainfall over land may be influenced by sea surface temperature variations across several parts of an ocean, by variations across more than one ocean, by adjustments in land surface moisture, and by a number of other factors that may have no clear predictability and are often dismissed as noise. The 1997/98 El Niño event, discussed later, provided an excellent example of the complexity arising from sea surface temperature changes alone; because of this complexity it is inevitable that even the best-calibrated empirical model will fail at times. Unfortunately, few, if any, empirical models can claim to be well calibrated: insufficient historical data are normally available from which to develop the models and, additionally, evidence exists that the empirical relationships forming the foundations of the models may in any case change slowly in time (e.g. Kumar et al., 1999).

There is, however, a more fundamental aspect of predictability than the basic complexity of the links, and that is 'chaos'. A chaotic system is one that, if perturbed by a small amount (in meteorology these 'small amounts' can be below normal measurement capabilities), may end up in a different state than it would have had the perturbation not been present. In fact the perturbation may not exist as such but may be, and usually is, a consequence of a limited global observing system leading to errors in the information provided to the model. Such sensitivity to small changes is not present at all times at a given location in chaotic systems, but in the more complex chaotic systems, including that of the atmosphere, special techniques are required to determine what sensitivities are present and what their consequences are. In meteorology the ensemble approach is used to assess the sensitivity of a forecast to perturbations, whose impacts on a prediction can be substantial even over only a few days. The standard way of producing an ensemble, of which there are a number of variants, is to run the model from a number of slightly different starting points, with the differences between them consistent with the errors that can occur in measuring the starting point. Ensembles with over 50 members are now run operationally at some leading centres for periods out to ten days (Mullen and Buizza, 2002), with smaller ensembles generally used for seasonal predictions (Goddard et al., 2001). A further issue is that the models themselves are not perfect, and errors developing in the models can introduce additional sensitivities. One of the best approaches to encompassing model sensitivities is to use more than one model in an ensemble (Graham et al., 2000).
On seasonal time scales the underlying sea surface temperatures tend to constrain the atmospheric circulation to a restricted set of options and so provide the sought-after predictability, but even in the ocean chaotic influences are important. Only a few systems take this oceanic chaos into account at present.
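The sensitivity to initial conditions and the ensemble technique described above can be illustrated with a toy model. The sketch below uses the Lorenz-63 system, a standard idealised chaotic system (not one of the operational models discussed in the text), and runs an ensemble whose members differ only by perturbations far smaller than the attractor itself; the integration scheme, step size, and ensemble size are all illustrative choices.

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 equations, a classic toy chaotic system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01):
    """One fourth-order Runge-Kutta step (an illustrative integration choice)."""
    k1 = lorenz_rhs(state)
    k2 = lorenz_rhs(state + 0.5 * dt * k1)
    k3 = lorenz_rhs(state + 0.5 * dt * k2)
    k4 = lorenz_rhs(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def run_ensemble(start, n_members=20, noise=1e-4, n_steps=2000, seed=0):
    """Integrate an ensemble whose members differ only by tiny initial
    perturbations, mimicking uncertainty in measuring the starting point."""
    rng = np.random.default_rng(seed)
    members = start + noise * rng.standard_normal((n_members, 3))
    for _ in range(n_steps):
        members = np.array([rk4_step(m) for m in members])
    return members

start = np.array([1.0, 1.0, 1.0])
final = run_ensemble(start)
# Despite initial perturbations of order 1e-4, the members diverge markedly
# over the integration -- the sensitivity that ensembles are designed to expose.
spread = final.std(axis=0)
```

The spread across the final states is many orders of magnitude larger than the initial perturbations, which is precisely why a single deterministic run from imperfectly measured initial conditions cannot be trusted on its own.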

Because of computer processor restrictions operational seasonal forecast ensembles often have fewer than 10 members, but one system has 40, a figure recently doubled to 80 with the incorporation of a second model from the Met Office in a multi-model configuration. Larger ensembles can be expected in the future. The range of rainfall predictions for a specific location across an ensemble of forecasts, whether from a single model or from multiple models, can be, but need not be, substantial, and from the applications perspective may be daunting. There are two fundamental approaches to presenting ensemble output. The first is to average across all ensemble members and to base the forecast on that average. In theory this averaging eliminates the less predictable, smaller-scale details of the forecast, and it certainly produces information that is easier to use from the applications side, but in practice it has all the limitations of any deterministic prediction, including the possibility of being entirely wrong. The second approach, and the one consistent with chaos theory, is to treat all members of an ensemble as equally likely (or weighted in some manner) and to produce a probability distribution. Although many users, and even some meteorologists, balk at the prospect of using probability forecasts, this approach has many advantages, expanded on later, not least of which is the possibility of providing information on less likely, but nonetheless possible, outcomes. Forecasts of the more extreme events also become more achievable through use of distributions.
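The two approaches to presenting ensemble output can be contrasted in a few lines. The sketch below uses invented rainfall values for a hypothetical 40-member ensemble at one location, and assumed climatological tercile thresholds; none of the numbers come from a real forecast system.

```python
import numpy as np

# Hypothetical seasonal rainfall forecasts (mm) from a 40-member ensemble
# for one location; the values are illustrative, not from any real model.
rng = np.random.default_rng(42)
members = rng.gamma(shape=4.0, scale=50.0, size=40)

# Approach 1: a deterministic forecast based on the ensemble mean.
mean_forecast = members.mean()

# Approach 2: a probabilistic forecast. Treat the members as equally likely
# and estimate the probability of each climatological tercile category
# (below-normal / near-normal / above-normal), using assumed thresholds.
below_thresh, above_thresh = 150.0, 250.0  # assumed tercile bounds (mm)
p_below = np.mean(members < below_thresh)
p_near = np.mean((members >= below_thresh) & (members <= above_thresh))
p_above = np.mean(members > above_thresh)
```

The single mean value discards the information in the spread, whereas the three probabilities retain it, including the chance of the less likely outcomes mentioned above.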

Empirical methods can be used to assess chaotic impacts on a prediction, but models currently in practical use do not do so. However, a common method of developing a probability distribution from a deterministic empirical model is to examine the historical performance of the model and to build a distribution from the occasions on which the model provided forecasts similar to the current one (Folland et al., 2001). When this approach is adopted it becomes possible to build consensus forecasts from both numerical and empirical models, a technique often used in Regional Climate Outlook Forums (RCOFs—see below), and thought to produce the best prediction possible (Basher et al., 2001).
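The analogue idea described above can be sketched as follows. The hindcast archive and the choice of similarity measure and neighbourhood size below are all invented for illustration; they are not taken from Folland et al. (2001) or any operational system.

```python
import numpy as np

def analogue_distribution(hindcasts, observed, current_forecast, k=10):
    """Sketch of the analogue approach: find the k past occasions on which the
    deterministic model issued forecasts most similar to the current one, and
    use the observed outcomes on those occasions as an empirical distribution.
    Similarity here is simple absolute difference, an illustrative choice."""
    distances = np.abs(hindcasts - current_forecast)
    nearest = np.argsort(distances)[:k]
    return observed[nearest]

# Illustrative synthetic hindcast archive (values invented for the example):
# 100 past deterministic forecasts and the rainfall actually observed (mm).
rng = np.random.default_rng(1)
hindcasts = rng.normal(200.0, 40.0, size=100)
observed = hindcasts + rng.normal(0.0, 30.0, size=100)

# Build an empirical distribution for a current forecast of 220 mm and read
# off a probability, e.g. of exceeding that forecast value.
sample = analogue_distribution(hindcasts, observed, current_forecast=220.0)
p_exceed = np.mean(sample > 220.0)
```

Because the resulting sample is just a set of observed outcomes, it can be combined with numerical-model ensemble members when building the consensus forecasts used in RCOFs.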
