Cloud Climate Metrics for Assessing the Relative Reliability of Climate Change Cloud Feedbacks Produced by Climate Models

With the completion and open availability of a large coordinated set of climate simulations performed by many climate models (Meehl et al. 2007), the question arises whether some model results are more reliable than others. Giving more importance (or more weight) to models that perform better in simulating the current climate is sometimes presented as a way to reduce uncertainties in climate projections (Murphy et al. 2004). To address this question, the climate modeling community is currently working to define a basket of "metrics" to assess the relative merits of different climate models in reproducing observed features. (One must always remember that models may share common errors that need to be identified and corrected: such errors may offset one another, so that some of our present crude assessment criteria are satisfied even though the models are in fact flawed.) This effort, in a sense, extends to climate models a procedure that has been routinely applied to NWP models for thirty years. It nonetheless raises many questions and concerns.

As provocatively asked during the presentation of the IPCC AR4: "Might the 5th Assessment Report of the IPCC be the end of model democracy?" (IPCC 2007). Indeed, thus far, all climate models have been treated equally in producing climate change projections. However, we feel there is a growing desire (and pressure) to rank the different models and to give them different weights according to their relative ability to reproduce the observed climate.

Certainly, the climate community welcomes the possibility of quantifying, for a large ensemble of climate models and for a wide range of diagnostics, the resemblance between simulations and observations. For example, one may imagine developing metrics focused on the ability of climate models to simulate a realistic diurnal cycle or realistic tropical intraseasonal oscillations. The scrutiny of the simulations performed in support of the IPCC AR4 by a very large community of analysts is already contributing very constructively to this effort. However, concerns might be expressed regarding the meaning and the future use of these metrics.
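As an illustration of what such a quantitative model-observation comparison might look like, the sketch below computes an area-weighted root-mean-square error between a simulated and an observed climatological field on a regular latitude-longitude grid. The function name, grid layout, and cosine-latitude weighting are illustrative assumptions for this example, not a metric prescribed in the text:

```python
import numpy as np

def area_weighted_rmse(model, obs, lats):
    """Area-weighted RMSE between a model field and observations
    on a regular latitude-longitude grid (shape: lat x lon)."""
    # Weight each grid row by the cosine of latitude, so that the
    # small high-latitude cells do not dominate the score.
    weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(model)
    return np.sqrt(np.average((model - obs) ** 2, weights=weights))

# Toy usage: a model field offset from observations by 1 everywhere
lats = np.array([-30.0, 0.0, 30.0])
obs_field = np.zeros((3, 4))
model_field = np.ones((3, 4))
print(area_weighted_rmse(model_field, obs_field, lats))  # 1.0
```

A single scalar of this kind is the building block of most skill scores; in practice such metrics are computed per variable, season, and region before being aggregated.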

As explained above, we still do not know whether some model biases matter more than others for climate change prediction. Common sense suggests that the answer depends on the climate question being addressed. To assess the reliability of climate model projections in regions dominated by monsoon or ENSO phenomena, metrics focused on the simulation of these processes may prove useful. However, there is some danger in using nonspecific metrics based on mean climate features (e.g., mean cloudiness or mean radiative fluxes) to assess the relative reliability of different model estimates of global climate sensitivity; indeed, climate models producing very different cloud responses to climate change may be indistinguishable in their simulation of mean cloud properties in the current climate. We should instead encourage the development and use of process-based metrics that assess the ability of climate models to simulate the cloud relationships, processes, or composites shown to play a critical role in cloud feedbacks under climate change. Here again, analyses and idealized studies of the kind described earlier in this chapter provide guidance about the processes to be considered in such metrics.
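One common example of such a process-oriented diagnostic is compositing a cloud-related field by large-scale dynamical regime, for instance binning by 500-hPa vertical velocity so that cloud behavior in subsidence and ascent regimes can be compared separately. The sketch below is a minimal illustration of this compositing idea; the function name, inputs, and bin edges are assumptions made for the example:

```python
import numpy as np

def composite_by_regime(omega500, cloud_field, bin_edges):
    """Composite a cloud-related field by dynamical regime, binned
    here by 500-hPa vertical velocity (omega500, Pa/s by convention:
    negative = large-scale ascent, positive = subsidence)."""
    idx = np.digitize(omega500.ravel(), bin_edges)
    field = cloud_field.ravel()
    # Mean of the field in each omega500 bin (NaN where a bin is empty).
    return np.array([field[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bin_edges))])

# Toy usage: two ascent points and two subsidence points
omega = np.array([-0.05, -0.01, 0.01, 0.05])
cloud = np.array([1.0, 2.0, 3.0, 4.0])
edges = np.array([-0.1, 0.0, 0.1])
print(composite_by_regime(omega, cloud, edges))  # [1.5 3.5]
```

Comparing such regime-sorted composites between models and observations tests the simulated cloud-circulation relationships directly, rather than the mean state alone.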
