Future Climate Change

2.6.1 Description of climate models - general circulation models (GCMs)

Much of our knowledge of future climate change comes from climate model experiments. Climate models are complex three-dimensional mathematical representations of the processes responsible for climate. These processes include complex interactions among the atmosphere, land surface, oceans and sea ice. Climate models simulate the global distributions of variables such as temperature, wind, cloudiness and rainfall. Major climate processes represented in most state-of-the-art climate models are shown in Fig. 2.6. The equations describing the behaviour of the atmosphere are solved on a three-dimensional grid representing the surface of the earth and the vertical extent of the atmosphere. The spatial resolution at which a model is configured is an important determinant of how well the model can reproduce the actual climate of the earth.

Fig. 2.6. Schematic illustration of the components of the coupled atmosphere/earth/ocean system. (Source: Cubasch and Cess, 1990.)

2.6.2 Climate model development

(a) History of GCM development

Climate models have developed considerably over the past few decades (Mearns, 1990). The earliest experiments that evaluated effects of increased greenhouse gases using climate models were performed in the 1960s and 1970s. General circulation models (GCMs) at that time were simple. They used very rudimentary geometric sectors to represent land masses and simple oceans. The oceans, which were referred to as swamp oceans, effectively consisted of a wet surface with zero heat capacity and essentially acted only as an evaporating surface (Manabe and Wetherald, 1975, 1980).

In the early and mid-1980s, climate models that included more realistic geography were developed. Mixed-layer oceans, which were usually about 50 m deep and included estimates of evaporation from their surface and heat diffusion throughout their depth, became the standard ocean model coupled to the atmospheric models (Washington and Meehl, 1984; Schlesinger and Mitchell, 1987). The inclusion of a mixed-layer ocean also allowed the annual seasonal cycle of solar radiation to be included. Climate modellers continued to develop better parameterizations for atmospheric processes, such as cloud formation and precipitation. The land surface, however, still was crudely represented. Soil moisture dynamics were handled via the simplified bucket approach in which the soil is assigned a certain field capacity (i.e. size of the bucket). When field capacity was exceeded, runoff would occur. Evaporation of soil water (water in the bucket) occurred at diminishing rates as the amount of water in the bucket decreased. Horizontal resolutions of these climate models were typically between 5 and 8 degrees (Schlesinger and Mitchell, 1987).
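The bucket approach is simple enough to state precisely. The sketch below (in Python) is purely illustrative: the field capacity, rainfall and potential evaporation values are assumptions for demonstration, not values taken from any particular GCM.

# Minimal sketch of the 'bucket' soil moisture scheme described above.
# All numbers are illustrative assumptions, not from any specific model.

FIELD_CAPACITY = 150.0  # mm of water; the 'size of the bucket'

def bucket_step(soil_water, rainfall, potential_evap):
    """Advance soil water (mm) by one day."""
    # Evaporation diminishes as the bucket empties: actual evaporation
    # scales with the fraction of field capacity currently filled.
    evap = potential_evap * (soil_water / FIELD_CAPACITY)
    soil_water = soil_water - evap + rainfall
    # Water in excess of field capacity is lost immediately as runoff.
    runoff = max(0.0, soil_water - FIELD_CAPACITY)
    return min(soil_water, FIELD_CAPACITY), evap, runoff

# Example: a wet day, a very wet day (overflow), then a dry spell.
w = 100.0
for rain in (40.0, 120.0, 0.0, 0.0):
    w, e, r = bucket_step(w, rainfall=rain, potential_evap=5.0)
    print(f"soil water {w:6.1f} mm  evap {e:4.1f} mm  runoff {r:5.1f} mm")

Note that evaporation and runoff here depend only on the bucket's state, with no representation of vegetation or soil physics; the next generation of land surface schemes addressed exactly this limitation.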

By the mid 1980s, more attention was directed toward improving the overly simplified surface of the earth. Several research groups (Dickinson et al., 1986; Sellers et al., 1986) developed sophisticated surface packages that included vegetation/atmosphere interactions and more realistic soil moisture representation. At the same time, the spatial resolution of the atmospheric modelling further increased, and other parameterizations for processes such as cloud formation and precipitation were also further improved (Mitchell et al., 1990).

Since the late 1980s, atmospheric models have been coupled with three-dimensional dynamic ocean models, which allows for much more realistic modelling of interannual variability and longer-term variability of the coupled system. The ocean models allow for detailed modelling of horizontal and vertical heat transport within the ocean (Stouffer et al., 1989; Washington and Meehl, 1989).

In the 1990s, the spatial resolutions of the atmospheric and oceanic components of models have been greatly improved. A relatively standard resolution of about 250-300 km (2.8 degrees) for the atmosphere and of about 100-200 km (1 or 2 degrees) for the ocean is used (Johns et al., 1997; Boville and Gent, 1998). Another important improvement is the coupling of atmospheric and oceanic models without using flux adjustment. Previously this adjustment was necessary to keep coupled models from drifting away from the observed climate. However, the result of the flux adjustment was that the models were less physically based, and their responses under perturbed conditions were somewhat constrained (Gates et al., 1992). The most recent coupled models have largely resolved the drift problem, partly by increasing their resolution, so that no flux adjustment is necessary (Gregory and Mitchell, 1997; Boville and Gent, 1998).

It is important to note that climate models are computationally quite expensive, i.e. they require large amounts of computer time. Computer power has increased tremendously over the past few years; however, the computer time required by the models has also increased because of the increasing sophistication of the modelling of various aspects of the climate system and the increased spatial resolution. Giorgi and Mearns (1991), for example, calculated the amount of computer time required to run the National Center for Atmospheric Research (NCAR) community climate model (CCM1) on the Cray X-MP, a state-of-the-art computer at the time. At a resolution of 4.5 degrees by 7.5 degrees, 1 cpu (central processing unit) minute was required to simulate 1 day of the global model run. To run the same model at a resolution of 0.3 degrees by 0.3 degrees would have required 3000 cpu minutes. This meant that it would have taken 2 days of computing time to simulate one actual day of climate with the CCM1. Computer power increases have kept pace with climate model developments throughout the 1990s, but power still remains a limitation for performing multi-ensemble, transient runs with fully coupled atmosphere/ocean models.
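The arithmetic behind such cost estimates is easy to reproduce. In the sketch below, the generic rule that cost grows with the number of grid cells times the time-step refinement is an approximation assumed for illustration, not Giorgi and Mearns' exact calculation.

# Back-of-the-envelope check of the CCM1 cost figures quoted above.
# The scaling rule (cost ~ number of grid cells x time-step refinement)
# is a generic approximation assumed here for illustration.

coarse = (4.5, 7.5)  # degrees latitude x longitude
fine = (0.3, 0.3)

# Refining the grid multiplies the number of atmospheric columns:
cell_factor = (coarse[0] / fine[0]) * (coarse[1] / fine[1])
print(f"grid-cell factor: {cell_factor:.0f}x")  # 375x

# Numerical stability also forces a shorter time step at high resolution.
# The quoted costs (1 vs. 3000 cpu minutes per simulated day) imply:
timestep_factor = 3000.0 / cell_factor
print(f"implied time-step refinement: {timestep_factor:.0f}x")  # 8x

The quoted 3000 cpu minutes per simulated day amounts to roughly 50 hours, consistent with the 2 days of computing time per simulated day cited above.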

As climate models have improved with more detailed modelling of important processes and increasing spatial resolution, their ability to reproduce faithfully the current climate has improved significantly. The models can now represent most of the features of the current climate on a large regional or continental scale. The distributions of pressure, temperature, wind, precipitation and ocean currents are well represented in time (seasonally) and space. However, at spatial scales of less than several hundred kilometres, the models still can produce errors as large as 4 or 5°C in monthly average temperature and as large as 150% in precipitation (Risbey and Stone, 1996; Kittel et al., 1998; Doherty and Mearns, 1999).

(b) Higher resolution models - regional climate models

Over the past 10 years, the technique of nesting higher resolution regional climate models within GCMs has evolved to increase the spatial resolution of the models over a region of interest (Giorgi and Mearns, 1991; McGregor, 1997). The basic strategy is to rely on the GCM to simulate the large-scale atmospheric circulation and on the regional model to simulate sub-GCM-scale distributions of climatic factors such as precipitation, temperature and winds. The GCM provides the initial and lateral boundary conditions for driving the regional climate model. In numerous experiments, models for such regions as the continental USA, Europe, Australia and China have been driven by output from ambient and doubled CO2 GCM runs. The spatial pattern of changed climate, particularly changes in precipitation, simulated by these regional models often departs significantly from the more general pattern over the same region simulated by the GCM (Giorgi et al., 1994; Jones et al., 1997; Laprise et al., 1998; Machenhauer et al., 2000). The regional model is able to provide more detailed results because its spatial resolution is of the order of tens of kilometres, whereas the GCM scale is an order of magnitude coarser. This method, while often producing better simulations of the regional climate, still depends on the quality of the information provided by the GCM.
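The core of the nesting procedure is interpolating coarse GCM fields onto the fine regional grid along its lateral boundaries. The one-dimensional sketch below is a conceptual illustration only; the grid spacings and synthetic temperature field are assumptions, and operational nesting also interpolates in time and in the vertical.

import numpy as np

# One-way nesting sketch: a coarse GCM field supplies lateral boundary
# values for a fine regional grid. All values are synthetic/illustrative.

gcm_lons = np.arange(0.0, 28.1, 2.8)  # ~2.8 degree GCM grid
gcm_temp = 288.0 + 5.0 * np.sin(np.radians(6.0 * gcm_lons))  # synthetic field (K)

rcm_lons = np.arange(0.0, 28.01, 0.5)  # ~50 km regional grid

# Linearly interpolate the GCM field to the regional boundary points;
# the regional model then evolves its own higher-resolution interior.
boundary_temp = np.interp(rcm_lons, gcm_lons, gcm_temp)
print(boundary_temp[:6].round(2))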

It is likely that the best regional climate simulations eventually will be performed by global models run at high spatial resolutions (tens of kilometres). In the meantime, the regional modelling approach affords climate scientists the opportunity to obtain greater insight into possible details of climatic change on a regional scale. It also provides researchers assessing the impacts of climatic change with high-resolution scenarios to use as input in climate impact models, such as crop models. For example, Mearns et al. (1999, 2000) have used results from recent regional climate simulations over the USA to study the effect of the scale of climatic change scenarios on crop production in the Great Plains.

2.6.3 Climatic change experiments with climate models

To simulate possible future climatic change, climate models are run using changes in the concentrations of greenhouse gases (and aerosols) which then affect the radiative forcing within the models. In the early to mid-1980s, experiments were primarily conducted using doubled [CO2] with climate models that possessed relatively simple mixed-layer oceans (described above). In general, control runs of 10-20 years duration were produced. In the climate change experiments, the amount of CO2 was instantaneously doubled, and the climate model was run until it reached equilibrium in relation to the new forcing (Schlesinger and Mitchell, 1987).
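The logic of such an equilibrium experiment can be mimicked with a zero-dimensional energy balance model: apply the doubled-CO2 forcing as a step and integrate until the temperature stops changing. The sketch below uses typical textbook values (a 3.7 W m-2 forcing, a feedback parameter of 1.5 W m-2 K-1 and a heat capacity corresponding to a ~50 m ocean mixed layer), all of which are assumptions for illustration rather than properties of any GCM.

# Zero-dimensional energy balance analogue of an equilibrium 2xCO2 run:
#     C dT/dt = F - lambda * T
# All parameter values are illustrative textbook numbers.

F_2X = 3.7   # W m-2, forcing from an instantaneous CO2 doubling
LAM = 1.5    # W m-2 K-1, climate feedback parameter
C = 2.1e8    # J m-2 K-1, ~50 m ocean mixed layer

dt = 30 * 86400.0  # one-month time step (s)
T = 0.0            # global mean temperature anomaly (K)
for month in range(12 * 100):  # integrate for 100 model years
    T += (F_2X - LAM * T) / C * dt

print(f"equilibrium warming ~ {T:.2f} K (analytic F/lambda = {F_2X/LAM:.2f} K)")

With these values the model settles near 2.5 K, a value within the canonical 1.5-4.5°C range. The transient experiments described next differ in that the forcing evolves while the run proceeds, so the response lags the forcing.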

In the late 1980s, coupled atmosphere/ocean general circulation models (AOGCMs) driven by evolving atmospheric CO2 concentrations were used to simulate the response of the earth/atmosphere system over time. In these experiments, time-varying forcing by CO2 and other greenhouse gases on a yearly basis was used, and the transient response of the climate was analysed. These first-generation experiments were still run at relatively coarse spatial resolutions of about 5 degrees latitude and longitude (Stouffer et al., 1989; Washington and Meehl, 1989; Cubasch et al., 1992; Manabe et al., 1992). Results from these early runs indicated that the time-evolving response could produce some patterns of climatic change different from those resulting from equilibrium experiments. For example, initial cooling in the North Atlantic and off the coast of Antarctica was a common feature of these experiments (Stouffer et al., 1989). These differences were a direct effect of the dynamic response of the ocean model to the change in radiative forcing.

More recent experiments have included detailed radiative models for each greenhouse gas and the effects of sulphate aerosols. These simulations with coupled models have been run at much higher spatial resolutions (e.g. 2.8 degrees) and have incorporated effects of increases in greenhouse gases including direct (and sometimes indirect) aerosol effects (Bengtsson, 1997; Johns et al., 1997; Meehl et al., 1996; Boer et al., 2000a). However, in this generation of runs, the aerosol effect was highly parameterized: surface albedo was changed to simulate the direct effect, and cloud albedo was altered to simulate the indirect effect (Meehl et al., 1996). When the effects of aerosols are included, patterns distinct from those produced by greenhouse gases alone emerge.

2.6.4 Most recent results of climate models

A great deal of progress in the development of knowledge of climate and in the ability to model the climate system has occurred in the past 10 years. However, some of the fundamental statements made in the first IPCC report (Houghton et al., 1990) still hold today. It is still considered likely that the equilibrium response of global surface temperature to doubled greenhouse gases ranges from 1.5 to 4.5°C. It also now appears inevitable that [CO2] doubling will be surpassed.

(a) Summary results from IPCC 1995

To analyse climatic responses to a range of different scenarios of greenhouse gas emissions and aerosol amounts, simple (and comparatively inexpensive) upwelling diffusion energy balance climate models are often used (Wigley and Raper, 1992). These models provide the global mean temperature response to the transient greenhouse gas and aerosol scenarios. Another factor considered is the range of model sensitivity. Climate model sensitivity is the equilibrium global mean warming per unit radiative forcing, usually expressed as the global mean warming simulated by the model for a doubling of [CO2]. For the 1995 IPCC report (Houghton et al., 1996), model sensitivities of 1.5, 2.5 and 3.5°C were used in conjunction with the emission scenarios from IPCC 1992 to provide a range of estimated global mean temperature for 2100. Across the three model sensitivities and the range of 1992 emission scenarios, the projected increase in global mean temperature by 2100 ranged from 0.9 to 3.5°C (Kattenberg et al., 1996). Results based on a climate sensitivity of 2.5°C (the medium value) for all the scenarios are presented in Fig. 2.7. Any of these estimated rates of warming would be the greatest to occur in the past 10,000 years. Note also that all the scenarios showed warming even though the cooling effects of aerosols were accounted for in the simulations. Changes in sea level, based on the full range of climate models (energy balance and AOGCMs) and the full range of 1992 scenarios, ranged between 13 and 94 cm (Warrick et al., 1996). For the 'middle-of-the-road' IS92a scenario, the increase is projected to be about 50 cm, with a range from 20 to 86 cm.
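How climate sensitivity spreads into a range of 2100 projections can be illustrated with the same kind of simple energy-balance reasoning, here with a one-box ocean rather than the upwelling-diffusion ocean of Wigley and Raper. The linear forcing ramp and the heat capacity below are assumed stand-ins for the IS92 scenarios, chosen only to show how the three sensitivities fan out.

# Transient warming, 1990-2100, for three climate sensitivities under an
# assumed linear forcing ramp. A crude stand-in for the IPCC 1995 exercise:
# one-box ocean instead of upwelling-diffusion; all values illustrative.

F_2X = 3.7             # W m-2 for doubled CO2
C = 4.0e8              # J m-2 K-1, effective ocean heat capacity
YEARS = 110            # 1990 to 2100
DT = 365.25 * 86400.0  # one-year time step (s)
F_END = 4.5            # W m-2 reached in 2100 (assumed scenario)

for sensitivity in (1.5, 2.5, 3.5):  # K per CO2 doubling
    lam = F_2X / sensitivity         # implied feedback parameter
    T = 0.0
    for yr in range(YEARS):
        forcing = F_END * (yr + 1) / YEARS
        T += (forcing - lam * T) / C * DT
    print(f"sensitivity {sensitivity} K -> warming by 2100 ~ {T:.1f} K")

The ocean's thermal inertia keeps each transient response below its equilibrium value F/lambda, which is why coupled transient runs warm more slowly than equilibrium experiments with the same sensitivity.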

There were regional differences in the various climate model runs, but some points of similarity occurred in all transient coupled model simulations, with and without aerosol effects. These included: (i) greater surface warming of land than of oceans; (ii) minimum warming around Antarctica and the northern Atlantic; (iii) maximum warming in high northern latitudes in late autumn and winter associated with reduced sea ice and snow cover; (iv) decreased diurnal temperature range over land in most seasons and most regions; (v) enhanced global mean hydrological cycle; and (vi) increased precipitation in high latitudes (Kattenberg et al., 1996).

Fig. 2.7. Projected global mean surface temperature changes from 1990 to 2100 for the full set of IS92 emission scenarios. A climate sensitivity of 2.5°C is assumed. (Source: Houghton et al., 1996.)

There were some important regional differences between the transient CO2-only runs and those using increased CO2 plus aerosols. For example, Mitchell et al. (1995) found that Asian monsoon rainfall increased in the CO2-only run, but decreased in the CO2 plus aerosols run. Also, precipitation decreased in southern Europe in the elevated CO2-only case, but increased in the CO2 plus aerosols run. This particular climate model did not include the indirect aerosol effect.

(b) Detection and attribution - the IPCC 1995 debate

Important issues in the effort to understand present and future climate are those of attribution and detection. The chapter dedicated to this topic in the IPCC 1995 volume (Santer et al., 1996) inspired intense debate between the 'naysayers' on the issue of climate change and those scientists who participated in the IPCC process (Moss and Schneider, 1996). Detection, in the present context, refers to the detection of statistically significant changes in the global climate system. Attribution refers to determining the cause for these changes as at least partially anthropogenic. Thus, the combined goal of these two endeavours is to determine if there have been significant human-caused changes in the climate system, particularly during the 20th century. However, it is important to understand that both these concepts are inherently probabilistic in nature; i.e. there are no clear-cut yes or no answers (Santer et al., 1996).

Detection initially involved looking at one time series of one variable (e.g. global mean temperature). The purpose was to detect a signal of global warming by statistically separating out the natural variability in the time series from the possible anthropogenically generated variability. This is a difficult problem, given the high natural variability (on various time scales) of the climate system. Often this separation is aided by the use of climate model results in which only the natural variability is being modelled.
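A toy version of this separation is sketched below: estimate the trend in an 'observed' series, then ask how often unforced 'control runs' containing only natural variability produce a trend that large. All series here are synthetic red noise; the AR(1) persistence, noise level and imposed warming trend are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def ar1_series(n, phi=0.6, sigma=0.1):
    """Synthetic 'natural variability': red-noise (AR(1)) annual anomalies."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def trend(y):
    """Least-squares linear trend (K per year)."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

n_years = 100
# 'Observed' record: natural variability plus an imposed warming signal.
observed = ar1_series(n_years) + 0.006 * np.arange(n_years)  # ~0.6 K/century
obs_trend = trend(observed)

# Null distribution of trends from many unforced 'control runs'.
null = np.array([trend(ar1_series(n_years)) for _ in range(2000)])
p = np.mean(np.abs(null) >= abs(obs_trend))
print(f"observed trend {100 * obs_trend:.2f} K/century, p ~ {p:.3f}")

A small p-value suggests the trend is unlikely to arise from natural variability alone; real detection studies face the additional difficulty that the 'control' variability must itself be estimated from models or proxy data.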

More recent work in detection and attribution has focused on examining patterns of change in temperature across many points on the earth's surface, through the various vertical heights of the atmosphere, or using a combination of both (three-dimensional analysis). The most sophisticated detection/attribution investigation involves patterns of multiple variables (e.g. temperature and precipitation). Santer et al. (1996) provides an excellent review of the work performed in this important and complex research area. Establishing the detectability of human-induced global climatic change significantly changes the perception of the climatic change problem, particularly by policy makers. In the IPCC 1995 chapter, a statement was made for the first time that global warming due to anthropogenic pollution of the atmosphere is most likely occurring.

Some of the strongest evidence for this in IPCC 1995 was from multivariate detection studies. These compared observed three-dimensional temperature patterns with the patterns found in AOGCM model runs that took into account 20th century historical changes in both CO2 and sulphate aerosols. Often, comparisons of the relative strength of statistical agreement between observations and climate model runs with CO2-only forcing were made against those with both CO2 and aerosol forcing. In general, agreement was strongest between observations and climate model results with both forcings. The chapter concludes with the statement, 'The body of statistical evidence when examined in the context of our physical understanding of the climate system, now points towards a discernible human influence on global climate' (Santer et al., 1996, p. 439).

When the IPCC 1995 report appeared, numerous editorials and opinion pieces alleged that more conservative statements in the chapter had been inappropriately changed by the lead authors after the final wording had been approved (e.g. F. Seitz, Wall Street Journal, 12 June 1996). These accusations were made without full knowledge of the carefully constructed approval procedures for the final IPCC document. A flurry of counter-editorials appeared (e.g. Bolin et al., Wall Street Journal, 25 June 1996) defending the authors of the chapter, and the debate eventually subsided by autumn. However, by this time numerous scientists, policy makers, national politicians and world leaders had become involved.

Work in detection/attribution has continued to move forward since the IPCC 1995 report (Santer et al., 1997). For example, Wigley et al. (1998) demonstrated that the serial correlation structure of observed temperature data was much stronger than that in two state-of-the-art climate models that did not account for increases in CO2 or changes in aerosols. As climate models and observations continue to improve, and understanding of the external radiative forcing of the climate system increases, higher levels of detection and attribution will be obtained.
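The 'serial correlation structure' examined by Wigley et al. is essentially the autocorrelation function of the temperature series. Below is a minimal sketch of such a comparison, using synthetic AR(1) series whose persistence values are assumed purely for illustration.

import numpy as np

def ar1(n, phi, rng):
    """Synthetic AR(1) series with persistence parameter phi."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def autocorr(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(1)
strong = ar1(1200, 0.9, rng)  # stand-in for a strongly persistent record
weak = ar1(1200, 0.2, rng)    # stand-in for a weakly persistent control run

for lag in (1, 5, 10):
    print(f"lag {lag:2d}: strong {autocorr(strong, lag):.2f}  "
          f"weak {autocorr(weak, lag):.2f}")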

(c) Since IPCC 1995

Since the IPCC 1995 report, more coupled climate model runs have been performed with higher resolution models, and with both direct and indirect aerosol effects. Moreover, more physically based modelling of aerosol effects in the atmosphere is under way (Feichter et al., 1997; Qian and Giorgi, 1999; J. Kiehl, 1998, personal communication).

In the current National Assessment Program in the USA, two of the most recent transient modelling experiments extending to the year 2100 are being used - that of the Canadian model CGCM1 (Reader and Boer, 1998; Boer et al., 2000b) and the British HADCM2 (Johns et al., 1997; Mitchell and Johns, 1997). In the HADCM2 run, regional climate changes over the USA show that temperature increases in the order of 5°C in winter and 3°C in summer are expected to occur by 2060. The CGCM1 model projects a 4-7°C greater warming over North America by 2060 than does the HADCM2. The problem still remains that the models do not always agree on the specific regional climate changes. For example, the CGCM1 model predicts precipitation decreases in the southeastern USA in the summer, while the HADCM2 model predicts increases (Doherty and Mearns, 1999).
