Intercomparison of models

Because of the large number of ice sheet models being developed, each employing slightly different approaches and each subject to inadvertent programming errors, a group of 16 modelers developed a set of tests for comparison of models (Huybrechts et al., 1996; Payne et al., 2000). One test, for example, utilizes a square domain, 1500 km on a side, with grid points at 50 km spacing. Initially there is no ice sheet in the domain. A radially symmetric mass balance pattern is specified, as are the flow law constants n and B, and other relevant parameters such as ρ, g, and k. Because the specified mass balance pattern is radially symmetric and constant with time, a model, when stepped through time, eventually produces a steady-state circular ice sheet.
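The domain set-up described above can be sketched in a few lines. This is a minimal illustration, not the benchmark specification itself: the functional form and the constants of the mass balance pattern below (the balance gradient `s` and equilibrium-line radius `R_el`) are assumed for illustration; the exact values are given in Huybrechts et al. (1996).

```python
import numpy as np

# Benchmark-style domain: a 1500 km square at 50 km grid spacing.
L = 1500e3                 # domain side, m
dx = 50e3                  # grid spacing, m
n_pts = int(L / dx) + 1    # 31 grid points per side

x = np.linspace(0.0, L, n_pts)
y = np.linspace(0.0, L, n_pts)
X, Y = np.meshgrid(x, y)
d = np.hypot(X - L / 2, Y - L / 2)   # distance from domain centre, m

# Assumed radially symmetric, time-constant mass balance (m ice / yr):
# positive inside an equilibrium-line radius, negative outside it,
# capped at 0.5 m/yr. Illustrative constants, not the EISMINT values.
s = 1e-2 / 1e3             # balance gradient, (m/yr) per m
R_el = 450e3               # equilibrium-line radius, m
b = np.minimum(0.5, s * (R_el - d))
```

Because the balance depends only on distance from the centre, any model stepped forward under this forcing grows toward a circular, steady-state ice sheet.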

An intercomparison of twelve finite-difference models done using this test found that most of the differences among them were inconsequential. The only significant difference was between so-called Type I and Type II models. Type I models used a mass flux parameterization that conserves mass but requires short time steps to achieve stability. Type II models do not conserve mass, but have the advantage of allowing longer time steps. The means and standard deviations of the thicknesses of the model ice sheets at the divide were 2997 ± 7.4 m and 2959 ± 1.3 m for Type I and Type II models, respectively. An exact solution, obtained by integrating the mass balance function analytically, produced an ice sheet that was 20 km smaller in radius, and consequently somewhat thinner, but a model with a 50 km grid spacing cannot locate the margin more precisely than this. Surface profiles predicted by the two model types are compared with each other and with the exact solution in Figure 11.5.

Figure 11.5. Ice sheet profiles calculated using Type I and II finite-difference models compared with an exact solution. (After Huybrechts et al., 1996, Figure 5a. Used with permission of the authors and the International Glaciological Society.)
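The stability constraint that penalizes Type I schemes can be seen in a one-dimensional sketch. This is not the benchmark model itself: the flow-law softness `A`, the mass balance, and the run length are illustrative assumptions, and the scheme is reduced to one horizontal dimension. The point is the time-step limit: because the shallow-ice equation behaves like a nonlinear diffusion equation, an explicit, flux-conserving scheme must shrink its time step as the diffusivity D grows.

```python
import numpy as np

n = 3                          # Glen flow-law exponent
A = 1e-16                      # flow-law softness, Pa^-3 yr^-1 (assumed)
rho, g = 910.0, 9.81           # ice density, gravity
C = 2 * A * (rho * g) ** n / (n + 2)

dx = 50e3
x = np.arange(0.0, 1500e3 + dx, dx)
H = np.zeros_like(x)           # start with no ice, as in the test
# assumed 1-D analogue of the radially symmetric balance (m/yr)
b = np.minimum(0.5, 1e-5 * (450e3 - np.abs(x - 750e3)))

t = 0.0
while t < 5000.0:              # years; steady state takes far longer
    # diffusivity at cell interfaces: staggered-grid flux differencing
    # of the mass-conserving (Type I) flavour
    dHdx = (H[1:] - H[:-1]) / dx
    Hm = 0.5 * (H[1:] + H[:-1])
    D = C * Hm ** (n + 2) * np.abs(dHdx) ** (n - 1)
    q = -D * dHdx              # ice flux between nodes, m^2/yr
    # explicit stability limit: dt must shrink as D grows -- this is
    # the short-time-step penalty of Type I schemes
    dt = min(10.0, 0.24 * dx ** 2 / (D.max() + 1e-6))
    H[1:-1] += dt * (b[1:-1] - (q[1:] - q[:-1]) / dx)
    H = np.maximum(H, 0.0)     # no negative thickness at the margin
    t += dt
```

A Type II parameterization trades exact interior mass conservation for a less restrictive time step; the intercomparison showed that, at 50 km resolution, the resulting divide thicknesses differ by only a few tens of metres.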

The tests developed by these modelers, normally referred to as the EISMINT (European Ice Sheet Modeling INiTiative) benchmark tests, are an invaluable tool. Both newly developed models and existing models that are being refined can be tested against these benchmarks to expose errors in reasoning or programming.

Sensitivity testing and tuning

Because the parameters used to define boundary conditions, initial conditions, and forcing are rarely known precisely, modelers normally test the sensitivity of their models by varying these parameters within reasonable limits. Suppose, for example, that the most likely temperature boundary condition for a particular model is −5 °C, and suppose further that it is unlikely that the correct boundary temperature is lower than −7 °C or higher than −1 °C. The modeler then might run the model with all three temperatures to see whether the conclusions change when the extreme temperatures are used. If the conclusions are unchanged, the model is said to be robust over a reasonable range of temperature boundary conditions. Such tests are called sensitivity tests.
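The procedure can be sketched as follows. Here `run_model` is a hypothetical stand-in for a full ice sheet model, and the robustness criterion (conclusions within 5% of the most-likely case) is an assumed choice; the source specifies neither.

```python
# Sketch of a sensitivity test: run the same (hypothetical) model with
# the most likely boundary temperature and the two extremes, then check
# whether the conclusion of interest changes.
def run_model(boundary_temp_C):
    # toy stand-in: divide thickness varies weakly with the boundary
    # temperature (illustrative numbers only, not a real model)
    return 3000.0 + 10.0 * boundary_temp_C   # metres

results = {T: run_model(T) for T in (-7.0, -5.0, -1.0)}

# assumed criterion: the conclusion (here, divide thickness) stays
# within 5% of the most-likely case across the extreme temperatures
baseline = results[-5.0]
robust = all(abs(h - baseline) / baseline < 0.05 for h in results.values())
```

If `robust` comes out true, the conclusion does not hinge on the poorly known boundary temperature; if false, the uncertainty in that parameter propagates into the result and must be reported.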

If there are N parameters that are only known approximately, and if all combinations of the maximum likely, minimum likely, and most probable values of the parameters are to be tested, the total number of tests will be 3^N. If N > 3, such a task becomes daunting.

In a similar vein, models are often tuned so that they reproduce observed characteristics of a glacier. For example, in the model of the Barnes Ice Cap temperature profiles discussed above (Equation (11.12)), the surface temperature, θs, under which the profiles were presumed to have developed prior to the most recent warming, and the longitudinal temperature gradient, dθ/dx, were only loosely constrained by field measurements. Thus, the model was tuned by adjusting these parameters until the model profiles matched the lower parts of the measured profiles well. Then step increases of various sizes in θs were tested until the upper parts of the profiles were also modeled reasonably well. Tuning can be viewed either as: (1) a way of solving for unknowns that cannot be evaluated analytically, as mentioned earlier (p. 289), or (2) a necessary step if the model is going to be used to explore the consequences of future changes.
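Tuning of this kind amounts to a search over the loosely constrained parameters for the best fit to the measurements. The sketch below shows the idea as a simple grid search; `model_profile` is a toy stand-in, not Equation (11.12), and the "measured" profile is synthetic, so everything here is illustrative.

```python
import numpy as np

def model_profile(theta_s, grad, depths):
    # toy stand-in for a temperature model: NOT Equation (11.12)
    return theta_s + grad * depths + 1e-5 * depths ** 2

depths = np.linspace(0.0, 300.0, 31)          # m below surface
measured = model_profile(-10.0, 0.012, depths)  # synthetic "data"

# Grid search over the two loosely constrained parameters, keeping the
# pair that minimizes the root-mean-square misfit to the measurements.
best = None
for theta_s in np.arange(-12.0, -8.0, 0.25):
    for grad in np.arange(0.008, 0.016, 0.001):
        resid = model_profile(theta_s, grad, depths) - measured
        misfit = np.sqrt(np.mean(resid ** 2))
        if best is None or misfit < best[0]:
            best = (misfit, theta_s, grad)
```

With synthetic data the search recovers the generating parameters; with real measurements the residual misfit that remains is itself informative, since it bounds how well the model structure can represent the observations.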
