To select a better (or best) parameter set from the feasible parameter space, we must be able to compare and evaluate, in some manner, the model performance associated with different parameter sets. The indicator of model performance is usually taken to be a comparison between the observed (measured) watershed output time series and the corresponding model simulated quantities. In general, particularly for runoff forecasting models, the indicator representing watershed behavior is the sequence of observed streamflow levels at the watershed outlet, although streamflow level data at gauging stations within the watershed or soil moisture data at specific points within the watershed may sometimes also be available. Other models may have multiple outputs that can serve as indicators of model behavior. In the case of watershed hydrochemical models, we may have data on concentrations of various chemical species, and in the case of watershed scale water-and-energy budget models [e.g., Dickinson (1993) and Schaake (1996)], we may have data on surface soil temperature, emitted short- and long-wave radiation, sensible and latent heat fluxes, etc.

The million-dollar question is: What method should be used to compare the model-simulated streamflow values and the observed streamflow data? A visual comparison of the two time series plotted together on the same graph is intuitively appealing (Fig. 2) but is made difficult by the large number (say n) of time steps at which the simulated streamflow values must be compared to the observed data. If E = {e_t = s_t − o_t, t = 1, ..., n} represents the vector of differences between each simulated flow (s_t) and its corresponding observed data value (o_t), the method of visual comparison will involve adjusting the parameters to simultaneously make each one of the e_t differences as small as possible. Because this approach is subjective, different hydrologists will tend to judge different model-simulated time series (and their associated parameter sets) as being better. Further, while it may be relatively simple to decide that a certain approximate region of the parameter space gives better simulations than some other regions of the parameter space, it can be very difficult to narrow down the choice (see section below on parameter adjustment). The method of visual comparison is also difficult if not impossible to automate.
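The residual vector e_t = s_t − o_t described above can be sketched in a few lines. This is a minimal illustration with hypothetical flow values (not real gauge data):

```python
# Sketch of the residual vector e_t = s_t - o_t.
# The series below are hypothetical daily flows, for illustration only.

simulated = [12.0, 15.5, 14.2, 10.8, 9.9]   # model output s_t
observed = [11.4, 16.1, 13.0, 11.2, 10.5]   # gauged flow o_t

# One residual per time step; calibration tries to make all of these small.
residuals = [s - o for s, o in zip(simulated, observed)]
print(residuals)
```

Judging such a vector "by eye" becomes impractical as n grows, which motivates the scalar measures discussed next.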

Figure 2 Plot of observed flow (■) vs. model simulated flow (-). Values are in the transformed space (to better observe behavior over the full range of flows), where transformed flow = [(flow + 1)^λ − 1]/λ and λ = 0.3.

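The transformation in the Figure 2 caption is a Box-Cox style power transform that compresses high flows so that low-flow behavior is visible on the same axis. A minimal sketch, assuming the reconstructed form [(flow + 1)^λ − 1]/λ with λ = 0.3:

```python
# Sketch of the flow transformation used for Fig. 2 (Box-Cox style),
# assuming transformed flow = ((flow + 1)**lam - 1) / lam with lam = 0.3.
lam = 0.3

def transform(flow):
    """Compress the flow scale so low and high flows plot comparably."""
    return ((flow + 1.0) ** lam - 1.0) / lam

print(transform(0.0))    # zero flow maps to zero
print(transform(100.0))  # high flows are strongly compressed
```

Note how a hundredfold range of flow collapses to roughly one decade in the transformed space, which is what makes the full range of behavior observable in a single plot.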
An alternative to visual comparison is to define a mathematical measure of the "size" of the vector E. However, there are an infinite number of ways in which this can be done. The most popular implementation is to compute a scalar measure of the average size of the differences, such as the mean-squared error [MSE = mean(e_t^2, t = 1, ..., n)] or the mean absolute error [MAE = mean(|e_t|, t = 1, ..., n)]. A number of different such measures, which are commonly called "objective" functions, have been suggested in the literature; Table 1 lists many of the measures used by the U.S. National Weather Service for calibration of its flood forecast models. A related approach is to treat the residuals as though they have stochastic properties and belong to some preassumed probability distribution, usually assumed to be Gaussian. Under this assumption, it is possible to develop maximum-likelihood (ML) measures having theoretical underpinnings; for example, the heteroscedastic maximum-likelihood estimator [HMLE = mean(w_t e_t^2, t = 1, ..., n); w_t = f_t^{2(λ−1)}, where f_t is the expected (observed) flow at time t and λ is a parameter to be estimated] criterion developed by Sorooshian (1980) assumes that the residuals are Gaussian, uncorrelated, unbiased, and have variances that change with flow level (heteroscedastic).
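The scalar measures just defined can be sketched directly. This is a minimal illustration, with the HMLE weights following the reconstructed form w_t = f_t^{2(λ−1)} and f_t taken here as the observed flow (an assumption; Sorooshian's formulation allows other choices of expected flow):

```python
def mse(sim, obs):
    # mean-squared error: mean(e_t^2)
    return sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)

def mae(sim, obs):
    # mean absolute error: mean(|e_t|)
    return sum(abs(s - o) for s, o in zip(sim, obs)) / len(obs)

def hmle(sim, obs, lam):
    # heteroscedastic ML estimator: mean(w_t * e_t^2),
    # with weights w_t = f_t^(2*(lam - 1)); f_t is taken here as the
    # observed flow, and lam is a parameter to be estimated.
    weights = [o ** (2.0 * (lam - 1.0)) for o in obs]
    return sum(w * (s - o) ** 2
               for w, s, o in zip(weights, sim, obs)) / len(obs)
```

With λ = 1 the weights are all 1 and HMLE reduces to MSE; values of λ < 1 down-weight the squared errors at high flows, compensating for error variance that grows with flow level.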

TABLE 1 Objective Functions Used by the National Weather Service for Calibration of the SAC-SMA Model

Name     Description                                 Objective
DRMS     Daily root-mean-squared error               Minimize w.r.t. θ
TMVOL    Total mean monthly volume-squared error     Minimize w.r.t. θ
ABSERR   Mean absolute error                         Minimize w.r.t. θ
ABSMAX   Maximum absolute error                      Minimize w.r.t. θ
NSE      Nash-Sutcliffe measure                      Minimize w.r.t. θ
BIAS     Bias (mean daily error)                     Minimize w.r.t. θ
PDIFF    Peak difference                             Minimize w.r.t. θ
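Several of the Table 1 measures can be sketched from their conventional definitions (the exact NWS formulations may differ in detail, so treat these as illustrative):

```python
import math

def drms(sim, obs):
    # daily root-mean-squared error
    n = len(obs)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / n)

def absmax(sim, obs):
    # maximum absolute error over the record
    return max(abs(s - o) for s, o in zip(sim, obs))

def bias(sim, obs):
    # mean daily error, with sign retained
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def pdiff(sim, obs):
    # difference between simulated and observed peak flows
    return max(sim) - max(obs)
```

Note that the measures emphasize different aspects of fit: DRMS penalizes large errors heavily, ABSMAX looks only at the single worst time step, BIAS can be near zero even when individual errors are large (positive and negative errors cancel), and PDIFF focuses solely on peak reproduction.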
