SSQ(β0, β1, β2, h) = Σ_{j=1}^{n−h} [ y(j) − β0 − β1·x̃(j) − β2·x̃(j)² ]²,

where ν = n − h − 3 (degrees of freedom) and x̃ is interpolated x (Fig. 8.10). The linear model has no β2 term and ν = n − h − 2. The step size of the brute-force search for H was 5 a.

The resulting lag estimate is H = −1.3 ka, that is, a lag of CO2 variations behind temperature variations. The resulting persistence time is τ̂ = 0.92 ka. The reduced least-squares sum as a function of H is shown in Fig. 8.11, and the resulting parabolic fit in Fig. 8.12.
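
A minimal sketch of such a brute-force lag search (assuming numpy; the function and variable names are hypothetical, linear interpolation is used for x̃, and the sign convention for h is an assumption):

```python
import numpy as np

def brute_force_lag(tx, x, ty, y, lags):
    """For each trial lag h, interpolate x to the shifted y-times,
    fit a parabola by least squares and record the reduced sum of
    squares SSQ/nu, with nu = (number of fitted points) - 3."""
    red_ssq = np.full(len(lags), np.inf)
    for k, h in enumerate(lags):
        t_shift = ty - h                      # shift y-times by trial lag
        inside = (t_shift >= tx[0]) & (t_shift <= tx[-1])
        if inside.sum() < 4:                  # need nu >= 1
            continue
        x_int = np.interp(t_shift[inside], tx, x)   # x-tilde (tx increasing)
        coef = np.polyfit(x_int, y[inside], 2)      # parabolic model
        resid = y[inside] - np.polyval(coef, x_int)
        red_ssq[k] = np.sum(resid**2) / (inside.sum() - 3)
    return lags[np.argmin(red_ssq)], red_ssq

# Example grid with 5-a (0.005-ka) step size:
# lags = np.arange(-5.0, 5.0 + 0.005, 0.005)
```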

Figure 8.11. Vostok deuterium and CO2, reduced sum of squares. The minimum (i.e., lag estimate) is at H = -1.3 ka. (After Mudelsee 2001b.)

Both predictor and response (x, y) exhibit measurement and proxy errors, and both timescales (t_X, t_Y) show dating uncertainties. These four error sources propagate into the estimation standard error of the lag, se_Ĥ. Mudelsee (2001b) determined se_Ĥ by means of a parametric surrogate data approach (Algorithm 8.2).

The first error source (x) was simulated (Mudelsee 2001b) as x*(i) = x(i) + xnoise(i), (8.23)

where xnoise(i) is a realization of a Gaussian AR(1) process with standard deviation s_X = 1.0‰ (Petit et al. 1999) and persistence time τ_x = 2.1 ka (Chapter 2).

The second error source (y) was simulated analogously as y*(i) = y(i) + ynoise(i), (8.24)

where the noise process had a standard deviation s_Y = 2.5 ppmv (Petit et al. 1999) and persistence time τ_y = τ̂ = 0.92 ka.
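
Both noise sources are Gaussian AR(1) processes on unevenly spaced timescales. A minimal simulation sketch, assuming the exponential autocorrelation model a(i) = exp(−(t(i) − t(i−1))/τ) of Chapter 2 (names are hypothetical):

```python
import numpy as np

def ar1_noise(t, tau, sd, rng):
    """Gaussian AR(1) noise on the (unevenly spaced) timescale t,
    with persistence time tau and marginal standard deviation sd."""
    n = len(t)
    eps = np.empty(n)
    eps[0] = rng.normal()
    for i in range(1, n):
        a = np.exp(-(t[i] - t[i - 1]) / tau)   # lag-one autocorrelation
        eps[i] = a * eps[i - 1] + np.sqrt(1.0 - a * a) * rng.normal()
    return sd * eps

# rng = np.random.default_rng(42)
# x_star = x + ar1_noise(tx, tau=2.1, sd=1.0, rng=rng)   # Eq. (8.23)
# y_star = y + ar1_noise(ty, tau=0.92, sd=2.5, rng=rng)  # Eq. (8.24)
```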

The third error source (t_X) was simulated (Mudelsee 2001b) using the depth points of the ice core samples (Petit et al. 1999) and a non-parametric fit of the "sedimentation rate" (Fig. 4.13).

Figure 8.12. Vostok deuterium and CO2, parabolic fit. Data points are lagged CO2 (H = −1.3 ka) against interpolated δD (n − h = 280). The fit line is given by y = −1482 − 9.05x − 0.012x². (After Mudelsee 2001b.)

Step 1  Data, {t_X(i), x(i)} and {t_Y(j), y(j)}_{j=1}^{n−h}

Step 2  Lag estimate, Ĥ, via minimization of SSQGV(β, h, τ_y)

Step 3  Simulated time series (b, counter), {t*_X^b(i), x*^b(i)} and {t*_Y^b(j), y*^b(j)}

Step 4  Replication, Ĥ*b

Step 5  Go to Step 3 until b = B (usually B = 2000) replications exist, {Ĥ*b}_{b=1}^{B}

Step 6  Calculate standard error and CIs

Algorithm 8.2. Determination of bootstrap standard error and construction of CIs for the lag estimate in lagged regression, surrogate data approach (Sections 3.3.3 and 3.4). The algorithm is applicable also to estimation techniques other than SSQGV minimization (Step 2).
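
In code, the outer loop of Algorithm 8.2 might look as follows (a sketch assuming numpy; simulate_series and estimate_lag are hypothetical user-supplied functions that implement Step 3, the simulation of all four error sources, and Step 2, the SSQ minimization, respectively):

```python
import numpy as np

def bootstrap_lag_se(simulate_series, estimate_lag, B=2000, seed=0):
    """Surrogate-data bootstrap standard error and percentile CI
    for the lag estimate (Steps 3 to 6 of Algorithm 8.2)."""
    rng = np.random.default_rng(seed)
    h_star = np.empty(B)
    for b in range(B):                            # Steps 3-5
        tx_s, x_s, ty_s, y_s = simulate_series(rng)
        h_star[b] = estimate_lag(tx_s, x_s, ty_s, y_s)
    se = h_star.std(ddof=1)                       # Step 6: standard error
    ci = np.percentile(h_star, [2.5, 97.5])       # simple 95% percentile CI
    return se, ci
```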

The simulated sedimentation rate was obtained parametrically (Mudelsee 2001b) by imposing a relative, Gaussian error of 1.2%. The simulated sedimentation rate, combined with the depth points, resulted in a simulated timescale (Section 4.1.7). In a final step, the simulated timescale was randomly compressed or expanded to fit into the GT4 timescale error range (Petit et al. 1999), which is < 5 ka for the last 110 ka, < 10 ka for "most of the record" (interpreted as 110-300 ka by Mudelsee (2001b)) and < 15 ka in the early part.

Figure 8.13. Vostok deuterium and CO2, sensitivity study of lag estimation error (se_Ĥ as a function of σ_ice-gas).

The fourth error source (t_Y) was simulated on the basis of the simulated ice ages (t*_X). The additional error contribution comes from the uncertainty in the ice-gas age difference,

t*_Y(j) = t*_X(j) + σ_ice-gas · Z(j),   (8.25)

where Z(j) is standard Gaussian noise. Petit et al. (1999: p. 434 therein) reported σ_ice-gas to be 1 ka or more.

The surrogate data approach yielded (Mudelsee 2001b) se_Ĥ = 1.0 ka. To restate the lag estimation result: temperature variations occurred 1.3 ± 1.0 ka before CO2 variations.

The crucial point for achieving such a small estimation error is that x and y were measured on the same core (Vostok). This means a rather close coupling of t*_X and t*_Y (Eq. 8.25); only the uncertainty in the ice-gas age difference weakens the coupling. Had CO2 been measured on a core from a different site, no coupling would exist and t*_X and t*_Y would have had to be simulated independently of each other, leading to a clearly larger lag estimation error than in the present case.

The lag estimation result underlines the importance of the uncertainty, σ_ice-gas, in the ice-gas age difference, which contributes nearly 100% to the lag estimation error of 1.0 ka. A sensitivity study (Fig. 8.13) quantifies this contribution over a range of prescribed σ_ice-gas values. For example, in the case of σ_ice-gas = 0.2 ka the lag estimation error would be se_Ĥ = 0.27 ka. In the case of a perfectly known difference (σ_ice-gas equal to zero), the remaining error sources would propagate into se_Ĥ = 0.13 ka.

As regards causal explanations of the late Pleistocene glacial cycles, Mudelsee (2001b) noted that Vostok's air temperature (δD) represents, at best, the southern hemisphere and that there exists a time lag of its variations relative to the northern hemisphere (Blunier et al. 1998). However, the complexity of the ice-age climate may be better understood, that is, the set of feasible causal scenarios (Broecker and Henderson 1998: Table 1 therein) further constrained, with the help of quantified phase relations.

8.7 Background material

OLSBC estimation of the slope has also been denoted as attenuation-corrected OLS (ACOLS) estimation (Ammann et al. 2009).

The method of moments estimator of the standard deviation of the predictor noise in the case of homoscedasticity, S_X, is (Fuller 1987: Eq. (1.3.10) therein):

Ŝ²_X = (2δ)⁻¹ { m_YY + δ·m_XX − [ (m_YY − δ·m_XX)² + 4δ·m²_XY ]^(1/2) },   (8.26)

where δ = S²_Y/S²_X is the (assumed known) ratio of the noise variances and the moments are

m_XX = Σ_{i=1}^{n} [x(i) − x̄]²/(n − 1),   (8.27)

m_YY = Σ_{i=1}^{n} [y(i) − ȳ]²/(n − 1),   (8.28)

m_XY = Σ_{i=1}^{n} [x(i) − x̄]·[y(i) − ȳ]/(n − 1).   (8.29)
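
A direct transcription of this estimator (a sketch assuming numpy and a known variance ratio δ; the clipping at zero guards against negative variance estimates in small samples):

```python
import numpy as np

def s_x_hat(x, y, delta):
    """Method of moments estimate of the predictor noise standard
    deviation S_X (Fuller 1987: Eq. 1.3.10), given the known noise
    variance ratio delta = S_Y**2 / S_X**2."""
    n = len(x)
    m_xx = np.sum((x - x.mean())**2) / (n - 1)
    m_yy = np.sum((y - y.mean())**2) / (n - 1)
    m_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)
    root = np.sqrt((m_yy - delta * m_xx)**2 + 4.0 * delta * m_xy**2)
    s2 = (m_yy + delta * m_xx - root) / (2.0 * delta)
    return np.sqrt(max(s2, 0.0))
```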

WLSXY estimation of a linear relationship between two variables that are both subject to error was studied, and given a geometric interpretation, as early as the late nineteenth and early twentieth centuries (Adcock 1877, 1878; Pearson 1901); see also Wald (1940) and Fuller (1987, 1999). The method of fitting a hyperplane to data with errors in all coordinates (possibly more than two) is also denoted as total least squares (Nievergelt 1998).
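
For the special case of equal error variances in both coordinates, the total least squares line minimizes the orthogonal distances and can be computed via the singular value decomposition. A minimal sketch (assuming numpy; names are hypothetical):

```python
import numpy as np

def tls_line(x, y):
    """Total least squares (orthogonal regression) straight line:
    the fitted direction is the leading right singular vector of
    the centred data matrix; errors in x and y are treated alike."""
    A = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    dx, dy = vt[0]                          # direction of largest variance
    slope = dy / dx                         # assumes a non-vertical line
    intercept = y.mean() - slope * x.mean()
    return intercept, slope
```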

Non-Gaussian, heteroscedastic noise components in the linear errors-in-variables regression model can be taken into account in the estimation using GLS, that is, using the covariance matrix, analogously to Section 4.1.2. In practical applications to climatological problems, where the covariance matrix is unknown and has to be estimated, an iterative procedure may be used. Fuller (1987: Section 3.1 therein) describes GLS estimation for serially independent noise components and gives a result (standard errors of parameters) that is valid for large data sizes. He advises considering the development of a model for the error structure if the data size is small. However, it is not clear whether such a classical approach to parameter error determination can also be applied to serially dependent noise components.

Correlated noise components in the linear errors-in-variables regression model can be taken into account. York (1969) adapts a least-squares criterion to recognize correlation between Xnoise(i) and Ynoise(i) and gives an example from radiometric dating. Freedman (1984) and Freedman and Peters (1984) present two-stage regression with bootstrap resampling as a method to treat a correlation between Xnoise(i) and Y(i). Fuller (1987: Section 3.4 therein) presents a transformation for dealing with correlation between Xnoise(i) and X(i).

Multiplicative measurement error may occur in the form of X(i) = Xtrue(i) · X'noise(i), where the primed noise component is dimensionless. Carroll et al. (2006: Section 4.5 therein) mention transformation methods that may be applied in this case.

Nonlinear errors-in-variables models can be estimated on the basis of several assumptions about the model and the noise properties, by using numerical techniques for solving the maximum likelihood or least-squares optimizations (Fuller 1987: Section 3.3 therein). A more recent book (Carroll et al. 2006) gives further details.

The pairwise-MBBres algorithm from Section 8.2 responds to the "quite nonstandard" (Hall and Ma 2007: p. 2621 therein) situation, where neither the true predictor variable, Xtrue(i), nor the errors, Xnoise(i), "can be directly accessed." Previously, Efron and Tibshirani (1993: Section 9.5 therein) and Davison and Hinkley (1997: Section 6.2.4 therein) considered that pairwise bootstrap resampling is applicable to errors-in-variables regression problems. Linder and Babu (1994) presented another alternative to the simple pairwise resampling. These authors scaled the residuals in both dimensions (X, Y) and resampled independently from both sets. They analysed maximum likelihood estimation with known standard deviation ratio and tested the accuracy of bootstrap CIs (percentile and Student's t) by means of Monte Carlo experiments, finding acceptable levels of accuracy. This was confirmed in an additional simulation study of slope estimation (Musekiwa 2005) with small data sizes (n = 20, 30). It would be interesting to investigate further the approach of Linder and Babu (1994), adapted to the climatologically more realistic situation where the standard deviation ratio is not exactly known and the errors exhibit serial dependence.
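
The basic idea of pairwise block resampling can be sketched as follows (an illustration only, not the pairwise-MBBres algorithm itself; assuming numpy, with hypothetical names):

```python
import numpy as np

def pairwise_mbb_sample(x, y, block_len, rng):
    """One pairwise moving block bootstrap resample: (x, y) pairs are
    drawn in blocks of length block_len, which preserves the pairing
    (and hence any error correlation within a pair) as well as the
    short-range serial dependence."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    idx = np.concatenate(
        [np.arange(s, s + block_len) for s in starts])[:n]
    return x[idx], y[idx]
```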

The approach of the RMSE to finite values or, equivalently, the absence of shrinking CIs as n → ∞, was verified for slope estimation and falsified for intercept estimation (Section 8.3.3). Previously, Booth and Hall (1993) had found a non-shrinking bootstrap confidence band in a Monte Carlo experiment on prediction (Section 8.5), ŷ(n + 1) = β̂0 + β̂1·x(n + 1). Thus, it appears that this observation (Booth and Hall 1993) has its origin in the non-shrinking of RMSE_β̂1.

The simulation-extrapolation (SIMEX) algorithm (Carroll et al. 2006: Chapter 5 therein) is a bias correction method based on Monte Carlo simulations. The idea is to add artificial measurement error to the data and to study the dependence of an estimate (say, β̂1) on the size of the artificial error. Extrapolation to zero error size should, so the idea goes, provide an unbiased estimate.
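
A minimal SIMEX sketch for slope estimation (assuming numpy, a known predictor noise standard deviation sx and quadratic extrapolation; the λ grid and B are hypothetical tuning choices):

```python
import numpy as np

def simex_slope(x, y, sx, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """SIMEX: add extra noise of variance lam * sx**2 to x, track the
    mean naive OLS slope as a function of lam, then extrapolate the
    fitted quadratic to lam = -1 (zero total measurement error)."""
    rng = np.random.default_rng(seed)
    lams = [0.0]
    slopes = [np.polyfit(x, y, 1)[0]]        # naive estimate at lam = 0
    for lam in lambdas:
        b = [np.polyfit(x + rng.normal(0.0, np.sqrt(lam) * sx, len(x)),
                        y, 1)[0] for _ in range(B)]
        lams.append(lam)
        slopes.append(np.mean(b))
    quad = np.polyfit(lams, slopes, 2)       # extrapolation curve
    return np.polyval(quad, -1.0)            # SIMEX slope estimate
```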

Prediction and forecasting by means of regression and other models are reviewed by Ledolter (1986). The success of prediction depends, of course, on the validity of the regression model and the absence of disturbing "latent" variables (Box 1966). A physical theory behind the model is a guard against wrong conclusions based on such disturbances. For example, radiation physics and meteorology support the concept of climate sensitivity (estimated by means of a regression of changes in temperature on changes in radiative forcing, see Section 8.4) and refute claims that time acts as a latent variable. On the other hand, a regression of annual temperature on the annual output of scientific papers on global warming over the past, say, 150 years, would find a strong relation (highly significant slope); however, the relation is spurious because the latent variable time is acting and no theory exists that supports the model.

Lagged regression as presented in Section 8.6 (that is, with a single lag parameter, h) is a special case of rational distributed lag models (Koyck 1954; Dhrymes 1981; Doran 1983; Pankratz 1991), where

Y(i) = β0 + β1,0·X(i) + β1,1·X(i + 1) + β1,2·X(i + 2) + ··· + S_Y·Ynoise(i).

The sequence {β1,0, β1,1, β1,2, . . .} is called the impulse response function. Note that the equation ignores errors in the predictor. Fitting such models may be performed using maximum likelihood (Dhrymes 1981) or frequency-domain techniques (Hannan and Robinson 1973; Hannan and Thomson 1974; Hamon and Hannan 1974; Foutz 1980). This technique is frequently applied in econometrics. One of the rare applications to climatology is the work by Tol and de Vos (1993), who estimated a lagged regression of annual-mean temperature, 1951-1979, on atmospheric CO2 concentration. Insofar as climate is a dispersive system, in which the response of one variable to the input of another is frequency-dependent, it should be worthwhile to develop further such models and fitting techniques that take into account the typical properties of paleoclimatic series (measurement errors, uneven time spacing and uncertain timescales).
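
For a version of this model truncated at a maximum index offset, the impulse response coefficients can be estimated by ordinary least squares. A sketch (assuming numpy and the index convention of the displayed equation; predictor errors and serial dependence are ignored, as noted above):

```python
import numpy as np

def fit_distributed_lag(x, y, max_lag):
    """OLS fit of the truncated model
    Y(i) = beta0 + sum_{j=0}^{max_lag} beta_{1,j} X(i + j) + noise.
    Returns the intercept and the impulse response coefficients."""
    n = len(y)
    m = n - max_lag                                   # usable equations
    A = np.column_stack(
        [np.ones(m)] + [x[j:m + j] for j in range(max_lag + 1)])
    beta, *_ = np.linalg.lstsq(A, y[:m], rcond=None)
    return beta[0], beta[1:]                          # beta0, {beta_{1,j}}
```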

The effective climate sensitivity is usually denoted as λ⁻¹. Various estimation approaches have been carried out; Table 8.8 gives a short overview. The approach via the heat capacity (Schwartz 2007) opened an interesting exchange of arguments in the Journal of Geophysical Research. Let C denote the effective heat capacity (change in heat per change in temperature) per unit area that is coupled to the relevant timescale of a perturbation (i.e., years to decades). The perturbation regards the radiative balance of the Earth (change in forcing). Schwartz (2007) estimated C (with standard error) to be 17 ± 7 W a m⁻² K⁻¹. The C value reflects mainly the upper part of the ocean, which can take up heat on the discussed timescale of (anthropogenically enhanced) radiative perturbations. The simple equation, τ = C · λ⁻¹, (8.34)

describes the time span (relaxation time, τ) over which a radiative perturbation (C) has an effect on the temperature (λ⁻¹). Schwartz (2007) estimated τ by fitting an AR(1) model (Chapter 2) to observational data. The criticism of this approach (Foster et al. 2008; Knutti et al. 2008; Scafetta 2008) centred on the AR(1) model being over-simplistic and on estimation bias. In his reply, Schwartz (2008) kept the AR(1) model but conceded τ to be larger (8.5 ± 2.5 a) than in his original contribution (5 ± 1 a). The revised estimate for τ leads to the entry in Table 8.8.
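
Equation (8.34) allows a quick consistency check of the Schwartz (2008) entry in Table 8.8:

λ⁻¹ = τ/C = (8.5 a)/(17 W a m⁻² K⁻¹) = 0.50 K W⁻¹ m²,

which agrees, up to rounding of τ and C, with the tabulated value of 0.51 K W⁻¹ m².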

Table 8.8. Estimates of the effective climate sensitivity.

λ⁻¹ estimate^a (K W⁻¹ m²)   Approach                                       Reference

0.29 [0.05; 0.53]^b,c       Direct observations, 2000-2006                 Chylek et al. (2007)
0.48 [0.24; 0.72]^b,d       Direct observations, 2000-2006                 Chylek et al. (2007)
0.51 [-0.01; 1.03]^b        Physical principles (heat capacity)            Schwartz (2008)
0.65 [0.09; 1.21]^b         Thermodynamical model                          Scafetta and West (2007)
0.70 [0.38; 1.55]^e         Climate model and observations, 1000-2001      Hegerl et al. (2006)
…                           Direct observations, 1850-2001                 This book
…^f                         Direct observations, 1861-1900 and 1957-1994   Gregory et al. (2002)

^b CI calculated as ±2 standard error interval. ^c Ignoring ocean heat uptake. ^d Assuming strong ocean heat uptake.

^e Calculated from originally estimated equilibrium climate sensitivity. ^f Median given instead of estimate.

The leads and lags of carbon dioxide variations relative to those of temperature in the Pleistocene have been studied by several researchers on time series from Antarctic ice cores. Prior to Mudelsee (2001b), who estimated H = −1.3 ± 1.0 ka (a lag of CO2), the original authors of the 0-420 ka Vostok paper (Petit et al. 1999) had found, seemingly by visual inspection, that CO2 decreases lag behind temperature decreases by several ka. Cuffey and Vimeux (2001: p. 523 therein) believed that the lag, "especially during the onset of the last glaciation, about 120 ka ago," is largely an "artefact caused by variations of climate in the water vapour source regions." They presented model simulations that correct for this effect and lead to H ≈ 0 ka using the Vostok data, 0-150 ka and 150-350 ka. Subsequently, Monnin et al. (2001) analysed high-resolution records (d ≈ 0.18 ka for CO2) from the EPICA Dome C site over the interval 9-22 ka by means of an explorative technique ("correlation maximization") similar to the brute-force search (Section 8.6.1). They obtained an estimate of H = −0.41 ka, which was assessed as not significant owing to the uncertainty of the ice-gas age difference. Caillon et al. (2003) revisited the Vostok ice core, inspected the time interval around Termination III (230-255 ka) and took argon isotopes instead of deuterium as proxy for temperature changes. The motivation for performing the new measurements was the idea that the poorer proxy quality of argon isotopes would be more than compensated by the smaller timescale uncertainties: since argon, like CO2, is contained in the enclosed air bubbles in the ice, no uncertainty of the ice-gas age difference influences lag estimation (Eq. 8.25). The result of correlation maximization (Caillon et al. 2003) was a lag of CO2, H = −0.8 ± 0.2 ka, where the error bar value is based on an assessment of the uncertainty of the ice accumulation (but not on an additional consideration of the proxy errors). The "EPICA challenge" (Wolff et al. 2005), issued to paleoclimatologists, was to predict the CO2 concentration for the interval 420-740 ka on the basis of the then available EPICA Dome C deuterium (temperature) and dust records (EPICA community members 2004). The simple model, lagged regression of Vostok CO2 on EPICA deuterium (temperature), calibrated on the 0-420 ka records, did not produce the worst of the eight predictions, as was found when the EPICA Dome C CO2 series became known. The complete interval back to 800 ka from the EPICA ice core archive of past changes in temperature (Jouzel et al. 2007) and CO2 (Siegenthaler et al. 2005; Lüthi et al. 2008) enables one also to analyse temporal changes of the lag, H, and to test concepts that the ice-age climate relationships changed for a while after a glacial termination (Raynaud et al. 1993).

To summarize, the overall lag of CO2 variations behind temperature variations during the late Pleistocene appears significant. This is a quantitative basis for developing and testing concepts about causes and effects of long-term climate changes (Broecker and Henderson 1998; Saltzman 2002), about how the external astronomical forcing (Milankovitch cycles) propagates into the geographic regions, and how the climate variables respond. Further refining the ice-age theory is currently an active research field (Kawamura et al. 2007; Huybers and Denton 2008; Wolff et al. 2009).
It is important to note that the characteristic timescales on which the analysed Pleistocene climate changes occurred are relatively long: the average spacing (d̄), the estimated lag (Ĥ) and its estimation error (se_Ĥ) are between several hundred and a few thousand years. The late Pleistocene lag estimates are therefore hardly relevant as regards concepts about the ongoing climate change, which has been anthropogenically enhanced for, say, the past 150 years. This recent change is considerably faster than the late Pleistocene changes, it leads to CO2 levels not experienced during at least the past 800 ka and it affects other physical processes. The consideration from physics and meteorology that the recent change has a positive time lag (CO2 rise before temperature rise) is not contradicted by the finding that the late Pleistocene change had a negative time lag. The scientifically interesting question is whether humans are able to put a (temporary) end to the succession of glacials and interglacials (Berger and Loutre 2002).

Errors-in-variables regression models for climatology have not often been formulated in an explicit manner in the research literature. Allen and Stott (2003) showed theoretically the importance of an unbiased slope estimation for linear models that relate temperature changes predicted by an AOGCM to observed temperature changes. Hegerl et al. (2007a) studied in that manner the proxy calibration used to reconstruct 30-90°N mean annual land-surface temperature for the past 1500 years. Kwon et al. (2002) fitted a nonlinear model to dating samples, i = 1, . . . , n. They used five paired samples of Y(i) = 40Ar/39Ar ratio and X(i) = reference age, observed with small, homoscedastic (S_X, S_Y), mutually independent measurement errors of assumed Gaussian shape. The estimation parameters were λ40K (decay constant of 40K) and t_fcs (age of the Fish Canyon sanidine age standard). Kwon et al. (2002) derived maximum likelihood estimators and analysed bootstrap standard errors based on the surrogate data approach. Their Monte Carlo study showed that the estimates do not strongly depend on the Gaussian assumption. The result, λ40K = (5.4755 ± 0.0170) × 10⁻¹⁰ a⁻¹, leads to a half-life estimate (Section 1.6) of T_1/2 = ln(2)/λ40K = 1.266 ± 0.004 Ga. Bloomfield et al. (1996) made a multivariate nonlinear regression of daily tropospheric ozone concentration in the Chicago metropolitan area, 1981-1991, on a variety of predictors, including temperature, wind speed and relative humidity. The interesting point in the context of this chapter is that lagged predictors (H prescribed as 1 or 2 days) were also included. Bloomfield et al. (1996) used GLS estimation (Gallant 1987: Sections 2.1 and 2.2 therein) and obtained parameter standard errors by means of jackknife resampling (Section 3.8), which takes serial dependence into account.
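
As a quick check of the Kwon et al. (2002) numbers, first-order error propagation for the half-life gives

T_1/2 = ln(2)/λ40K = 0.6931/(5.4755 × 10⁻¹⁰ a⁻¹) ≈ 1.266 × 10⁹ a,

se(T_1/2) ≈ T_1/2 · se(λ40K)/λ40K = 1.266 Ga · (0.0170/5.4755) ≈ 0.004 Ga,

in agreement with the quoted 1.266 ± 0.004 Ga.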

8.8 Technical issues

WLSXY minimization of SSQWXY(β0, β1) is numerically difficult because the slope, β1, appears in the denominator of the least-squares sum (Eq. 8.8). The routine Fitexy (Press et al. 1992) parameterizes the slope by the angle φ = tan⁻¹(β1), scales the y values and uses Brent's search (Section 4.5) with a starting value for the slope from an initial OLS estimation. (A more recent Numerical Recipes edition is Press et al. (2007).) Papers on the way from the work of Deming (1943) and York (1966) to the routine Fitexy include Reed (1989, 1992) and Squire (1990). This book follows those authors in the usage of WLSXY for estimation, but not for parameter error determination; for that purpose it uses bootstrap resampling instead. Extensions of WLSXY to nonlinear regression problems (nonlinear in the parameters) were considered by Jefferys (1980, 1981) and Lybanon (1984). A review of least-squares fitting when both variables are subject to error (Macdonald and Thompson 1992) studied other weighting techniques besides WLSXY. It appears that a generalized least-squares estimation method for the case of (1) serial correlations in both X and Y errors and (2) correlation between X and Y errors, supported by a proof of optimality (in terms of, say, RMSE) under the Gaussian distributional assumption, has not yet been developed.
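
A simplified sketch of this angle parameterization (assuming numpy and scipy; this is not the Numerical Recipes Fitexy routine, and its y-scaling step is omitted):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def wlsxy_fit(x, y, sx, sy):
    """Straight-line WLSXY fit: minimize
    sum [y - b0 - b1*x]^2 / (sy^2 + b1^2 * sx^2)
    over the slope, parameterized as b1 = tan(phi) so that the
    one-dimensional search runs over a bounded interval."""
    def ssq(phi):
        b1 = np.tan(phi)
        w = 1.0 / (sy**2 + b1**2 * sx**2)            # point weights
        b0 = np.sum(w * (y - b1 * x)) / np.sum(w)    # optimal intercept
        return np.sum(w * (y - b0 - b1 * x)**2)
    res = minimize_scalar(ssq, bounds=(-np.pi / 2 + 1e-6,
                                       np.pi / 2 - 1e-6), method='bounded')
    b1 = np.tan(res.x)
    w = 1.0 / (sy**2 + b1**2 * sx**2)
    b0 = np.sum(w * (y - b1 * x)) / np.sum(w)
    return b0, b1
```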

LEIV1 is another Fortran implementation of WLSXY estimation (York 1966), available at http://lib.stat.edu/multi/leiv1 (26 October 2009).

LEIV3 is a Fortran software for maximum likelihood fitting of linear errors-in-variables models with heteroscedastic noise components (Ripley and Thompson 1987), available at http://lib.stat.edu/multi/leiv3 (26 October 2009).

Bootstrap software for errors-in-variables regression is not abundant. Carroll et al. (2006) and Hardin et al. (2003) mention Stata software, available from the site http://www.stat.tamu.edu/~carroll (26 October 2009). Software for block bootstrap resampling seems to be unavailable, which limits the ability to study errors-in-variables regression with autocorrelated noise components.

Part IV Outlook
