

Fig. 127. Temperature anomalies calculated from the surface temperature monitored in Prague-Sporilov during the period 1994-2001.

Fig. 128. Histogram of occurrences of different temperature anomalies in the time series shown in the previous figure. The Gaussian distribution is shown for reference.

in cold seasons. The probability distribution of the temperature anomalies (Figure 128) is skewed5 to the right (the moment coefficient of skewness is equal to 1.59). The heavier ("fat") right tail indicates that warm extremes are more frequent than cold extremes. This finding agrees well with the knowledge obtained at larger scales of aggregation. An infrequent occurrence of cold extremes in daily temperatures in the last two decades, relative to warm extremes, was reported both for local and for global temperatures (Jones et al., 1999; Rebetez, 2001). The data presented here confirm this fact for higher frequency variations down to the 6-h aggregation level. Note that the temperature anomaly distribution is also more peaked6 than the normal distribution (the kurtosis is equal to 9.55).
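The two summary statistics quoted above can be computed directly from an anomaly series. A minimal sketch follows (the function names and the synthetic right-skewed sample are our own assumptions, not the Sporilov data); it evaluates the moment coefficient of skewness and the excess kurtosis defined in the footnotes:

```python
import numpy as np

def moment_skewness(x):
    """Moment coefficient of skewness: the third standardized moment."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d**3) / d.std()**3

def excess_kurtosis(x):
    """Kurtosis relative to the normal distribution (normal -> 0)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d**4) / d.std()**4 - 3.0

# Synthetic right-skewed, zero-mean stand-in for an anomaly record
rng = np.random.default_rng(0)
anomalies = rng.exponential(size=200_000) - 1.0
print(moment_skewness(anomalies))   # ~2 for an exponential sample
print(excess_kurtosis(anomalies))   # ~6 for an exponential sample
```

A Gaussian sample would give values near zero for both statistics, which is the reference against which the Sporilov anomalies are judged.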

The appropriateness of the numerous existing variability measures is judged by their power to detect and describe the details of the variability pattern. The measure used for the detection of temporal changes in variability was suggested in the work by Karl et al. (1995) and is defined as the absolute value of the temperature difference between two adjacent periods of time. The measure, which we call the N-point change, is calculated as the absolute

5Skewness is a measure of the asymmetry of a probability distribution. A distribution is right-skewed if the right tail (higher values of the variable) is longer and, on the contrary, left-skewed if the left tail (lower values) is longer. A symmetric distribution looks the same to the left and right of the center and has a zero moment coefficient of skewness. The skewness of the normal distribution is 0.

6Kurtosis is a measure of the "peakedness" of the data relative to a normal probability distribution. Positive kurtosis means that the distribution has a distinct peak near the mean, declines rather rapidly, and has heavy tails. In other words, it means that more of the variance occurs due to infrequent extremes, as opposed to frequent smaller variations. Negative kurtosis indicates a "flat" distribution. The kurtosis of the normal distribution is equal to 0.

difference between the average for the N-point-long sequence that begins at measured point t and the similar average of N anomalies that begins at point t + Δt. Generally, Δt < N, which implies the possibility of overlapping. The overlapping is useful for longer intervals, when the application of strictly non-overlapping differences artificially constrains their number and may lead to noisy seasonal estimates. Given a time series of temperature anomalies T1, T2, T3, ..., Ti, ..., the measure of variability ΔTN (the N-point change) is defined as the absolute difference between the average of the sequence of N temperature anomalies that begins at point i and the average for the N-point-long sequence beginning at point (i + N − k):

ΔTN(i) = |(Ti + ... + Ti+N−1)/N − (Ti+N−k + ... + Ti+2N−k−1)/N|.

For the time lag k > 0 there are partly overlapping running differences (Karl et al., 1995).
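As a sketch (the function and variable names are our own assumptions; the measure itself follows the definition above), the N-point change with time lag k can be computed as a running series:

```python
import numpy as np

def n_point_change(anom, N, k=0):
    """Karl et al. (1995) measure: |mean of N anomalies starting at i
    minus mean of N anomalies starting at i + N - k|, for every i.
    k > 0 gives partly overlapping running differences."""
    anom = np.asarray(anom, dtype=float)
    step = N - k
    vals = []
    for i in range(len(anom) - step - N + 1):
        first = anom[i:i + N].mean()
        second = anom[i + step:i + step + N].mean()
        vals.append(abs(first - second))
    return np.array(vals)

# toy series: a jump of 1 K between two constant halves
print(n_point_change([0, 0, 0, 1, 1, 1], N=3))   # -> [1.]
```

With k = 0 the compared windows are strictly adjacent; setting k > 0 lets the windows overlap, which increases the number of differences available for seasonal estimates of variability.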

The measure of variability was calculated successively for the whole temperature time series to obtain a time series of the variability measure. The values of N were chosen as 1, 2, 3, 4, and 20, 40, corresponding to averaging intervals from 6 to 24 h as well as to 5 and 10 days. There are some natural separations of the temporal scales of climate system variability. Perhaps the most important of them is that between weather and climate. Mainly for technical reasons, weather refers to variability in the climate system at timescales less than about 10-14 days, while climate variability refers to longer timescales. While the multihour aggregation patterns still reflect the variability of weather fluctuations, the 5- and 10-day aggregation can be attributed exclusively to short-term climate variability. Daily periodic variability (the daily wave) was removed from the observed temperatures (see above); thus, it cannot have any influence on the calculated variability patterns discussed later.

The four upper panels of Figure 129 show variability changes on the 6-h to day-by-day scales of aggregation for 8 years. The variability patterns depend on the length of the averaging interval N, and this dependence reflects essential features of the climate dynamics at different timescales. The variability time series for 6-h intervals do not show any significant linear trend, but they exhibit apparent quasi-seasonal oscillations. In all studied time intervals the variability increases during the spring season (partly also in the summer) and decreases in the autumn-winter seasons. Except for the extremely variable year 2000, the spring "explosions" in variability are very short. The 12- and 18-h patterns do not offer any special features and represent only a gradual transition from the higher to the lower frequency variability pattern. The detected quasi-seasonality is relatively less pronounced in the day-by-day variability, the oscillations of which are more irregular. Day-by-day variability rarely falls to very low values, and when it does, only for a short time interval. On the other hand, the 24-h variability exhibits a general decreasing trend of -0.038 ± 0.002 K/year, similar to what is predicted by most greenhouse warming simulations. Climate model simulations associated with the build-up of greenhouse gases predict not only climate warming but also a general decrease in climate variability (e.g., Karl et al., 1999; McGuffie et al., 1999). This trend is absent in the higher frequency variability. A similar decreasing trend during the second half of the twentieth century was also reported in the works cited above dealing with changes in the diurnal range of the SAT.

The two lower panels of Figure 129 show variability changes on the 5- and 10-day scales of aggregation for the same 8 years. Surface temperature variability is somewhat higher than at the daily scale; however, its pattern is comparable with the day-by-day variability oscillations. The quasi-seasonal oscillations characteristic of the short-term scales of aggregation are absent, while a decreasing trend of the same order as in the variability time series on the 18- and 24-h scales of aggregation is preserved.

Probably, part of the variability decrease observed in the Sporilov data may be attributed to the urbanization effect (see, e.g., Jones et al., 1990; Griffiths et al., 2005) characteristic of the intensively developing suburban part of the city of Prague. However, its most significant part can be attributed to the NAO forcing. As is known, the climate of the European-Atlantic sector exhibits considerable spatial and temporal variability. Recent studies have indicated that the variability of atmospheric circulation patterns in the Northern Hemisphere may be affected by the differences in sea level pressure between the Atlantic Subtropical High centered near the Azores and its Sub-polar Low near SW Iceland. This phenomenon is referred to as the North Atlantic Oscillation (NAO). It has a roughly decadal pattern with a dominant period of 12 years and, as shown by recent studies, has a strong impact on weather (both temperature and rainfall regimes) and climate from the eastern coast of the United States to Eurasia and from North Africa and the Middle East to the Arctic regions, especially in wintertime (see, e.g., Rodwell et al., 1999; Marshall et al., 2001; and the references therein). As shown in the work by Bodri and Cermak (2003), at all frequencies there is a significant correlation between the Sporilov variability and the NAO index. At the investigated location the correlation is positive and appears more prominently in winter periods, when the NAO control over the weather is stronger.

4.3.3 Structure of the stochastic component of the short-term climate variability

There is little doubt that climate change involves a number of non-linear processes. Thus, in addition to the deterministic trend components, the climate signal contains a significant stochastic part. A deterministic signal is traditionally defined as anything that is not noise (i.e., an analytic signal, or a perfectly predictable part, predictable from measurements over any continuous interval, etc.). Deterministic components have a reduced degree of uncertainty and normally correspond to the main modes of the system behavior. They arise as a result of the system's own physical mechanisms and a sum of contributions from various forcings. The stochastic component (noise) represents an accumulation of random influences (the day-by-day weather variations, stochastic climate change on longer timescales, etc.) superimposed on the deterministic part of the climatic signal. The unpredictable weather fluctuations represent a permanent source of stochastic noise in climatic time series. Noise variability induced by these fluctuations can mask climate changes caused by anthropogenic and other deterministic influences, and its presence poses additional challenges to the climate researcher who deals with climate variability.

Fig. 129. Variations of differences in average temperature anomalies for 6-, 12-, 18-, 24-h and 5-, 10-day averaging intervals and their linear trends (thick lines). Low-frequency changes are highlighted by a Gaussian filter, roughly corresponding to 10-day moving averages.

On the other hand, the weather noise cannot be regarded as an annoying hindrance. It represents an essential part of the climate variability. The characteristic timescales of the deterministic and stochastic variability overlap. Sometimes the addition of stochastic noise can significantly amplify a deterministic signal. This so-called stochastic resonance has become widely recognized as a paradigm for noise-induced effects in driven non-linear dynamic systems. This phenomenon has been propounded as, e.g., a possible explanation for the ice ages and the noise-induced transitions in the thermohaline circulation. The early work by Hasselmann (1976) first introduced the idea of separating the timescales observed in climatic records and treating their short-term components as stochastic variables. Further studies have indicated that an application of the ideas of stochastic processes provides useful insight into the climate physics.

Weather-induced climate variability can be studied with stochastic climate models using stochastic processes and stochastic differential equations that are able to capture complex patterns of both signal and noise and their "cooperation".

Modes of the stochastic climate variability can be identified by statistical analysis of the observational data. Various tools of mathematical statistics have found wide application in climatological research. Fractal dimensional analysis represents a powerful tool for the detection of the stochastic component of climate and/or the construction of the stochastic terms of climate models. This analysis consists of an assessment of the invariant quantities that arise from the scaling properties of records and is based on the numerical evaluation of variance (a quadratic measure of variability). Fractal dimensional analysis of geophysical time series is a well-established research tool to investigate their dynamics. It was initiated by a series of papers by Mandelbrot and Van Ness (1968) and Mandelbrot and Wallis (1968, 1969) and has been followed by the application of the fractal/multifractal technique to various geophysical processes (Mandelbrot, 1982; Lovejoy and Mandelbrot, 1985; Ladoy et al., 1991; Turcotte, 1992; Schertzer and Lovejoy, 1995). Fractal dimensional analysis is particularly well suited for an assessment of time series variability (Hastings and Sugihara, 1993).

Scale invariance has been found to hold empirically for a number of geophysical processes. The mathematical definition of the "simple scaling", or scaling of the increments, is as follows. The function Y(x) is termed scale invariant if it fulfills the condition:

ΔY(λΔx) = λ^H ΔY(Δx),   (55)

where ΔY(Δx) = Y(x1) − Y(x0), Δx = x1 − x0 and ΔY(λΔx) = Y(x2) − Y(x0), x2 = x0 + λ(x1 − x0), for an arbitrary scale ratio λ and increment Δx. Equality in Eq. (55) means equality in probability distributions. The random variables u and v are equal in this sense when Pr(u > q) = Pr(v > q) for any threshold q ("Pr" means probability). The "simple scaling" means that if we scale the coordinate x by means of an appropriate choice of the exponent H, then we always recover the same function. The parameter H is a constant called the (unique) scaling parameter (0 < H < 1).
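For ordinary Brownian motion, H = 1/2, and the scaling condition can be checked numerically on the standard deviations of the increments. A sketch with a synthetic random walk (not the Sporilov data) follows:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(1_000_000))   # ordinary Brownian motion, H = 1/2

H = 0.5
s1 = np.std(y[1:] - y[:-1])                     # increments over the unit lag
for lam in (4, 16):                             # scale ratios (lambda in Eq. (55))
    s_lam = np.std(y[lam:] - y[:-lam])          # increments over lag lambda
    print(lam, s_lam / s1, lam**H)              # last two numbers nearly agree
```

Equality in distribution implies that the increment standard deviation grows as lambda**H, which the printed ratios reproduce within sampling error.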

An assessment of the scaling properties of climatic time series starts with the assumption that they can be modeled as a stationary stochastic process. There are many standard methods to assess the scaling structure of {Yi}. A typical (and probably the simplest) procedure consists in performing a Fourier (spectral) analysis of the time series.

Spectral analysis is concerned with the detection of cyclic patterns in the data and expresses the amount of variance in a time series that occurs at different frequencies or timescales. In the case of a deterministic time series the purpose of this analysis is to decompose a multicyclic time series into a few sinusoidal functions with particular frequencies. If a time series represents a complex output of a stochastic process, distinct periodicities are generally absent and the power density is distributed across the entire spectrum. In this case, spectral analysis represents a conventional method of analyzing time series data to determine the power (mean square amplitude) as a function of frequency. A stochastic, or noise, signal is fully described by its power spectral density, which gives the expected signal power versus frequency. Assuming that the process can be described by a single dimension H, one can use the energy spectrum E(f), where f is the frequency, of the observed variable Y(x) for scaling investigations. The energy spectrum is scaling when it can be described by a power law relationship of the form (e.g., Ladoy et al., 1991):

E(f) ∝ f^(−b),   (56)

where b > 0. In the simple scaling case, the exponents H and b are related according to b = 2H + 1. When E(f) is of this form over a given frequency range, fluctuations occur at all scales with no characteristic time, and hence within this range the process is scale invariant. Most geophysical time series, and particularly climatic time series, obey this behavior.
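In practice b is often estimated as the slope of the periodogram in log-log coordinates. A minimal sketch (the function name, the low-frequency cutoff, and the synthetic Brownian test series are our assumptions):

```python
import numpy as np

def spectral_exponent(y, f_max=0.1):
    """Estimate b in E(f) ~ f**(-b) from the slope of the periodogram
    in log-log coordinates, fitting only frequencies below f_max where
    the power law of a discrete random walk holds well."""
    y = np.asarray(y, dtype=float)
    f = np.fft.rfftfreq(len(y))
    E = np.abs(np.fft.rfft(y - y.mean())) ** 2
    keep = (f > 0) & (f < f_max)
    slope, _ = np.polyfit(np.log(f[keep]), np.log(E[keep]), 1)
    return -slope

rng = np.random.default_rng(2)
brown = np.cumsum(rng.standard_normal(2**16))   # Brownian noise: b should be ~2
print(spectral_exponent(brown))
```

The least squares fit in log-log coordinates corresponds to the solid lines in Figure 131; for measured anomalies the same routine would be applied to each monitored depth.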

Spectra of climatic time series are characterized by two important features: (1) continuity and (2) so-called "red noise" behavior (a slope towards longer timescales in the logarithmic representation of the power spectra). The "redness" can be attributed to stochastic mechanisms where random high-frequency fluctuations (e.g., unpredictable weather variations) are integrated by the components of the climate system with slower response, e.g., the ocean, while the low-frequency fluctuations develop and grow in amplitude with increasing timescale (Hasselmann, 1976). Different values of b represent the cases of "colored noise". For example, white noise has equal power density across the entire spectrum (constant energy at all frequencies), like white light. In logarithmic power spectral density versus frequency diagrams it appears as flat, with b = 0. Thus, the exponent b can be interpreted as a measure of departure from non-correlated random white noise. A scaling spectrum with b ≠ 0 has an "excess" of energy at low frequencies and thus is known as "red noise" (in the sense of Gilman et al., 1963). It got this name from a connection with red light, which is at the low end of the visible light spectrum. In logarithmic power spectral density versus frequency diagrams, red noise appears as a descending line with slope b. Figure 130 shows different kinds of "red noise" time series. Brownian noise is a kind of signal noise produced by Brownian motion (one-dimensional random walk).7 It is named in honor of Robert Brown (1773-1858), a leading British botanist and the discoverer of Brownian motion.

7A continuous process {Y(t)} represents a continuous-time random walk or a Brownian process if, for any time step Δt, the increments Δy(t) = y(t + Δt) − y(t) are: (1) Gaussian, (2) of mean 0, and (3) of variance proportional to Δt (to Δt^(2H) in the case of fractional Brownian noise).

Fig. 130. Synthetically generated kinds of the fractional Brownian noise ("red noise").

Ordinary Brownian noise has b = 2, meaning that it has more energy at lower frequencies. For ordinary Brownian noise, the change, or increment, from one moment to the next is random (non-correlated) and normally distributed. For the simple scaling case the coefficient of correlation r of successive increments satisfies 2^(2H) = 2 + 2r, i.e., r = 2^(b−2) − 1, where −1/2 ≤ r ≤ 1 is independent of the time step Δt (Hastings and Sugihara, 1993). In the case b = 2, this equation gives r = 0. In other words, successive increments are uncorrelated. Because of the absence of correlation between the amplitudes of oscillations corresponding to two successive time intervals, such a signal is unpredictable. Brownian noise can be produced by integrating white noise.
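These relations can be checked numerically; in the sketch below (variable names are ours) a synthetic random walk stands in for ordinary Brownian noise:

```python
import numpy as np

def increment_correlation(y):
    """Lag-1 correlation coefficient of successive increments."""
    d = np.diff(np.asarray(y, dtype=float))
    return np.corrcoef(d[:-1], d[1:])[0, 1]

# r = 2**(b - 2) - 1 for the simple scaling case
for b in (1.0, 2.0, 3.0):
    print(b, 2**(b - 2) - 1)         # -> -0.5, 0.0, 1.0

# Brownian noise (b = 2): integrate white noise; increments are uncorrelated
rng = np.random.default_rng(3)
brown = np.cumsum(rng.standard_normal(500_000))
print(increment_correlation(brown))  # close to 0
```

The three printed r values anticipate the regimes discussed next: antipersistence (r < 0), uncorrelated Brownian increments (r = 0), and fully correlated, deterministic behavior (r = 1).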

In the intervals 2 < b < 3 and 1 < b < 2 stochastic time series exhibit two distinct types of behavior: persistence or antipersistence. Persistence is the presence in a time series of significant dependence between observations a long time span apart. Persistence represents a long-range correlated, or long-memory, process and may be characterized by a correlation function decaying hyperbolically as the lag increases, as opposed to the exponential decay of short-memory processes. In this case, even fluctuations sufficiently distant from each other are strongly influenced by the long-term, persistent trends. Such records are qualified as less variable. Visually, persistent noise with spectral exponent 2 < b < 3 looks like random fluctuations superposed upon a "background" that performs several quasi-cycles. Because the future trend is more likely to follow an established trend, persistent processes are more predictable. In the case of b = 3, the correlation coefficient between two successive increments is 1, and the function is completely differentiable (deterministic). Beran (1994) has characterized the family of strongly persistent time series. They are well known in geophysical, and in particular climatic, time series (for more details see Section 2.3.4). Signals with higher b values exhibit less erratic, more regular, trend-reinforcing behavior. In signals with b between 1 and 2 (antipersistent), inversely correlated fluctuations dominate, and the signal reveals a more "nervous", rough appearance with frequent reversals. The upper panel of Figure 130 represents a signal with the familiar Kolmogorov8 spectrum with b = 5/3, characteristic of turbulent wind fluctuations. In spite of the relative complexity of antipersistent time series, the predictability again increases below b = 2. This occurs due to the inverse correlation of fluctuations in such series: an increase in the amplitude of the process is more likely to be followed by a decrease in the next time interval.

The scaling regime describes the random part of climate variability: in the range of timescales where the scale invariant law holds, the climatic system has no characteristic timescale, and the climate changes result from the accumulation of random fluctuations. Possible breaks of scaling that are often observed in climatic time series (e.g., Fraedrich and Larnder, 1993; Olsson, 1995) signify the appearance of the basic characteristic timescales of climate system and identify the boundary between the random and deterministic regimes.

Due to strong intermittency, scaling studies require a vast amount of measured data and preferably many independent realizations. Some of the analyses using climatic time series have suffered from the shortness and low quality of the data. Results of precise, decade(s)-long temperature monitoring appear to be especially suitable for this kind of analysis. Below we illustrate an application of the above technique using time series of the temperature anomalies recorded at 0.05 m above the ground and at 1 and 10 m depth at Prague-Sporilov. All data are 6-h averaged and thus still contain a significant part of the weather fluctuations. The data were preprocessed in a similar way as the GSTs shown in Figure 126. As in Figure 127, the temporal oscillations of the calculated temperature anomalies are erratic and do not exhibit apparent regularity, trends, or cyclic patterns. Figure 131 shows examples of the power spectra of temperature anomalies measured at Prague-Sporilov at 0.05 m above the surface and at 1 and 10 m depth. All power spectra are similar. There is no evidence of periodic variations at any particular frequency; the background seems to be quite dominant. All spectra exhibit clear red noise behavior over the whole normalized frequency domain, with spectral exponent b between 1 and 2, signifying antipersistence. The exponent b is the largest for the air temperature

8A.N. Kolmogorov (1903-1987) was a Russian mathematician who made major advances in the fields of probability theory and topology. He also worked on turbulence, classical mechanics, and information theory. In 1941, Kolmogorov published a paper in which he derived a formula for the energy spectrum of turbulence.

Fig. 131. Power spectra of temperature anomalies monitored at Prague-Sporilov at 0.05 m above ground surface and at 1 and 10 m depth. Frequency dependence of the power spectral density variations are shown in a log-log plot. The values are relative: the frequencies are normalized to the lowest frequency in the spectrum, the power spectral density to that at the lowest frequency. Solid lines represent the least squares fit to the data.


anomalies and progressively decreases with depth. This means that the degree of antipersistence (variability) is the highest for the temperatures recorded in the air and decreases into the subsurface because of the well-known gradual filtering out of the high-frequency oscillations. The antipersistence of the temperature time series reflects, in particular, the turbulent nature of the atmospheric and ocean dynamics responsible for weather fluctuations.
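The filtering with depth follows from the conductive damping of a periodic surface signal, whose amplitude in a homogeneous half-space decays as exp(−z·sqrt(ω/2κ)). A short sketch (the diffusivity is an assumed typical rock value, not a Sporilov measurement):

```python
import numpy as np

kappa = 1.0e-6                         # thermal diffusivity, m^2/s (assumed typical rock)
z = 1.0                                # depth, m

for period_days in (1, 365):
    omega = 2.0 * np.pi / (period_days * 86400.0)
    damping = np.exp(-z * np.sqrt(omega / (2.0 * kappa)))
    print(period_days, damping)        # daily wave almost gone at 1 m; annual survives
```

The daily wave is attenuated by more than two orders of magnitude at 1 m, while the annual wave retains most of its amplitude, which is why the subsurface records are progressively less antipersistent than the air record.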

As the signal penetrates into the subsurface, its probability distribution comes nearer to the Gaussian. Figure 132 shows the histogram of occurrences of different temperature anomalies measured at the Sporilov station at 1 m depth. Its comparison with the Gaussian distribution, shown for reference, as well as with the similar diagram calculated for the SAT anomalies (Figure 128), demonstrates that the former histogram is closer to the Gaussian distribution. The right tail (the prevalence of warm extremes) disappears. The skewness of the distribution (degree of asymmetry) is close to zero, as characteristic of the normal distribution. The distribution is still more peaked than the Gaussian (the kurtosis is equal to 4.75); however, this value is two times lower than that obtained for the air temperature anomalies. The differences from the Gaussian distribution appear in a more damped form in the temperature anomalies measured at deeper levels. The above calculations hint that underground temperature monitoring could provide reliable information on the short-term variability of the stochastic component of the surface temperature signal. The Earth smoothes extremes and filters out high-frequency fluctuations; thus, only the most important time-resistant irregularities are preserved in the ground temperatures.

A more complex stochastic model that reproduces the variability and the long-term correlations observed in climatic time series was suggested in the work by Lavallee and Beltrami (2004). The stochastic model proposed by these authors represents a convolution between the Fourier transform of the random variable (white noise) {Xi}, i = 1, ..., N,

and a function with a power law dependence (Eq. (56)) in the frequency space. Its output {yj} can be presented as

yj = F^(−1)[f^(−b/2) Ff(X)]j,

where Ff(X) is the discrete Fourier transform of the random variable and j is related to f by f = 2π(j − 1). In this case, the power spectrum of {yj} takes the form of Eq. (56). Using this relation, the scaling exponent b of the measured time series can be estimated from observed data. The values of the underlying random variable {Xi} can be calculated from the relationship

Xi = F^(−1)[f^(b/2) Ff(y)]i,

where F^(−1) is the Fourier inverse. While the analysis of the Prague-Sporilov data presented above has assumed a Gaussian distribution of the measured data, in the model by Lavallee and Beltrami (2004) the probability distribution controlling the variability of the stochastic model is unspecified. When it is identified from the analysis of the probability density function of {Xi}, the statistical properties of the stochastic model can be regarded as completely known.
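A minimal sketch of this spectral filtering and its inversion follows (function names are ours; the zero frequency is set aside, so the series mean is not recoverable, and the filter form is our reading of the model described above):

```python
import numpy as np

def spectral_model(X, b):
    """Color white noise X by multiplying its Fourier transform by f**(-b/2),
    so the output power spectrum follows E(f) ~ f**(-b) (Eq. (56))."""
    N = len(X)
    f = np.fft.rfftfreq(N)
    filt = np.zeros_like(f)
    filt[1:] = f[1:] ** (-b / 2.0)       # leave the zero frequency at 0
    return np.fft.irfft(np.fft.rfft(X) * filt, n=N)

def recover_noise(y, b):
    """Invert the model: X = F^-1 [ f**(b/2) F(y) ]."""
    N = len(y)
    f = np.fft.rfftfreq(N)
    filt = np.zeros_like(f)
    filt[1:] = f[1:] ** (b / 2.0)
    return np.fft.irfft(np.fft.rfft(y) * filt, n=N)

rng = np.random.default_rng(4)
X = rng.standard_normal(4096)
y = spectral_model(X, b=1.5)             # antipersistent "red noise" output
X_back = recover_noise(y, b=1.5)
print(np.allclose(X - X.mean(), X_back))  # True: X recovered up to the lost mean
```

Once {Xi} is recovered this way from a measured record, its histogram can be inspected to identify the underlying probability law, as the authors do.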

Fig. 132. Histogram of occurrences of different temperature anomalies in the time series monitored at the Sporilov station at 1 m depth. The Gaussian distribution is shown for reference.

The authors applied the model outlined above to 1500-2000-year-long dendrochronological time series. The obtained exponent b ranged between -0.5 and -0.7 and thus revealed much less departure from non-correlated random white noise than the Prague-Sporilov time series described above. Lavallee and Beltrami (2004) have also investigated some possible probability laws, including the Gaussian, the Cauchy, and the Levy distributions (three of the few distributions that are stable and have analytically expressible probability density functions), to find the probability distribution best fitted to their data. The authors have compared the cumulative probability distributions (the probability that a random fluctuation ΔT′ exceeds a fixed value ΔT) of the three probability density functions mentioned above. The misfit of the theoretical and measured probability density functions is more obvious in such plots. The cumulative probability distribution of climatic time series generally has a nearly Gaussian shape in the center and a tail (probability of the extreme events) that is "heavier" than expected for a normal distribution (see Section 2.3.4 and Figure 24, Chapter 2). Note that "fat-tailed" probability distributions are a general characteristic of long-term climatic time series. When the fluctuations are of this type, the phenomenon is so intermittent that the return times of extreme events are much shorter than those for a Gaussian process. According to the Gaussian law, very strong fluctuations have almost zero probability of being observed. Lavallee and Beltrami's analysis has shown that the stochastic model based on Levy's law reproduces the climatic variability archived in dendrochronological time series in the most precise manner. A similar cumulative probability plot, presented in Figure 133, was calculated for the ground temperature anomalies monitored at 1 m depth in the Sporilov borehole. The misfit of the

Fig. 133. The log-log plot of the cumulative probability distribution for the temperature anomalies measured at 1 m depth at the Sporilov site. The Gaussian cumulative probability is given for comparison.

measured data with the Gaussian law is minimal. This hints that the Gaussian distribution reproduces the measured data with sufficient accuracy. The "fat-tailed" distributions characteristic of long-term climatological time series (e.g., Levy's law, which was found for the dendrochronological time series in the above-cited work) imply a higher probability of large fluctuations. The Gaussian distribution characteristic of the underground temperatures reflects the main properties of the heat conduction process: progressive smoothing of the surface signal and filtering out of its high-frequency component.
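The tail comparison behind Figure 133 can be sketched as follows (the synthetic heavy-tailed sample is an assumption standing in for the measured anomalies):

```python
import numpy as np
from math import erf, sqrt

def empirical_tail(x, q):
    """Empirical cumulative probability Pr(X > q)."""
    return float(np.mean(np.asarray(x, dtype=float) > q))

def gaussian_tail(q, mu, sigma):
    """Gaussian Pr(X > q) for the sample's own mean and standard deviation."""
    return 0.5 * (1.0 - erf((q - mu) / (sigma * sqrt(2.0))))

rng = np.random.default_rng(5)
heavy = rng.laplace(size=200_000)      # "fat-tailed" stand-in for a climatic record
q = 4.0
print(empirical_tail(heavy, q))        # exceeds the Gaussian prediction below
print(gaussian_tail(q, heavy.mean(), heavy.std()))
```

Plotting both tail probabilities against q in log-log coordinates reproduces the kind of comparison shown in Figure 133: a fat-tailed record sits visibly above the Gaussian curve at large thresholds, while the 1 m Sporilov anomalies do not.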

The variability of the stochastic component of climate can be studied from different viewpoints. The fractal approach presented above provides the simplest non-trivial example of scale invariance and is appropriate for dealing with the extreme and ubiquitous variability of climate. Numerous similar (mono-)fractal studies have been performed in different climatic as well as geophysical fields. The assumption of a unique dimension was abandoned in later works. More sophisticated statistical summaries consider the multifractal theory in combination with multiplicative processes similar to the energy flux cascade in turbulence (Davis et al., 1994; Schertzer and Lovejoy, 1995, 2004; Lovejoy et al., 2001; Schertzer et al., 2002). In addition to the single dimension, this approach involves a moment scaling function that describes the behavior of the statistical moments at different scales and is able to embrace the entire range of complexity of geophysical signals. The applicability of the multifractal theory has been thoroughly investigated during the last decade. The discrete wavelet transform (DWT) is a powerful signal processing technique that also offers several advantages over traditional spectral analysis techniques. It can be used for the analysis of non-stationary time series (one of the primary limitations of Fourier analysis). It is scale adaptive and allows one to decompose the original time series into a collection of new time series, each of which represents the variability in the signal over a characteristic band of scales. Unlike Fourier coefficients, which capture variability over the entire time series, the DWT captures variability associated with local features, giving better estimates of the variance attributable to local, intermittent variations in the time series. Further developments of the DWT, e.g., the maximum overlapping discrete wavelet transform (MODWT), provide several advantages over the DWT.
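The scale-band decomposition idea can be illustrated with the simplest (Haar) DWT; being orthonormal, it splits the total signal energy across dyadic scale bands (a hand-rolled sketch, not the MODWT itself):

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the orthonormal Haar DWT: approximation and detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth part (coarser scale)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (variability in this band)
    return a, d

def energy_by_scale(x, levels):
    """Energy of the detail coefficients in each dyadic scale band,
    plus the energy of the final approximation."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        energies.append(float(np.sum(d**2)))
    energies.append(float(np.sum(a**2)))
    return energies

x = np.arange(8.0)
parts = energy_by_scale(x, 3)
print(parts, sum(parts), float(np.sum(x**2)))   # total energy is preserved (140.0)
```

Each entry of the returned list corresponds to the variance carried by one band of scales, which is exactly the per-scale bookkeeping that the DWT-based variability analyses described above rely on.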
Other examples of statistical methods specific to climate research are presented in the book by Von Storch and Zwiers (1999). The book describes applications ranging from the simple use of sampling distributions to obtain estimates of the uncertainty of a climatological mean to complex statistical methodologies that form the basis for calculations capable of revealing the dynamics of the climate system.

In the past decade, climate variability research has made considerable progress in understanding and modeling climate changes on timescales of years to decades. In the previous section we looked briefly at several applications of stochastic processes to the detection of the short-term variability present in the time series arising from subsurface temperature monitoring. The examples discussed have shown that while GST reconstructions from borehole temperature logs represent a useful tool for inferring long-term climate trends, time series resulting from borehole temperature monitoring can be of key importance in assessing the patterns of temporal climate variability. This suggests a direction for future research. Investigations of variability, like investigations of warming trends, can be used for the validation of model simulations under various scenarios of greenhouse-gas emission and land use. A detailed understanding of climate variability is also important for the prediction of extreme climatic events.
