λ_-i(T) = h^-1 Σ_{j≠i} K([T − Tout(j)] / h)

is the delete-one estimate (Eq. 6.33 with the i-th event time left out).

The cross-validated bandwidth can be seen as a compromise between small h (large variance and small bias of λ(T)) and large h (small variance and large bias).
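The following Python sketch illustrates how this delete-one idea can be turned into a bandwidth selector for a Gaussian kernel occurrence rate estimator. The precise form of the score (integrated squared rate estimate minus twice the sum of delete-one estimates at the event times, in the spirit of Brooks and Marron) and the toy event times are assumptions, since the full cross-validation formula is not reproduced above.

```python
import numpy as np

def kernel_rate(t_grid, t_events, h):
    """Gaussian kernel occurrence rate estimate (cf. Eq. 6.33)."""
    u = (np.asarray(t_grid)[:, None] - np.asarray(t_events)[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

def cv_score(h, t_events, t_lo, t_hi, n_grid=1000):
    """Delete-one cross-validation score for bandwidth h (assumed form:
    integral of the squared rate estimate minus twice the sum of the
    delete-one estimates at the event times)."""
    grid = np.linspace(t_lo, t_hi, n_grid)
    lam = kernel_rate(grid, t_events, h)
    integral = np.sum(lam ** 2) * (grid[1] - grid[0])   # simple quadrature
    loo = 0.0
    for i in range(len(t_events)):
        others = np.delete(t_events, i)                 # delete-one estimate
        loo += kernel_rate([t_events[i]], others, h)[0]
    return integral - 2.0 * loo

# toy example: 73 event times on [1021, 2002] with an upwards trend
rng = np.random.default_rng(1)
events = np.sort(1021.0 + 981.0 * rng.uniform(size=73) ** 0.75)
bandwidths = np.arange(10.0, 101.0, 5.0)
scores = [cv_score(h, events, 1021.0, 2002.0) for h in bandwidths]
print("cross-validated bandwidth:", bandwidths[int(np.argmin(scores))], "a")
```

In practice the event times would be augmented by pseudodata (Section 6.3.2.3) before the score is evaluated; the sketch omits this step.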


Figure 6.4. Elbe winter floods, pseudodata generation. The heavy events (magnitudes 2-3) are taken from the complete record (Fig. 1.1) and plotted (a, b) as bars (m = 73). The "twopoint" rule is used to generate four pseudodata points (b, asterisks) outside the observation interval. Occurrence rates are estimated with h = 35 a, without (a; m = 73) and with (b; m = 77) pseudodata.


6.3.2.5 Example: Elbe winter floods (continued)

The number of heavy (magnitudes 2-3) floods of the Elbe in winter is m = 73. The first event was in 1141. However, the historical information back to 1021 was analysed (Mudelsee et al. 2003), and the observation interval is [1021;2002]. Pseudodata generation (Fig. 6.4) uses the "twopoint" rule to take (climatic and other) trends in flood risk at the boundaries into account.

The cross-validation function (Fig. 6.5) has a minimum at h = 41 a. For suppressing potential extrapolation effects (Section 6.3.2.3) and further reducing the bias (Section 6.3.2.4), it may be advisable to undersmooth slightly. For this reason, and to achieve consistency with results from other flood records (Elbe, summer; Oder, winter and summer), Mudelsee et al. (2003) set the analysis bandwidth to h = 35 a. The estimated flood occurrence rate (Fig. 6.4) reveals, in the case of the heavy winter floods of the Elbe, little boundary bias. The reason is that the occurrence rate at the boundaries is rather low.

Figure 6.5. Elbe winter floods, cross-validation function, heavy events (magnitudes 2-3).

Figure 6.5. Elbe winter floods, cross-validation function, heavy events (magnitudes 2-3).


Bandwidth selection has large effects on flood occurrence rate estimation. Too strong undersmoothing with h = 5 a (Fig. 6.6a) allows too many variations: within the bootstrap confidence band (Section 6.3.2.6), most of these wiggles are not significant (not shown). Too strong oversmoothing with h = 100 a (Fig. 6.6b) reduces the estimation variance but enhances the bias: too many significant variations in flood occurrence rate are smoothed away. The right amount of smoothing appears to be indicated by cross-validation; only a slight undersmoothing with h = 35 a (Fig. 6.6c) lets the significant variations appear. The example of the heavy Elbe winter floods is pursued further in Section 6.3.2.7.

6.3.2.6 Bootstrap confidence band

A measure of the uncertainty of λ(T) (Eq. 6.33) is essential for interpreting results. For example, it might be asked if the low in λ(T) at T ≈ 1700 for the heavy winter floods of the Elbe (Fig. 6.6c) is real or the mere product of sampling variability. Cowling et al. (1996) devised bootstrap algorithms for constructing a confidence band around λ(T); one is shown as Algorithm 6.1.

Step 2 of the algorithm, discretization of T, uses a large number of points, NT, on the order of several hundred, to render a smooth estimate. For Step 4, alternative bootstrap methods, where also the size of the simulated set is a random variable, were tested by Cowling et al. (1996). Studentization (Step 8) draws advantage from the fact that the studentized auxiliary variable is approximately pivotal (independent of T). Alternative CI construction methods (percentile) at this step were also tested by Cowling et al. (1996).


Figure 6.6. Elbe winter floods, bandwidth selection, heavy events (magnitudes 2-3). The occurrence rate is estimated with pseudodata ("twopoint" rule) and bandwidth h = 5 a (a), 100 a (b) and 35 a (c).


The resulting confidence band (Step 12) is pointwise.

The coverage performance of the confidence band (Algorithm 6.1) was tested by means of Monte Carlo simulations (Cowling 1995; Cowling et al. 1996; Hall P 2008, personal communication). The prescribed λ(T) functions had the form of a sinusoid added to a linear trend. This nonmonotonic curve resembles what may be found in climate (Fig. 6.6c), which makes the experiments relevant in the context of this book. The number of extreme data, m, was on the order of a few hundred. The Monte Carlo results revealed good coverage performance of the method (Algorithm 6.1), and also of the alternatives in resampling or CI construction.

Step 1: Event times, augmented by pseudodata (Eq. 6.34)
Step 2: Discretization of time T (NT points)
Step 3: Kernel occurrence rate estimate (Eq. 6.33)
Step 4: From the data set (Step 1), draw with replacement a simulated set of size m
Step 5: Kernel occurrence rate estimate, simulated data, using the same h as in Step 3
Step 6: Go to Step 4 until b = B (usually B = 2000) replications exist
Step 7: Average
Step 8: Studentize
Step 9: Determine tα
Step 10: Lower CI bound at T
Step 11: Upper CI bound at T
Step 12: Confidence band is given by the joint CIs over T

Algorithm 6.1. Construction of a bootstrap confidence band for kernel occurrence rate estimation (Cowling et al. 1996). (Step 9 requires interpolation because the number of cases is discrete.) The CI type is called percentile-t.
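The following Python sketch is a minimal reading of Algorithm 6.1, assuming a Gaussian kernel. Pseudodata augmentation (Step 1) is omitted, and the studentization uses the bootstrap standard deviation of the simulated rate estimates as a standard error proxy; this is one plausible interpretation of Steps 7-11 rather than the exact formulas of Cowling et al. (1996).

```python
import numpy as np

def kernel_rate(t_grid, t_events, h):
    """Gaussian kernel occurrence rate estimate (cf. Eq. 6.33)."""
    u = (t_grid[:, None] - t_events[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

def bootstrap_band(t_events, t_lo, t_hi, h, n_grid=500, B=2000,
                   alpha=0.05, seed=0):
    """Pointwise percentile-t bootstrap band (sketch of Algorithm 6.1);
    pseudodata augmentation (Step 1) is omitted for brevity."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(t_lo, t_hi, n_grid)              # Step 2
    lam_hat = kernel_rate(grid, t_events, h)            # Step 3
    lam_boot = np.empty((B, n_grid))
    for b in range(B):                                  # Steps 4-6
        sim = rng.choice(t_events, size=t_events.size, replace=True)
        lam_boot[b] = kernel_rate(grid, sim, h)         # Step 5, same h
    lam_mean = lam_boot.mean(axis=0)                    # Step 7
    se = lam_boot.std(axis=0, ddof=1) + 1e-12           # assumed SE proxy
    t_stud = (lam_boot - lam_mean) / se                 # Step 8
    q_lo = np.quantile(t_stud, alpha, axis=0)           # Step 9
    q_hi = np.quantile(t_stud, 1.0 - alpha, axis=0)
    lower = lam_hat - q_hi * se                         # Step 10
    upper = lam_hat - q_lo * se                         # Step 11
    return grid, lam_hat, lower, upper                  # Step 12

# toy example: 90% band (confidence level 1 - 2*alpha) for 73 event times
rng = np.random.default_rng(1)
events = np.sort(rng.uniform(1021.0, 2002.0, size=73))
grid, lam, lo, hi = bootstrap_band(events, 1021.0, 2002.0, h=35.0, B=500)
print(lam[:3], lo[:3], hi[:3])
```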

6.3.2.7 Example: Elbe winter floods (continued)

Figure 6.7 shows the occurrence rate of heavy Elbe winter floods with 90% confidence band. A very long increase starting from the beginning of the millennium culminated in a high during the second half of the sixteenth century, when λ(T) ≈ 0.17 a⁻¹, corresponding to a return period of about 6 years. The changes to a low at around 1700 (λ(T) ≈ 0.08 a⁻¹) and a subsequent high in the first half of the nineteenth century (λ(T) ≈ 0.22 a⁻¹) are significant, as attested by the confidence band. The upper CI bound for that high is approximately 0.31 a⁻¹. Elbe winter flood risk then decreased, and this trend has continued until the present.


Figure 6.7. Elbe winter floods, occurrence rate estimation, heavy events (magnitudes 2-3). The confidence band is shaded. Estimation parameters as in Fig. 6.6c: pseudodata generation rule "twopoint," h = 35 a; NT = 1322, B = 2000; confidence level: 1 − 2α = 90%.


In a short interpretation of the mathematical finding, the long-term increase is a result of data inhomogeneity in the form of document loss. Documents from before the invention of printing in Europe (fifteenth century) were likely few, and information about past floods may have been lost before finding its way into secondary compilations. Therefore the confidence band is drawn only for the interval after A.D. 1500. The end of the sixteenth century was reportedly wet also in other parts of central and southwest Europe (Brazdil et al. 1999). The relatively low flood risk in the decades around T = 1700 may be a manifestation of the dry (and cold) European climate (Luterbacher et al. 2001) of the Late Maunder Minimum (Fig. 2.12). The downwards trend from T ≈ 1830 to the present reflects a reduced risk of ice floods (like that in 1784), which in turn is a product of surface warming in the Elbe region (Mudelsee et al. 2003, 2004).

6.3.2.8 Example: volcanic peaks in the NGRIP sulfate record (continued)

Figure 6.8 shows a number of highs and lows in the occurrence of extreme sulfate peaks in the NGRIP ice core record from ~10 to ~110 ka. Applying a more liberal detection threshold (z = 5.0) leads to more events, smaller relative errors (∝ m^(−1/2)) and higher significances of the changes in λ(T), but also with a more conservative threshold (z = 10.0) the changes appear significant. Estimates close to the boundaries of the observation interval depend on the pseudodata generation rule (not shown) and should be interpreted cautiously.

Construction of the "excess" sulfate record (Fig. 1.4) and extremes detection (Fig. 4.16) had the purpose of extracting from the ice core record the information about the times when major volcanic eruptions occurred. For bandwidth selection, we ignore cross-validation and set h = 5 ka in order to inspect changes in volcanic activity on Milankovitch timescales (≳ 19 ka). Ice-age climate varied on such orbital timescales (Chapter 5), and studying causal relationships between volcanic activity and ice-age climate is facilitated by having common dynamical scales. See the background material (Section 6.5).

6.3.2.9 Example: hurricane peaks in the Lower Mystic Lake varve thickness record (continued)

Figure 6.9 shows the occurrence rate of hurricanes in the Boston area (Lower Mystic Lake). Bandwidth selection imposes a slight undersmoothing (h = 50 a); a further undersmoothing would produce too many nonsignificant wiggles. Hurricane activity was significantly higher during the thirteenth century; the upper bound of the 90% CI is close to one event per decade. Hurricane activity after, and likely also before, that period was lower. The Cox-Lewis test (Section 6.3.2.10) for an overall trend is inconclusive (u = −1.15, p = 0.12) owing to the nonmonotonic risk curve and the limited sample size.

A climatic interpretation may note a relation between the high in hurricane activity and the Medieval Warm Period. The elevated hurricane risk may thus be a result of the Carnot machine in the tropical Atlantic region (Emanuel 1987, 1999), fuelled by higher sea-surface temperatures during that time (Keigwin 1996). However, Besonen et al. (2008: Section 4 therein) recognized "that the LML [Lower Mystic Lake] record is a single point source record representative for the greater Boston area,

Figure 6.8. NGRIP sulfate record, volcanic activity estimation. Sulfate extremes stemming from volcanic eruptions were detected (Fig. 4.16) by applying thresholds of z = 5.0 (a) and z = 10.0 (b) and declustering. Event times (a: m = 1525; b: m = 475) are shown as bars, occurrence rate as solid line, confidence band shaded. Estimation parameters: pseudodata generation rule "reflection," h = 5000 a; NT = 574, B = 2000; confidence level: 1 − 2α = 90%.


and hurricanes that passed a few hundred km to the east or west may not have produced the very heavy rainfall amounts and vegetation disturbance in the lake watershed necessary to produce a strong signal within the LML sediments."

6.3.2.10 Parametric Poisson models and hypothesis tests

It is possible to formulate a parametric regression model (Chapter 4) for the occurrence rate. Since λ(T) cannot be negative, it is convenient to employ the exponential function. A particularly simple model is

λ(T) = exp(λ0 + λ1 T). (6.39)

Another is the logistic model,

λ(T) = exp(λ0 + λ1 T) / [1 + exp(λ0 + λ1 T)].

Figure 6.9. Lower Mystic Lake varve thickness record, hurricane activity estimation. Hurricane events were detected (Fig. 4.17) by applying a threshold of z = 5.2 and imposing a second condition (graded bed). Event times (m = 36) are shown as bars, occurrence rate as solid line, confidence band shaded. Estimation parameters: pseudodata generation rule "reflection," h = 50 a; NT = 616, B = 2000; confidence level: 1 − 2α = 90%.


These two are monotonic functions, and they can be used to model simple increases (decreases) of the occurrence rate. Section 6.5 lists more parametric occurrence rate models. These models do not offer the flexibility of the nonparametric kernel approach (Section 6.3.2.2). The parametric models are better suited to a situation where the task is not quantification of λ(T) but rather testing whether λ(T) shows an increase (decrease) or not. Cox and Lewis (1966) use the simple model (Eq. 6.39) to test the hypothesis H1: "λ1 > 0" (increasing occurrence rate) against H0: "λ1 = 0" (constant occurrence rate). Their test statistic is

which becomes, with increasing m, rapidly standard normally distributed in shape (Cramér 1946: p. 245 therein). On the sample level, plug in the event times {tout(j)}, j = 1, ..., m, together with t(1) (observation interval, start) and t(n) (observation interval, end) to obtain u.
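As a hedged illustration, the following Python sketch implements the Cox-Lewis test in its textbook form: the mean event time, centred at the midpoint of the observation interval and scaled by its standard error under the homogeneous-Poisson null. Since Eq. (6.41) is not reproduced above, this form is an assumption and should be checked against the original.

```python
import math
import random

def cox_lewis_test(event_times, t_start, t_stop):
    """Cox-Lewis test for a trend in the occurrence rate of a point
    process. Statistic (assumed form of Eq. 6.41): standardized mean
    event time under a homogeneous Poisson process. Returns u and the
    one-sided p-value against H1: increasing occurrence rate."""
    m = len(event_times)
    mean_time = sum(event_times) / m
    centre = 0.5 * (t_start + t_stop)
    scale = (t_stop - t_start) / math.sqrt(12.0 * m)
    u = (mean_time - centre) / scale
    p = 0.5 * math.erfc(u / math.sqrt(2.0))   # 1 - Phi(u)
    return u, p

# toy example: event times with increasing density on [1021, 2002]
random.seed(1)
events = sorted(1021.0 + 981.0 * random.random() ** 0.75 for _ in range(73))
u, p = cox_lewis_test(events, 1021.0, 2002.0)
print(f"u = {u:.2f}, one-sided p = {p:.3f}")
```

A large positive u (event times concentrated late in the interval) speaks against the constant-rate null hypothesis and for an upwards trend.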

6.3.2.11 Monte Carlo experiment: Cox-Lewis test versus Mann-Kendall test

The Cox-Lewis statistic (Eq. 6.41) can be used to test for monotonic trends in the occurrence of extremes, whereas the Mann-Kendall statistic (Eq. 4.61) was developed to test for changes in Xtrend(T). This theoretical unsuitability of the Mann-Kendall test (Zhang et al. 2004) has, however, not hindered climatologists and hydrologists from applying it to the study of extremes.

We analyse the performance of both tests in a Monte Carlo experiment with climatologically realistic properties of the data generating process: a persistent noise component with a non-normal distributional shape and an outlier or extreme component that exhibits an upwards trend in occurrence rate. That is, we study the performance of the overall procedure that is employed in practice: detecting extremes and testing for trends in their occurrence.

Figure 6.10. Density functions used in the Monte Carlo experiment (Tables 6.1, 6.2, 6.3 and 6.4). The PDF of the noise component (solid line) is a lognormal; the PDF of the extreme component (which replaces the noise component in the case an extreme occurs) is a chi-squared distribution with ν = 1 degree of freedom, shifted in x-direction by a value of 1.0 (short-dashed line) and 3.0 (long-dashed line), respectively.


Figure 6.10 shows that in one simulation setting (the outlier component shifted by 1.0) the PDFs of the outlier and noise components overlap to a good degree, while in the other (shifted by 3.0) they overlap much less.
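To make the data generating process concrete, the following Python sketch builds one sample along the lines summarized in the caption of Table 6.1 below: lognormal AR(1) noise, extreme event times drawn as a uniform variable raised to the power k and mapped onto the time interval, and extreme values from a shifted chi-squared distribution. The exact lognormal AR(1) parameterization (Table 3.5 of the book) is not reproduced here, so the construction below is an approximation that targets mean 1.0 and standard deviation 0.5.

```python
import numpy as np

def make_sample(n=1200, m_true=100, k=0.75, shift=1.0, seed=0):
    """One Monte Carlo sample in the spirit of Table 6.1: lognormal AR(1)
    noise plus an extreme component with an upwards trend in occurrence
    rate (assumed parameterization)."""
    rng = np.random.default_rng(seed)
    a = 1.0 / np.e                                     # AR(1) parameter
    t = np.arange(1, n + 1, dtype=float)

    # Gaussian AR(1) with unit marginal variance, then lognormal transform
    g = np.empty(n)
    g[0] = rng.standard_normal()
    for i in range(1, n):
        g[i] = a * g[i - 1] + np.sqrt(1.0 - a ** 2) * rng.standard_normal()
    sigma = np.sqrt(np.log(1.0 + 0.5 ** 2))            # coeff. of variation 0.5
    x_noise = np.exp(-0.5 * sigma ** 2 + sigma * g)    # mean 1.0

    # extreme event times: U**k mapped linearly onto [T(1), T(n)],
    # so that lambda(T) is proportional to T**(1/k - 1)
    t_out = t[0] + rng.uniform(size=m_true) ** k * (t[-1] - t[0])

    # extreme values: shifted chi-squared (nu = 1), each replacing the
    # noise value at the closest time point
    x = x_noise.copy()
    for tj in t_out:
        x[int(np.argmin(np.abs(t - tj)))] = shift + rng.chisquare(1)
    return t, x, np.sort(t_out)

t, x, t_out = make_sample()
print(x[:5])
print(t_out[:5])
```

On each such sample, the extremes are then detected (POT or block extremes) and the trend tests are applied, exactly as described for the experiment.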

Table 6.1. Monte Carlo experiment, hypothesis tests for trends in occurrence of extremes. nsim = 90,000 random samples were generated from X(i) = Xout(i) + Xnoise(i), where T(i) = i, i = 1, ..., n, and the noise is an AR(1) process with a = 1/e ≈ 0.37, lognormal shape, mean 1.0 and standard deviation 0.5 (Table 3.5). The number of extremes, mtrue, was prescribed. The extreme event times, Tout(j), were generated by taking a random variable uniformly distributed over [0; 1] to the power of k and mapping it linearly onto [T(1); T(n)]; the parameter k served to prescribe the trend in occurrence rate. Xout(j) was drawn from a shifted (+1 in x-direction) chi-squared distribution with ν = 1; this extreme value replaced the value Xnoise(i) for which the time, T(i), was closest to Tout(j). Extremes detection employed a constant threshold of median + 3.5 MAD (Fig. 4.15) for the POT approach and a block length of 12 (Fig. 6.2) for the block extremes approach. The Cox-Lewis test was applied to the detected POT data, the Mann-Kendall test to the POT data and also to the block extremes. The significance level of the one-sided tests was α = 0.10.

n        mtrue^a   k^b     Empirical power^c
                           Cox-Lewis   Mann-Kendall        Mann-Kendall
                           (POT)       (block extremes)    (POT)
120      10        0.75    0.161       0.089               0.041
240      20        0.75    0.181       0.099               0.069
600      50        0.75    0.236       0.150               0.077
1200     100       0.75    0.313       0.217               0.096
2400     200       0.75    0.434       0.338               0.125
6000     500       0.75    0.680       0.619               0.194
12,000   1000      0.75    0.883       0.868               0.300
120      10        0.9     0.129       0.065               0.036
240      20        0.9     0.132       0.066               0.058
600      50        0.9     0.147       0.080               0.059
1200     100       0.9     0.166       0.093               0.065
2400     200       0.9     0.195       0.115               0.073
6000     500       0.9     0.268       0.176               0.089
12,000   1000      0.9     0.357       0.261               0.109

a True (prescribed) number of extremes.

b Prescribed occurrence rate trend parameter, λ(T) ∝ T^(1/k − 1).

c Number of simulations where H0: "no trend" is rejected against H1: "upwards trend," divided by nsim. Standard error is nominally [α(1 − α)/nsim]^(1/2) = 0.001 (Efron and Tibshirani 1993).


The results (Tables 6.1, 6.2, 6.3 and 6.4) can be summarized as follows.

1. Higher numbers of extremes allow better detectability of trends in λ(T).

Table 6.2. Monte Carlo experiment, hypothesis tests for trends in occurrence of extremes (continued). The number of simulations was in each case nsim = 47,500. The significance level of the one-sided tests was α = 0.05. The shift parameter of the outlier component was 1.0. See Table 6.1 for further details.


n        mtrue^a   k^b     Empirical power^c
                           Cox-Lewis   Mann-Kendall        Mann-Kendall
                           (POT)       (block extremes)    (POT)
120      10        0.75    0.085       0.043               0.020
240      20        0.75    0.101       0.058               0.029
600      50        0.75    0.139       0.092               0.040
1200     100       0.75    0.195       0.142               0.051
2400     200       0.75    0.297       0.238               0.070
6000     500       0.75    0.542       0.494               0.120
12,000   1000      0.75    0.797       0.785               0.200
120      10        0.9     0.065       0.030               0.018
240      20        0.9     0.069       0.035               0.026
600      50        0.9     0.079       0.044               0.029
1200     100       0.9     0.089       0.052               0.033
2400     200       0.9     0.111       0.069               0.038
6000     500       0.9     0.166       0.108               0.049
12,000   1000      0.9     0.234       0.173               0.061

a True (prescribed) number of extremes.

b Prescribed occurrence rate trend parameter, λ(T) ∝ T^(1/k − 1).

c Number of simulations where H0: "no trend" is rejected against H1: "upwards trend," divided by nsim. Standard error is nominally [α(1 − α)/nsim]^(1/2) = 0.001.


2. Giving the extremes larger values (shift parameter) enhances their detectability and the power of the tests for trends in λ(T).

3. Performing a test at a lower significance level (α) reduces the power (as for hypothesis tests in general).

4. A stronger trend in λ(T) (parameter k) is easier to detect (higher power).

5. The best performance, for all settings studied, was achieved by the Cox-Lewis test. For example, when the data size is n = 1200, the shift parameter is 3.0, the prescribed number of extremes is mtrue = 100, which is equivalent to an average λ(T) of 1/12, and k = 0.75, which means an increase of λ(T) ∝ T^0.333, then this upwards trend is detected by the Cox-Lewis test at the 10% level in approximately 84.2% of all cases (Table 6.3).

Table 6.3. Monte Carlo experiment, hypothesis tests for trends in occurrence of extremes (continued). The number of simulations was in each case nsim = 90,000. The significance level of the one-sided tests was α = 0.10. The shift parameter of the outlier component was 3.0. See Table 6.1 for further details.


n        mtrue^a   k^b     Empirical power^c
                           Cox-Lewis   Mann-Kendall        Mann-Kendall
                           (POT)       (block extremes)    (POT)
120      10        0.75    0.267       0.145               0.064
240      20        0.75    0.377       0.200               0.069
600      50        0.75    0.622       0.379               0.080
1200     100       0.75    0.842       0.603               0.098
2400     200       0.75    0.977       0.857               0.125
6000     500       0.75    1.000       0.996               0.188
12,000   1000      0.75    1.000       1.000               0.280
120      10        0.9     0.143       0.080               0.056
240      20        0.9     0.169       0.088               0.057
600      50        0.9     0.229       0.122               0.062
1200     100       0.9     0.308       0.167               0.065
2400     200       0.9     0.442       0.246               0.070
6000     500       0.9     0.709       0.450               0.089
12,000   1000      0.9     0.909       0.696               0.106

a True (prescribed) number of extremes.

b Prescribed occurrence rate trend parameter, λ(T) ∝ T^(1/k − 1).

c Number of simulations where H0: "no trend" is rejected against H1: "upwards trend," divided by nsim. Standard error is nominally [α(1 − α)/nsim]^(1/2) = 0.001.


6. The Mann-Kendall test may be applied to the block extreme data, {Tout(j), Xout(j)}, j = 1, ..., m, where the central time of a block is taken as Tout(j). This leads to power levels that may be acceptable in practice. However, in all simulation settings the Cox-Lewis test performed significantly better than the Mann-Kendall test. (Note that the tuning of the block length resulted in m = mtrue. This may have elevated the test power compared to a situation where the block length has to be adjusted.)

7. The Mann-Kendall test applied to the POT data leads to an unacceptably low test power.

We therefore recommend using the Cox-Lewis test rather than any form of the Mann-Kendall test for studying trends in the occurrence of extreme events.

Table 6.4. Monte Carlo experiment, hypothesis tests for trends in occurrence of extremes (continued). The number of simulations was in each case nsim = 47,500. The significance level of the one-sided tests was α = 0.05. The shift parameter of the outlier component was 3.0. See Table 6.1 for further details.


n        mtrue^a   k^b     Empirical power^c
                           Cox-Lewis   Mann-Kendall        Mann-Kendall
                           (POT)       (block extremes)    (POT)
120      10        0.75    0.154       0.072               0.028
240      20        0.75    0.240       0.116               0.035
600      50        0.75    0.465       0.262               0.043
1200     100       0.75    0.728       0.474               0.055
2400     200       0.75    0.944       0.771               0.072
6000     500       0.75    1.000       0.991               0.117
12,000   1000      0.75    1.000       1.000               0.185
120      10        0.9     0.073       0.036               0.025
240      20        0.9     0.090       0.046               0.028
600      50        0.9     0.128       0.068               0.031
1200     100       0.9     0.188       0.099               0.034
2400     200       0.9     0.298       0.157               0.037
6000     500       0.9     0.567       0.329               0.048
12,000   1000      0.9     0.831       0.577               0.060

a True (prescribed) number of extremes.

b Prescribed occurrence rate trend parameter, λ(T) ∝ T^(1/k − 1).

c Number of simulations where H0: "no trend" is rejected against H1: "upwards trend," divided by nsim. Standard error is nominally [α(1 − α)/nsim]^(1/2) = 0.001.

