$$t_\nu(\beta) \approx z_\beta + \frac{z_\beta^3 + z_\beta}{4\nu} + \frac{5z_\beta^5 + 16z_\beta^3 + 3z_\beta}{96\nu^2} + \frac{3z_\beta^7 + 19z_\beta^5 + 17z_\beta^3 - 15z_\beta}{384\nu^3}, \qquad (3.58)$$

where z_β = z(β) is the percentage point of the standard normal distribution. For ν ≥ 10 and 0.0025 ≤ β ≤ 0.9975, this approximation has a relative error of less than 0.015% (own determination using Johnson et al. 1995: Table 28.7 therein). See Johnson et al. (1995: Chapter 28 therein) for more details on the t distribution.
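As an illustration only, the expansion above is straightforward to evaluate; the following sketch (the function name t_percentage_point and the use of Python's standard library for z_β are choices made here, not part of the original sources) computes the approximation:

```python
from statistics import NormalDist

def t_percentage_point(beta, nu):
    """Approximate the percentage point t_nu(beta) of the t distribution
    via the expansion in powers of 1/nu, where z is the standard normal
    percentage point z(beta)."""
    z = NormalDist().inv_cdf(beta)
    return (z
            + (z**3 + z) / (4 * nu)
            + (5 * z**5 + 16 * z**3 + 3 * z) / (96 * nu**2)
            + (3 * z**7 + 19 * z**5 + 17 * z**3 - 15 * z) / (384 * nu**3))

# Example: nu = 10, beta = 0.975; the exact value is about 2.2281
approx = t_percentage_point(0.975, 10)
```

For ν = 10 and β = 0.975 the sketch reproduces the exact value 2.2281 to within the quoted relative accuracy.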

The chi-squared distribution with ν degrees of freedom has the following PDF:

$$f(x) = \frac{\exp(-x/2)\, x^{\nu/2-1}}{2^{\nu/2}\, \Gamma(\nu/2)}, \qquad x > 0, \ \nu > 0. \qquad (3.59)$$

It has mean ν and variance 2ν. Approximations are used for calculating the percentage point, χ²_ν(β). For the Monte Carlo simulation experiments in this book, the following formula (Goldstein 1973) is employed:

$$\chi^2_\nu(\beta) \approx \nu \left[ 1 - \frac{2}{9\nu} + z_\beta \sqrt{\frac{2}{9\nu}} + \cdots \right]^3, \qquad (3.60)$$

in which the first three bracketed terms constitute the Wilson–Hilferty cube-root approximation and the dots denote Goldstein's higher-order correction terms, polynomials in z_β with denominators of order ν, ν² and ν³ (Goldstein 1973),

where z_β = z(β) is the percentage point of the standard normal distribution. For ν ≥ 10 and 0.001 ≤ β ≤ 0.999, this approximation has a relative error of less than 0.05% (Zar 1978). See Johnson et al. (1994: Chapter 18 therein) for more details on the chi-squared distribution.
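For orientation, the leading Wilson–Hilferty part of Goldstein's formula can be sketched as follows; note that this deliberately omits the higher-order correction terms, so its accuracy is coarser than the 0.05% quoted above (the function name is chosen here for illustration):

```python
from statistics import NormalDist

def chi2_percentage_point_wh(beta, nu):
    """Wilson-Hilferty cube-root approximation to the chi-squared
    percentage point. This is only the leading part of Goldstein's
    (1973) formula, without his correction terms."""
    z = NormalDist().inv_cdf(beta)
    return nu * (1 - 2 / (9 * nu) + z * (2 / (9 * nu)) ** 0.5) ** 3

# Example: nu = 10, beta = 0.95; the exact value is about 18.307
approx = chi2_percentage_point_wh(0.95, 10)
```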

The lognormal distribution can be defined as follows. If ln[X(i)] is distributed as N(μ, σ²), then X(i) has a lognormal distribution with parameters μ and σ (shape). It has the PDF

$$f(x) = (2\pi)^{-1/2} \cdot \sigma^{-1} \cdot x^{-1} \cdot \exp\left\{-\left[\ln(x/b)\right]^2/(2\sigma^2)\right\}, \qquad x > 0, \qquad (3.61)$$

where b = exp(μ). The lognormal has expectation exp(μ + σ²/2) and variance exp(2μ) · exp(σ²) · [exp(σ²) − 1]. Other definitions with an additional shift parameter (X(i) − ζ instead of X(i)) exist. See Aitchison and Brown (1957), Antle (1985), Crow and Shimizu (1988) or Johnson et al. (1994: Chapter 14 therein) for more details on the lognormal distribution.
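The expectation formula can be checked by a small simulation, assuming nothing beyond the definition above (the parameter values μ = 0.5, σ = 0.3 and the sample size are arbitrary illustration choices):

```python
import math
import random

# Monte Carlo check of the lognormal expectation exp(mu + sigma^2/2);
# mu = 0.5 and sigma = 0.3 are arbitrary illustration values.
rng = random.Random(42)
mu, sigma = 0.5, 0.3
n = 200_000
# X(i) = exp(N(mu, sigma^2)) is lognormal by the definition above
sample_mean = sum(math.exp(rng.gauss(mu, sigma)) for _ in range(n)) / n
theoretical_mean = math.exp(mu + sigma**2 / 2)
```

With this sample size the Monte Carlo error is well below 0.01, so sample and theoretical means agree closely.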

The geometric distribution is a discrete distribution with

$$\mathrm{prob}(X = x) = p \cdot q^x, \qquad x = 0, 1, 2, \ldots, \qquad (3.62)$$

where q = 1 − p and 0 < p < 1. It has expectation q/p. See Johnson et al. (1993: Chapter 5 therein) for more details on the geometric distribution.
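A quick numerical check of Eq. (3.62) and the expectation q/p (p = 0.3 is an arbitrary illustration value):

```python
# Numerical check that the geometric PMF sums to 1 and has mean q/p;
# p = 0.3 is an arbitrary illustration value.
p = 0.3
q = 1 - p
xs = range(1000)                      # tail beyond x = 1000 is negligible
total = sum(p * q**x for x in xs)     # should be close to 1
mean = sum(x * p * q**x for x in xs)  # should be close to q/p
```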

BCa CI construction has numerical pitfalls. Regarding the bias correction, ẑ₀, in the case of a discretely distributed, unsmooth estimator, θ̂, own experiments with median estimation and x(i) ∈ ℤ (whole numbers) have shown that a higher CI accuracy is achieved when using, instead of Eq. (3.37), the following formula:

$$\hat{z}_0 = \Phi^{-1}\left[\frac{\#\{\hat{\theta}^{*b} < \hat{\theta}\}}{B} + \frac{\#\{\hat{\theta}^{*b} = \hat{\theta}\}}{2B}\right]. \qquad (3.63)$$

Because only a finite number, B, of θ̂* values are computed, θ̂*(α₁) and θ̂*(α₂) are calculated by interpolation. If B is too small, the acceleration, â, too large and α too small, then α₁ may become too small or α₂ too large for the interpolation to be carried out. The choice of values for this book (B = 2000, α ≥ 0.025), however, prevents this problem. See Efron and Tibshirani (1993: Section 14.7 therein) and Davison and Hinkley (1997: Section 5.3.2 therein) on the interpolation pitfall, and Andrews and Buchinsky (2000, 2002) on the choice of B. Refer to Polansky (1999) on the finite-sample bounds on coverage for percentile-based CIs. As regards estimation of the acceleration, possible alternatives to Eq. (3.38) are analysed by Frangos and Schucany (1990).
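The tie-corrected bias correction of Eq. (3.63) can be sketched as follows (function and variable names are chosen here for illustration; the inverse standard normal CDF is taken from Python's standard library):

```python
from statistics import NormalDist

def bias_correction_z0(theta_star, theta_hat):
    """Tie-corrected bias correction for BCa CIs: count bootstrap
    replications strictly below theta_hat plus half the ties, then
    apply the inverse standard normal CDF. Suited to discrete,
    unsmooth estimators such as the median of integer-valued data."""
    B = len(theta_star)
    below = sum(1 for t in theta_star if t < theta_hat)
    ties = sum(1 for t in theta_star if t == theta_hat)
    return NormalDist().inv_cdf(below / B + ties / (2 * B))
```

With half the replications below θ̂ and no ties, the argument is 1/2 and ẑ₀ = 0, as expected for an unbiased situation.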

The balanced bootstrap (Davison et al. 1986) is a bootstrap variant in which, over all n · B resampling operations, each of the values {x(i)}, i = 1, ..., n, is prescribed to be drawn equally often (B times). This can increase the accuracy of bootstrap estimates or, alternatively, allow B to be reduced while retaining the accuracy of the "unbalanced" bootstrap with a higher number of resamples. In the case of a process without serial dependence, a simple algorithm for a balanced version of the ordinary bootstrap is as follows (Davison and Hinkley 1997: Section 9.2.1 therein). Step 1. Concatenate B copies of {x(i)}, i = 1, ..., n, into a single set S of size n · B. Step 2. Permute the elements of S at random and call the result S*. Step 3. For b = 1, ..., B, take successive sets of n elements of S* as balanced resamples {x*b(i)}, i = 1, ..., n. In the case of serial dependence, a balanced version of the MBB would permute blocks of elements of S. A reduced number of resamples, B, means reduced computing costs for the balanced bootstrap. How large this gain is depends on the type of estimation; the gain may not be large for quantile estimation (Davison and Hinkley 1997), which is required in BCa CI construction (Section 3.4.4).
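The three steps above can be sketched as follows (the function name and seed handling are illustration choices):

```python
import random

def balanced_bootstrap(x, B, seed=0):
    """Balanced version of the ordinary bootstrap (no serial
    dependence): Step 1 concatenates B copies of the sample, Step 2
    permutes the pooled set at random, Step 3 splits it into B
    resamples of size n, so that each x(i) is drawn exactly B times
    over all resamples."""
    rng = random.Random(seed)
    n = len(x)
    pool = list(x) * B                                   # Step 1
    rng.shuffle(pool)                                    # Step 2
    return [pool[b * n:(b + 1) * n] for b in range(B)]   # Step 3
```

Across the B resamples each sample value occurs exactly B times, which is precisely the balance property.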

2SAMPLES (Mudelsee and Alkio 2007) is a Fortran 90 program for performing comparisons of location measures (mean and median) and variability measures (standard deviation and MAD) between two samples. The difference measures are estimated with BCa CIs. It is freely available from http://www.mudelsee.com (27 November 2009).

DOS and Excel resampling programs are freely available for download at http://userweb.port.ac.uk/~woodm/programs.htm (2 August 2005).

Good (2005) is a reference where routines for bootstrap resampling and for BCa and bootstrap-t CI construction can be found; two- and multi-sample comparisons are also included. The following languages/environments are supported: C++, EViews, Excel, GAUSS, Matlab, R, Resampling Stats, SAS, S-Plus and Stata.

A Matlab/R computer code for practical implementation of the block length selector of Politis and White (2004) can be downloaded from http://econ.duke.edu/~ap172/ (29 June 2010).

Resampling Stats is a resampling software purchasable as standalone, Excel and Matlab versions from http://www.resample.com (2 August 2005).

Shazam is a commercial econometrics software that includes bootstrap resampling (http://shazam.econ.ubc.ca, 2 August 2005).

SPSS is a software package that includes bootstrap resampling and CI construction (Version 13.0: SPSS, Inc., Chicago, IL 60606, USA; IBM SPSS Statistics Version 18: http://spss.com/software/statistics, 5 January 2010).

Part II

Univariate Time Series
