
The role of the acceleration constant a is less obvious, but it is partially related to bias in the estimation of the standard error. For example, if z₀ = 0 and given that z_(α) is negative and z_(1−α) is positive, it can be shown that changing a from zero to a small positive value will widen the BCa bootstrap percentile confidence intervals. More generally, we could argue that the usual normal approximation

~-! ""N(O,l) se(f3)

(where se(β̂) is the standard error of β̂) may be generalized to

\[
\frac{m(\hat\beta) - m(\beta)}{se(m(\hat\beta))} + z_0 \sim N(0,1) \qquad (9)
\]

\[
\frac{m(\hat\beta) - m(\beta)}{se_{\beta_0}(m(\hat\beta))\,\{1 + a\,m(\beta)\}} + z_0 \sim N(0,1) \qquad (10)
\]

for some increasing transformation m, where se_{β₀}(m(β̂)) denotes the standard error of m(β̂) when the true value β equals any conveniently chosen β₀; recall the point of the exercise is that in finite samples, se(m(β̂)) will depend on the value of β, and the approximation in the denominator in (10) attempts to capture this. Efron or Efron and Tibshirani (1993, pp. 326-327) show that if we use the normalizing transform m, calculate confidence intervals based on the normal distribution, and then transform back using m⁻¹, we obtain the BCa intervals, except that a and z₀ need to be estimated. Note that m does not need to be known. These papers also argue that in one-parameter families, a good approximation for a is one-sixth of the skewness coefficient of the score function of β, evaluated at β̂; for multiparameter families, they offer a formula based on the infinitesimal jackknife. However, most econometricians will prefer, at least computationally, the simpler jackknife formula (Efron and Tibshirani, 1993, p. 186):

\[
\hat a = \frac{\sum \{\hat\beta_J - \hat\beta_{(i)}\}^3}{6\,\{\sum (\hat\beta_J - \hat\beta_{(i)})^2\}^{3/2}} \qquad (11)
\]

where the summations run from 1 to n, β̂_(i) is β̂ calculated on a sample with the ith observation deleted, and β̂_J, the jackknife estimator of β, is the average of the β̂_(i).
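As a rough illustration, the jackknife formula (11) and the effect of the acceleration constant can be sketched as follows. The function names are ours, and the percentile-adjustment mapping used in `bca_percentiles` is the standard BCa formula from the Efron and Tibshirani references cited above, which is not reproduced in the text:

```python
import numpy as np
from scipy.stats import norm

def jackknife_a(x, stat):
    """Jackknife estimate of the acceleration constant a, as in (11).

    stat maps a sample to the scalar estimate beta-hat.
    """
    n = len(x)
    # beta-hat_(i): the estimate with the ith observation deleted
    beta_i = np.array([stat(np.delete(x, i)) for i in range(n)])
    beta_J = beta_i.mean()  # jackknife estimator: average of the beta-hat_(i)
    d = beta_J - beta_i
    return np.sum(d**3) / (6.0 * np.sum(d**2) ** 1.5)

def bca_percentiles(z0, a, alpha=0.05):
    """Map the nominal percentiles (alpha, 1 - alpha) through the standard
    BCa adjustment: Phi(z0 + (z0 + z) / (1 - a * (z0 + z)))."""
    z = norm.ppf([alpha, 1.0 - alpha])
    return norm.cdf(z0 + (z0 + z) / (1.0 - a * (z0 + z)))
```

With z₀ = 0 and a = 0 the mapping returns (α, 1 − α), i.e. the ordinary percentile interval; moving a away from zero shifts both cutoffs, which is how the acceleration constant alters the intervals. For a symmetric sample the leave-one-out estimates of the mean are symmetric about β̂_J, so `jackknife_a` returns zero.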

C. Percentile-t Methods

An alternative refinement to BCa methods is the percentile-t bootstrap confidence interval. Suppose that after estimating β̂ and se(β̂) on the original sample, β̂* and se(β̂*) are estimated on each bootstrap sample. (A key requirement for this method is that some form of standard error estimate is available for both the original and the bootstrapped data.) If one thinks of the bootstrap process as a Monte Carlo experiment, it is natural to think of β = β̂ as the null hypothesis to be tested in each trial, and hence natural to calculate the t-ratio t* = (β̂* − β̂)/se(β̂*) on each trial. The bootstrap procedure therefore essentially generates a distribution for this t-ratio under a particular null hypothesis, and the 1 − 2α percentile-t confidence intervals become

\[
\left(\hat\beta - t^*_{1-\alpha}\,se(\hat\beta),\;\; \hat\beta - t^*_{\alpha}\,se(\hat\beta)\right) \qquad (12)
\]

where t*_α is the αth percentile of the t*'s. Essentially, this technique uses the bootstrap to create its own critical values instead of using those supplied by the usual t-distribution.
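A minimal sketch of the procedure, using the sample mean and its textbook standard error as stand-ins for β̂ and se(β̂) (any estimator with a computable standard error would do; the function name is ours):

```python
import numpy as np

def percentile_t_ci(x, alpha=0.05, B=999, seed=0):
    """Percentile-t (bootstrap-t) interval for the mean, following (12)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    beta_hat = x.mean()
    se_hat = x.std(ddof=1) / np.sqrt(n)
    t_star = np.empty(B)
    for b in range(B):
        # resample with replacement and compute t* = (beta* - beta-hat)/se(beta*)
        xb = rng.choice(x, size=n, replace=True)
        t_star[b] = (xb.mean() - beta_hat) / (xb.std(ddof=1) / np.sqrt(n))
    t_lo, t_hi = np.quantile(t_star, [alpha, 1.0 - alpha])
    # Note the reversal in (12): the upper t* percentile sets the lower endpoint
    return beta_hat - t_hi * se_hat, beta_hat - t_lo * se_hat
```

The resulting interval is generally asymmetric about β̂, since the bootstrap distribution of t* need not be symmetric; that asymmetry is precisely what the bootstrap-generated critical values capture.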