ABSTRACT

Empirical economists and econometricians have long looked to simulation studies for guidance concerning the choice and performance of estimators whose theoretical justification is often asymptotic. For example, in any econometrics class that describes a method based on an asymptotic distribution, an instructor will almost always encounter the question, "How large a sample is large enough?" It is a good question, particularly because in most applications the character of the data (e.g., trending vs. nontrending) matters as much as the actual number of observations. Most instructors will give an answer based on simulation studies, but in an application there is often a considerable gap between cases that have been studied by Monte Carlo methods and the case at hand. The bootstrap method may be regarded as a simulation study that is tailored to the actual data being studied, with the results used either to fill in statistical gaps that do not yield easily to analytic methods (such as providing standard errors or confidence intervals when they are otherwise unavailable) or to adjust the original statistical estimates in an attempt to improve finite-sample accuracy. It is therefore not surprising that the bootstrap has proven useful to many empirical researchers in economics, especially as the approach replaces difficult or intractable theoretical calculations with computer calculations that are becoming cheaper and cheaper over time. While bootstrap-like notions had existed previously, even within econometrics, the seminal work for these developments is Efron (1979), the classic paper in statistics that named the bootstrap, developed it as a unified technique, and demonstrated how computer power could widen the scope of its implementation.
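As a concrete illustration of the idea sketched above, the following minimal Python example applies Efron's nonparametric bootstrap to an OLS slope coefficient, producing a bootstrap standard error and a percentile confidence interval. The data-generating process, sample size, and number of bootstrap replications are illustrative assumptions chosen for the sketch, not values taken from this paper.

```python
# Minimal sketch of the nonparametric bootstrap (Efron, 1979) for an OLS slope.
# All numbers below (n, B, the simulated DGP) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data standing in for "the actual data being studied".
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

def ols_slope(x, y):
    """Slope coefficient from a simple regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]

# Resample (x_i, y_i) pairs with replacement and re-estimate the slope each time.
B = 2000
boot_slopes = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot_slopes[b] = ols_slope(x[idx], y[idx])

slope_hat = ols_slope(x, y)
boot_se = boot_slopes.std(ddof=1)                      # bootstrap standard error
ci_lower, ci_upper = np.percentile(boot_slopes, [2.5, 97.5])  # percentile interval

print(f"OLS slope estimate:       {slope_hat:.3f}")
print(f"Bootstrap standard error: {boot_se:.3f}")
print(f"95% percentile interval:  [{ci_lower:.3f}, {ci_upper:.3f}]")
```

Because the resampling distribution is built from the observed sample itself, the procedure acts as a simulation study tailored to the data at hand rather than to a generic Monte Carlo design.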