Bootstrap method

From Encyclopedia of Mathematics
Latest revision as of 05:52, 15 June 2017

A computer-intensive re-sampling method, introduced in statistics by B. Efron in 1979 ([a3]) for estimating the variability of statistical quantities and for setting confidence regions. The name ‘bootstrap’ refers to the analogy of pulling oneself up by one’s own bootstraps. Efron’s bootstrap is to re-sample the data: given observations $ X_{1},\ldots,X_{n} $, artificial bootstrap samples are drawn with replacement from $ X_{1},\ldots,X_{n} $, putting an equal probability mass of $ \dfrac{1}{n} $ on $ X_{i} $ for each $ i \in \{ 1,\ldots,n \} $. For example, with a sample size of $ n = 5 $ and distinct observations $ X_{1},X_{2},X_{3},X_{4},X_{5} $, one might obtain $ X_{3},X_{3},X_{1},X_{5},X_{4} $ as a bootstrap sample. In fact, there are $ \binom{2 \cdot 5 - 1}{5} = \binom{9}{5} = 126 $ distinct bootstrap samples in this case.
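As a minimal sketch (the data values and seed below are illustrative assumptions, not taken from the article), the resampling step and the count of distinct bootstrap samples can be written in Python as follows:

```python
import random
from math import comb

# Illustrative data: n = 5 distinct observations (values are assumptions).
data = [2.1, 3.7, 1.4, 5.9, 4.2]
n = len(data)

# One bootstrap sample: draw n values with replacement,
# each observation carrying probability mass 1/n.
random.seed(0)
bootstrap_sample = [random.choice(data) for _ in range(n)]
print(bootstrap_sample)

# The number of distinct bootstrap samples (as multisets) is C(2n - 1, n).
print(comb(2 * n - 1, n))  # 126 for n = 5
```

The count $ \binom{2n-1}{n} $ is the number of multisets of size $ n $ drawn from $ n $ distinct values, which is why repeated values such as $ X_{3},X_{3} $ are allowed in a bootstrap sample.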

A more rigorous description of Efron’s non-parametric bootstrap in a simple setting is as follows. Suppose that $ (X_{1},\ldots,X_{n}) $ is a random sample of size $ n $ drawn from a population whose underlying probability space is $ (\Omega,\mathscr{S},\mathsf{P}) $, i.e., the $ X_{i} $’s are independent and identically-distributed random variables on $ (\Omega,\mathscr{S},\mathsf{P}) $. Let $ F $ denote the common cumulative distribution function (cdf) of the $ X_{i} $’s, which we assume to be unknown. Let $ \Theta $ be some pre-specified (non-linear) functional on the space of cdf’s; the parameter that we want to estimate is $ \theta \stackrel{\text{df}}{=} \Theta(F) $. Let $ T: \mathbf{R}^{n} \to \mathbf{R} $ be a statistical estimator for $ \theta $ based on $ (X_{1},\ldots,X_{n}) $ (cf. also statistical estimation), and let $ T_{n} \stackrel{\text{df}}{=} T(X_{1},\ldots,X_{n}) $. The object of interest is then the cdf $ G_{n} $ of the random variable $ \sqrt{n} (T_{n} - \theta) $ on $ (\Omega,\mathscr{S},\mathsf{P}) $, defined by

$$ \forall x \in \mathbf{R}: \qquad {G_{n}}(x) \stackrel{\text{df}}{=} \mathsf{P}(\{ \omega \in \Omega \mid \sqrt{n} [{T_{n}}(\omega) - \theta] \leq x \}). $$

This is the cdf of $ T_{n} $, properly normalized. The scaling factor $ \sqrt{n} $ is a classical one, while the centering of $ T_{n} $ is by the parameter $ \theta $.

Efron’s non-parametric bootstrap estimator of $ G_{n} $, which we denote by $ G_{n}^{\ast} $, is now defined by

$$ \forall x \in \mathbf{R}, ~ \forall \omega \in \Omega: \qquad {G_{n}^{\ast}}(x;\omega) \stackrel{\text{df}}{=} [{\mathsf{P}_{n}^{\ast}}(\omega)](\{ \mathbf{a} \in \{ {X_{1}}(\omega),\ldots,{X_{n}}(\omega) \}^{n} \mid \sqrt{n} [[{T_{n}^{\ast}}(\omega)](\mathbf{a}) - {\theta_{n}}(\omega)] \leq x \}). $$

Here, $ {T_{n}^{\ast}}(\omega) \stackrel{\text{df}}{=} T({X_{1}^{\ast}}(\omega),\ldots,{X_{n}^{\ast}}(\omega)) $, where we have the following:

  • $ ({X_{1}^{\ast}}(\omega),\ldots,{X_{n}^{\ast}}(\omega)) $ is a random sample (the bootstrap sample) drawn from the set $ \{ {X_{1}}(\omega),\ldots,{X_{n}}(\omega) \}^{n} $.
  • For each $ i \in \{ 1,\ldots,n \} $, the random variable $ {X_{i}^{\ast}}(\omega) $ is defined by $ [{X_{i}^{\ast}}(\omega)](\mathbf{a}) \stackrel{\text{df}}{=} \mathbf{a}(i) $ for all $ \mathbf{a} \in \{ {X_{1}}(\omega),\ldots,{X_{n}}(\omega) \}^{n} $.

The cdf of the $ {X_{i}^{\ast}}(\omega) $’s is designated to be $ {\hat{F}_{n}}(\bullet;\omega) $, where $ \hat{F}_{n} $ denotes the empirical cdf associated with the sample $ (X_{1},\ldots,X_{n}) $. Note that $ \hat{F}_{n} $ (which is a random step function) puts a probability mass of $ \dfrac{1}{n} $ on $ X_{i} $ for each $ i \in \{ 1,\ldots,n \} $, and it is sometimes referred to as the re-sampling distribution. Finally, $ {\mathsf{P}_{n}^{\ast}}(\omega) $ denotes the probability measure on $ \{ {X_{1}}(\omega),\ldots,{X_{n}}(\omega) \}^{n} $ that corresponds to $ {\hat{F}_{n}}(\bullet;\omega) $, and $ {\theta_{n}}(\omega) \stackrel{\text{df}}{=} \Theta \! \left( {\hat{F}_{n}}(\bullet;\omega) \right) $.

Obviously, given an observed sample $ ({X_{1}}(\omega),\ldots,{X_{n}}(\omega)) $, the cdf $ {\hat{F}_{n}}(\bullet;\omega) $ is completely known and hence, $ {G_{n}^{\ast}}(\bullet;\omega) $ is also completely known (at least in principle). One may view $ G_{n}^{\ast} $ as the empirical counterpart in the ‘bootstrap world’ to $ G_{n} $ in the ‘real world’. In practice, an exact computation of $ G_{n}^{\ast} $ is usually impractical (for an observed sample $ ({X_{1}}(\omega),\ldots,{X_{n}}(\omega)) $ consisting of $ n $ distinct numbers, there are $ \displaystyle \binom{2 n - 1}{n} $ distinct bootstrap samples), but $ G_{n}^{\ast} $ can be approximated by means of Monte-Carlo simulation. Efficient bootstrap simulation is discussed, for example, in [a2] and [a10].
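A sketch of such a Monte-Carlo approximation, for the special case where $ T $ is the sample mean (so that $ \theta_{n} $ is the mean of $ \hat{F}_{n} $), might look as follows; the data, the seed and the number of replications $ B $ are illustrative assumptions:

```python
import random
import statistics

# Monte-Carlo approximation of the bootstrap cdf G_n* for T = sample mean.
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # assumed sample
n = len(data)
theta_n = statistics.mean(data)  # Theta applied to the empirical cdf

B = 2000  # number of bootstrap replications (an assumption)
replicates = []
for _ in range(B):
    sample = random.choices(data, k=n)  # resample from F-hat_n
    replicates.append(n ** 0.5 * (statistics.mean(sample) - theta_n))

def G_star(x):
    """Monte-Carlo estimate of G_n*(x): fraction of replicates <= x."""
    return sum(r <= x for r in replicates) / B

print(G_star(0.0))  # roughly 0.5 when the bootstrap distribution is near-symmetric
```

Each replicate plays the role of $ \sqrt{n} [{T_{n}^{\ast}}(\omega) - \theta_{n}(\omega)] $ for one draw from $ \{ {X_{1}}(\omega),\ldots,{X_{n}}(\omega) \}^{n} $, so the empirical distribution of the $ B $ replicates approximates $ {G_{n}^{\ast}}(\bullet;\omega) $.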

When does Efron’s bootstrap work? The consistency of the bootstrap approximation $ G_{n}^{\ast} $, viewed as an estimate of $ G_{n} $, i.e., the requirement that

$$ \sup_{x \in \mathbf{R}} |{G_{n}^{\ast}}(x;\bullet) - {G_{n}}(x)| \stackrel{\mathsf{P}}{\longrightarrow} 0 \qquad (\text{convergence in probability}), $$

is generally viewed as an absolute prerequisite for Efron’s bootstrap to work in the problem at hand. Of course, bootstrap consistency is only a first-order asymptotic result, and the error committed when $ G_{n} $ is estimated by $ G_{n}^{\ast} $ may still be quite large in finite samples. Second-order asymptotics (Edgeworth expansions; cf. Edgeworth series) enables one to investigate the speed at which $ \sup_{x \in \mathbf{R}} |{G_{n}^{\ast}}(x;\bullet) - {G_{n}}(x)| $ converges to $ 0 $ in probability, and also to identify cases where the rate of convergence is faster than $ \dfrac{1}{\sqrt{n}} $, the classical Berry–Esseen-type rate for the normal approximation. An example in which the bootstrap is more accurate than the traditional normal approximation is the Student $ t $-statistic and, more generally, Studentized statistics. For this reason, the use of bootstrapped Studentized statistics for setting confidence intervals is strongly advocated in a number of important problems. A general reference is [a7].
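To illustrate the Studentized approach, here is a sketch of a bootstrap-$t$ confidence interval for a population mean; the data, the seed, the number of replications and the 95% level are assumptions made for the example, not prescriptions from the article:

```python
import random
import statistics

# Sketch of a bootstrap-t (Studentized) confidence interval for the mean.
random.seed(2)
data = [random.expovariate(1.0) for _ in range(40)]  # assumed skewed sample
n = len(data)
mean, sd = statistics.mean(data), statistics.stdev(data)

B = 4000  # number of bootstrap replications (an assumption)
t_stars = []
for _ in range(B):
    s = random.choices(data, k=n)
    m, sdev = statistics.mean(s), statistics.stdev(s)
    if sdev > 0:
        t_stars.append(n ** 0.5 * (m - mean) / sdev)  # Studentized replicate
t_stars.sort()

# Bootstrap quantiles of the Studentized statistic, inverted to an interval.
lo = t_stars[int(0.025 * len(t_stars))]
hi = t_stars[int(0.975 * len(t_stars))]
ci = (mean - hi * sd / n ** 0.5, mean - lo * sd / n ** 0.5)
print(ci)
```

Because each replicate is divided by its own resample standard deviation, the bootstrap approximates the distribution of the Studentized statistic directly, which is the source of the second-order accuracy discussed above.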

When does the bootstrap fail? It has been proved in [a1] that, in the case of the mean, Efron’s bootstrap fails when $ F $ is in the domain of attraction of an $ \alpha $-stable law with $ 0 < \alpha < 2 $. However, by re-sampling from $ \hat{F}_{n} $ with a (smaller) re-sample size $ m(n) $ that satisfies $ m(n) \to \infty $ and $ \dfrac{m(n)}{n} \to 0 $ as $ n \to \infty $, it can be shown that the (modified) bootstrap works. More generally, in recent years the importance of a proper choice of the re-sampling distribution has become clear (see [a5], [a9] and [a10]).
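The $m$-out-of-$n$ modification can be sketched as follows; the choice $ m(n) = \lfloor \sqrt{n} \rfloor $ is one illustrative choice satisfying $ m(n) \to \infty $ and $ m(n)/n \to 0 $, and the data and seed are assumptions:

```python
import math
import random
import statistics

# Sketch of the m-out-of-n bootstrap: resample m(n) << n values from the
# empirical distribution instead of n values.
random.seed(3)
data = [random.gauss(0.0, 1.0) for _ in range(400)]  # assumed sample
n = len(data)
m = math.floor(n ** 0.5)  # m = 20 here; one choice with m/n -> 0
theta_n = statistics.mean(data)

# Replicates of sqrt(m) * (mean of an m-resample - theta_n).
replicates = [
    m ** 0.5 * (statistics.mean(random.choices(data, k=m)) - theta_n)
    for _ in range(1000)
]
print(m, statistics.pstdev(replicates))  # spread is on the order of the sample sd
```

For well-behaved $ F $ this reduced-resample scheme behaves like the ordinary bootstrap, and in the heavy-tailed cases above it restores consistency where the $n$-out-of-$n$ bootstrap fails.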

The bootstrap can be an effective tool in many problems of statistical inference, for example, the construction of a confidence band in non-parametric regression, testing for the number of modes of a density, or the calibration of confidence bounds (see [a2], [a4] and [a8]). Re-sampling methods for dependent data, such as the block bootstrap, are another important topic of recent research (see [a2] and [a6]).
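As one concrete instance of re-sampling for dependent data, a moving-block bootstrap (a common variant of the block bootstrap) can be sketched as follows; the series, the seed and the block length are illustrative assumptions:

```python
import random

# Sketch of the moving-block bootstrap: resample overlapping blocks of
# consecutive observations and concatenate them, so that within-block
# dependence is preserved.
random.seed(4)
x = [random.gauss(0.0, 1.0) for _ in range(100)]  # assumed time series
n, L = len(x), 5  # block length L = 5 is an assumption

blocks = [x[i:i + L] for i in range(n - L + 1)]  # all overlapping blocks
k = n // L  # number of blocks per bootstrap series

resampled = []
for _ in range(k):
    resampled.extend(random.choice(blocks))  # draw a block at random
print(len(resampled))  # 100: same length as the original series
```

Re-sampling blocks rather than individual observations is what allows the bootstrap distribution to reflect the serial dependence of the data.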

References

[a1] K.B. Athreya, “Bootstrap of the mean in the infinite variance case”, Ann. Statist., 15 (1987), pp. 724–731.
[a2] A.C. Davison, D.V. Hinkley, “Bootstrap methods and their application”, Cambridge Univ. Press (1997).
[a3] B. Efron, “Bootstrap methods: another look at the jackknife”, Ann. Statist., 7 (1979), pp. 1–26.
[a4] B. Efron, R.J. Tibshirani, “An introduction to the bootstrap”, Chapman & Hall (1993).
[a5] E. Giné, “Lectures on some aspects of the bootstrap”, P. Bernard (ed.), Ecole d'Eté de Probab. Saint Flour XXVI-1996, Lecture Notes Math., 1665, Springer (1997).
[a6] F. Götze, H.R. Künsch, “Second order correctness of the blockwise bootstrap for stationary observations”, Ann. Statist., 24 (1996), pp. 1914–1933.
[a7] P. Hall, “The bootstrap and Edgeworth expansion”, Springer (1992).
[a8] E. Mammen, “When does bootstrap work? Asymptotic results and simulations”, Lecture Notes Statist., 77, Springer (1992).
[a9] H. Putter, W.R. van Zwet, “Resampling: consistency of substitution estimators”, Ann. Statist., 24 (1996), pp. 2297–2318.
[a10] J. Shao, D. Tu, “The jackknife and bootstrap”, Springer (1995).
How to Cite This Entry:
Bootstrap method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bootstrap_method&oldid=11753
This article was adapted from an original article by Roelof Helmers (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article