Bootstrap method
Revision as of 05:18, 18 April 2017
A computer-intensive re-sampling method, introduced in statistics by B. Efron in 1979 ([a3]) for estimating the variability of statistical quantities and for setting confidence regions (cf. also sample). The name ‘bootstrap’ refers to the analogy of pulling oneself up by one’s own bootstraps. Efron’s bootstrap re-samples the data: given observations $X_{1},\ldots,X_{n}$, artificial bootstrap samples are drawn with replacement from $X_{1},\ldots,X_{n}$, putting an equal probability mass of $\dfrac{1}{n}$ on $X_{i}$ for each $i \in \{ 1,\ldots,n \}$. For example, with a sample size of $n = 5$ and distinct observations $X_{1},X_{2},X_{3},X_{4},X_{5}$, one might obtain $X_{3},X_{3},X_{1},X_{5},X_{4}$ as a bootstrap sample. In fact, there are $\binom{9}{5} = 126$ distinct bootstrap samples in this case.
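As a minimal illustration (not part of the original article), the following sketch draws one bootstrap sample with replacement and counts the distinct bootstrap samples for $n = 5$; the data values are invented for the example.

```python
import random
from math import comb

def bootstrap_sample(data, rng):
    """Draw n values with replacement from the data (Efron's re-sampling step)."""
    n = len(data)
    return [data[rng.randrange(n)] for _ in range(n)]

rng = random.Random(0)
data = [1.2, 0.7, 3.4, 2.1, 5.0]  # n = 5; illustrative values
sample = bootstrap_sample(data, rng)

# Number of distinct bootstrap samples, counted as multisets: C(2n - 1, n).
n = len(data)
num_distinct = comb(2 * n - 1, n)
print(sample, num_distinct)  # num_distinct is 126 for n = 5
```

Each of the `n` draws picks any of the `n` observations with equal probability $1/n$, so repeats (like $X_{3}$ twice above) are expected.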
A more formal description of Efron’s non-parametric bootstrap in a simple setting is as follows. Suppose that $(X_{1},\ldots,X_{n})$ is a random sample of size $n$ from a population with an unknown distribution function $F_{\theta}$ on the real line, i.e., the $X_{i}$’s are assumed to be independent and identically distributed random variables with a common distribution function $F_{\theta}$ that depends on a real-valued parameter $\theta$. Let $T_{n} = T_{n}(X_{1},\ldots,X_{n})$ denote a statistical estimator for $\theta$, based on the data $X_{1},\ldots,X_{n}$ (cf. also statistical estimation). The object of interest is then the probability distribution $G_{n}$ of $\sqrt{n} (T_{n} - \theta)$, defined by
$$
\forall x \in \mathbf{R}: \qquad G_{n}(x) \stackrel{\text{df}}{=} \mathsf{P}_{\theta}\big(\sqrt{n} (T_{n} - \theta) \leq x\big),
$$
which is the exact distribution function of $T_{n}$, properly normalized. The scaling factor $\sqrt{n}$ is the classical one, while $T_{n}$ is centred at the parameter $\theta$. Here, $\mathsf{P}_{\theta}$ denotes the probability measure corresponding to $F_{\theta}$.
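To make $G_{n}$ concrete, one can approximate it by simulation when $F$ is known. The sketch below (an illustration added here, not from the article) takes $T_{n}$ to be the sample mean and $F$ the standard exponential distribution, so $\theta = 1$; these choices are assumptions for the example.

```python
import random
import statistics

def G_n(x, n=20, reps=2000, seed=0):
    """Monte-Carlo approximation of G_n(x) = P(sqrt(n) * (T_n - theta) <= x)
    for T_n = sample mean under F = Exp(1), where theta = 1."""
    rng = random.Random(seed)
    theta = 1.0
    count = 0
    for _ in range(reps):
        xs = [rng.expovariate(1.0) for _ in range(n)]
        t_n = statistics.mean(xs)
        if (n ** 0.5) * (t_n - theta) <= x:
            count += 1
    return count / reps

print(G_n(0.0))  # close to 1/2, with some skewness for finite n
```

In practice $F$ (and hence $G_{n}$) is unknown; that is precisely the gap the bootstrap estimator below is meant to fill.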
Efron’s non-parametric bootstrap estimator of $G_{n}$ is now defined by
$$
\forall x \in \mathbf{R}: \qquad G_{n}^{\ast}(x) \stackrel{\text{df}}{=} \mathsf{P}_{n}^{\ast}\big(\sqrt{n} (T_{n}^{\ast} - \theta_{n}) \leq x\big).
$$
Here, $T_{n}^{\ast} = T_{n}(X_{1}^{\ast},\ldots,X_{n}^{\ast})$, where $(X_{1}^{\ast},\ldots,X_{n}^{\ast})$ denotes an artificial random sample (the bootstrap sample) from $\hat{F}_{n}$, the empirical distribution function of the original observations $(X_{1},\ldots,X_{n})$, and $\theta_{n} = \theta( \hat{F}_{n} )$. Note that $\hat{F}_{n}$ is the random distribution function (a step function) that puts a probability mass of $\dfrac{1}{n}$ on $X_{i}$ for each $i \in \{ 1,\ldots,n \}$; it is sometimes referred to as the re-sampling distribution. $\mathsf{P}_{n}^{\ast}$ denotes the probability measure corresponding to $\hat{F}_{n}$, conditionally given $\hat{F}_{n}$, i.e., given the observations $X_{1},\ldots,X_{n}$. Obviously, given the observed values $X_{1},\ldots,X_{n}$ in the sample, $\hat{F}_{n}$ is completely known and (at least in principle) $G_{n}^{\ast}$ is also completely known. One may view $G_{n}^{\ast}$ as the empirical counterpart in the ‘bootstrap world’ to $G_{n}$ in the ‘real world’. In practice, an exact computation of $G_{n}^{\ast}$ is usually impossible (for a sample $X_{1},\ldots,X_{n}$ of $n$ distinct numbers there are $\binom{2 n - 1}{n}$ distinct bootstrap samples), but $G_{n}^{\ast}$ can be approximated by means of Monte-Carlo simulation. Efficient bootstrap simulation is discussed, for example, in [a2] and [a10].
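The Monte-Carlo approximation of $G_{n}^{\ast}$ can be sketched as follows (again for the sample mean, so that $\theta_{n} = \theta(\hat{F}_{n})$ is the mean of the observed data; the data values are invented for the example).

```python
import random
import statistics

def bootstrap_G_star(x, data, reps=2000, seed=0):
    """Monte-Carlo estimate of G_n^*(x) = P*(sqrt(n) * (T_n^* - theta_n) <= x),
    where T_n = sample mean and theta_n = mean(data) = theta(F_hat)."""
    rng = random.Random(seed)
    n = len(data)
    theta_n = statistics.mean(data)
    count = 0
    for _ in range(reps):
        # Re-sample n values with replacement from the empirical distribution.
        resample = [data[rng.randrange(n)] for _ in range(n)]
        t_star = statistics.mean(resample)
        if (n ** 0.5) * (t_star - theta_n) <= x:
            count += 1
    return count / reps

data = [2.1, 0.4, 1.7, 3.0, 0.9, 2.5, 1.2, 0.8]  # illustrative observations
print(bootstrap_G_star(0.0, data))
```

Everything here is computed from the observed sample alone, which is the point: $G_{n}^{\ast}$ is known (in principle exactly, in practice up to simulation error) once the data are in hand.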
When does Efron’s bootstrap work? Consistency of the bootstrap approximation $G_{n}^{\ast}$, viewed as an estimate of $G_{n}$, i.e.,
$$
\sup_{x \in \mathbf{R}} \left| G_{n}(x) - G_{n}^{\ast}(x) \right| \to 0 \quad \text{as} ~ n \to \infty
$$
in $\mathsf{P}$-probability, is generally regarded as an absolute prerequisite for Efron’s bootstrap to work in the problem at hand. Of course, bootstrap consistency is only a first-order asymptotic result, and the error committed when $G_{n}$ is estimated by $G_{n}^{\ast}$ may still be quite large in finite samples. Second-order asymptotics (cf. Edgeworth series) enables one to investigate the speed at which $\sup_{x \in \mathbf{R}} | G_{n}(x) - G_{n}^{\ast}(x) |$ approaches $0$, and also to identify cases where the rate of convergence is faster than $1/\sqrt{n}$, the classical Berry–Esseen rate for the normal approximation. An example in which the bootstrap possesses the beneficial property of being more accurate than the traditional normal approximation is the Student $t$-statistic and, more generally, Studentized statistics. For this reason, the use of bootstrapped Studentized statistics for setting confidence intervals is strongly advocated in a number of important problems. A general reference is [a7].
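A bootstrapped Studentized statistic can be turned into a confidence interval as in the following sketch (a standard bootstrap-t construction for the mean, added here as an illustration; the data, confidence level, and replication count are all assumptions of the example).

```python
import random
import statistics

def bootstrap_t_interval(data, alpha=0.10, reps=2000, seed=1):
    """Bootstrap-t confidence interval for the mean: bootstrap the
    Studentized statistic t* = sqrt(n)(mean* - mean) / sd*, then invert it."""
    rng = random.Random(seed)
    n = len(data)
    mean, sd = statistics.mean(data), statistics.stdev(data)
    t_stars = []
    for _ in range(reps):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        m_star, s_star = statistics.mean(resample), statistics.stdev(resample)
        if s_star > 0:  # guard against a degenerate (constant) resample
            t_stars.append((n ** 0.5) * (m_star - mean) / s_star)
    t_stars.sort()
    upper_q = t_stars[int((1 - alpha / 2) * len(t_stars)) - 1]
    lower_q = t_stars[int((alpha / 2) * len(t_stars))]
    # Inverting t <= q gives endpoints mean - q * sd / sqrt(n).
    return (mean - upper_q * sd / n ** 0.5, mean - lower_q * sd / n ** 0.5)

data = [4.1, 5.2, 3.8, 4.9, 5.5, 4.4, 4.0, 5.1, 4.7, 4.3]  # illustrative
low, high = bootstrap_t_interval(data)
print(low, high)
```

Because the Studentized statistic is (asymptotically) pivotal, its bootstrap distribution captures skewness that the normal approximation misses, which is the source of the second-order accuracy discussed above.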
When does the bootstrap fail? It was proved in [a1] that, in the case of the mean, Efron’s bootstrap fails when $F$ is in the domain of attraction of an $\alpha$-stable law with $0 < \alpha < 2$. However, by re-sampling from $\hat{F}_{n}$ with a (smaller) re-sample size $m$ that satisfies $m = m(n) \to \infty$ and $m(n)/n \to 0$, it can be shown that this modified bootstrap works. More generally, in recent years the importance of a proper choice of the re-sampling distribution has become clear (see [a5], [a9] and [a10]).
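The modification only changes the re-sample size; a sketch (the choice $m = \lfloor \sqrt{n} \rfloor$ is one conventional example satisfying $m \to \infty$, $m/n \to 0$, not prescribed by the article):

```python
import random

def m_out_of_n_sample(data, m, rng):
    """m-out-of-n bootstrap: draw only m values (m << n) with replacement
    from the empirical distribution of the n observations."""
    n = len(data)
    return [data[rng.randrange(n)] for _ in range(m)]

rng = random.Random(2)
n = 10_000
data = list(range(n))   # stand-in for heavy-tailed observations
m = int(n ** 0.5)       # m(n) -> infinity while m(n)/n -> 0
sub = m_out_of_n_sample(data, m, rng)
print(len(sub))  # 100
```

The statistic is then recomputed on each of these smaller re-samples, with the normalization adjusted to $m$ instead of $n$.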
The bootstrap can be an effective tool in many problems of statistical inference, for example the construction of a confidence band in non-parametric regression, testing for the number of modes of a density, or the calibration of confidence bounds (see [a2], [a4] and [a8]). Re-sampling methods for dependent data, such as the block bootstrap, are another important topic of recent research (see [a2] and [a6]).
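For dependent data, the moving-block bootstrap re-samples contiguous blocks rather than individual observations, so that short-range dependence is preserved within each block. A minimal sketch (block length and series are illustrative choices):

```python
import random

def block_bootstrap(series, block_len, rng):
    """Moving-block bootstrap: concatenate randomly chosen contiguous
    blocks of length block_len, then trim to the original length."""
    n = len(series)
    blocks_needed = -(-n // block_len)  # ceil(n / block_len)
    out = []
    for _ in range(blocks_needed):
        start = rng.randrange(n - block_len + 1)
        out.extend(series[start:start + block_len])
    return out[:n]

rng = random.Random(3)
series = [0.3, 1.1, 0.8, -0.2, 0.5, 1.4, 0.9, 0.1, -0.4, 0.6, 1.0, 0.2]
resampled = block_bootstrap(series, block_len=3, rng=rng)
print(len(resampled))  # 12, same length as the original series
```

Choosing the block length trades off bias (blocks too short break the dependence) against variance (blocks too long leave too few distinct blocks); see [a6] for second-order results on the blockwise bootstrap.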
References
[a1] | K.B. Athreya, “Bootstrap of the mean in the infinite variance case”, Ann. Statist., 15 (1987), pp. 724–731. |
[a2] | A.C. Davison, D.V. Hinkley, “Bootstrap methods and their application”, Cambridge Univ. Press (1997). |
[a3] | B. Efron, “Bootstrap methods: another look at the jackknife”, Ann. Statist., 7 (1979), pp. 1–26. |
[a4] | B. Efron, R.J. Tibshirani, “An introduction to the bootstrap”, Chapman & Hall (1993). |
[a5] | E. Giné, “Lectures on some aspects of the bootstrap”, P. Bernard (ed.), Ecole d'Eté de Probab. Saint Flour XXVI-1996, Lecture Notes Math., 1665, Springer (1997). |
[a6] | F. Götze, H.R. Künsch, “Second order correctness of the blockwise bootstrap for stationary observations”, Ann. Statist., 24 (1996), pp. 1914–1933. |
[a7] | P. Hall, “The bootstrap and Edgeworth expansion”, Springer (1992). |
[a8] | E. Mammen, “When does bootstrap work? Asymptotic results and simulations”, Lecture Notes Statist., 77, Springer (1992). |
[a9] | H. Putter, W.R. van Zwet, “Resampling: consistency of substitution estimators”, Ann. Statist., 24 (1996), pp. 2297–2318. |
[a10] | J. Shao, D. Tu, “The jackknife and bootstrap”, Springer (1995). |
Bootstrap method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bootstrap_method&oldid=11753