Donsker invariance principle

From Encyclopedia of Mathematics
Revision as of 16:57, 1 July 2020

A principle [a1] stating that under some conditions the distribution of a functional of normalized sums $S _ { n } = \sum _ { k = 1 } ^ { n } \xi _ { k }$, $n \geq 1$, of independent and identically distributed random variables $\xi _ { k }$ converges to the distribution of this functional of the Wiener process.

Donsker's theorem is as follows [a2]. Suppose the random variables $\xi _ { k }$, $k \geq 1$, are independent and identically distributed with mean $0$ and finite, positive variance $\mathsf{E} \xi _ { k } ^ { 2 } = \sigma ^ { 2 } > 0$ (cf. also Random variable).

Then the random continuous functions

\begin{equation} \tag{a1} X _ { n } ( t ) = \frac { 1 } { \sigma \sqrt { n } } [ S _ { [ n t ] } + ( n t - [ n t ] ) \xi_{ [ n t ] + 1} ], \end{equation}

\begin{equation*} 0 \leq t \leq 1, \end{equation*}

converge weakly (cf. also Weak convergence of probability measures) to the Wiener process: $X _ { n } ( t ) \Rightarrow w ( t )$; that is, for every bounded and continuous real-valued functional $f$ on the space $C [ 0,1]$ of continuous functions on the interval $[ 0,1 ]$, with the uniform topology, the weak convergence

\begin{equation} \tag{a2} \mathsf{E} f ( X _ { n } ) \rightarrow \mathsf{E} f ( w ) , \quad n \rightarrow \infty, \end{equation}

takes place; equivalently, for an arbitrary set $G$ in the Borel $\sigma$-algebra $B _ { c }$ in $C_{ [ 0,1 ]}$ with $\mathsf{P} \{ w \in \partial G \} = 0$, one has

\begin{equation} \tag{a3} \mathsf{P} \{ X _ { n } \in G \} \rightarrow \mathsf{P} \{ w \in G \}. \end{equation}
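As a numerical illustration (not part of the original article), the convergence (a3) can be checked by Monte Carlo simulation. Take $G = \{ x \in C[0,1] : \sup_t x(t) < 1 \}$, whose boundary has Wiener measure zero; the reflection principle gives $\mathsf{P}\{ w \in G \} = 2\Phi(1) - 1 \approx 0.683$. A minimal sketch in Python (NumPy assumed; standard normal steps with $\sigma = 1$, and the sample sizes are illustrative choices):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n, trials = 2000, 4000                 # steps per path, number of simulated paths

xi = rng.standard_normal((trials, n))  # i.i.d. steps with mean 0, sigma = 1
S = np.cumsum(xi, axis=1)              # partial sums S_1, ..., S_n per path

# The polygonal process (a1) is piecewise linear, so its supremum over [0,1]
# is attained at a vertex: sup_t X_n(t) = max(0, S_1, ..., S_n) / (sigma sqrt(n)).
sup_Xn = np.maximum(S.max(axis=1), 0.0) / sqrt(n)

p_hat = float(np.mean(sup_Xn < 1.0))            # Monte Carlo estimate of P{X_n in G}
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2)))  # standard normal c.d.f.
p_limit = 2 * Phi(1.0) - 1                      # P{sup_t w(t) < 1}, reflection principle

print(p_hat, p_limit)  # the two values should be close for large n
```

For these sample sizes the estimate typically agrees with the limit to within a few hundredths; the discrepancy shrinks as $n$ and the number of paths grow.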

For the more general case of a triangular array with, in every line, independent random variables $\xi _ { n , k }$, $1 \leq k \leq n$, and under the conditions of the Lindeberg–Lévy central limit theorem (cf. also Central limit theorem), the weak convergence in (a2) for the sums $S _ { n } = \sum _ { k = 1 } ^ { n } \xi _ { n , k }$, $n \geq 1$, is called the Donsker–Prokhorov invariance principle [a3], [a4].

The notion of "invariance principle" is applied as follows. The sums $S _ { [ n t] } $, $0 \leq t \leq 1$, $n \geq 1$, can be interpreted as the positions of a random walk. The convergence (a2) means that, for large $n$, the normalized trajectories of the random walk behave like trajectories of a Brownian motion $w ( t )$, $0 \leq t \leq 1$. For this reason the invariance principle is also called the functional central limit theorem.

An important application of the invariance principle is to prove limit theorems for various functionals of the partial sums, for example, $\overline{X} _ { n } = \operatorname { sup } _ { t } X _ { n } ( t )$, $\underline{X} _ { n } = \operatorname { inf } _ { t } X _ { n } ( t )$, $\overline{ | X | } _ { n } = \operatorname { sup } _ { t } | X _ { n } ( t ) |$, etc.
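These three functionals (the supremum, the infimum, and the supremum of the absolute value) are easy to evaluate on the polygonal paths (a1), since a piecewise-linear path attains its extrema at vertices. The following illustrative simulation (not from the original article; sample sizes are arbitrary choices) computes all three; by the symmetry of the Wiener process, $\overline{X}_n$ and $-\underline{X}_n$ have the same limit law, with mean $\mathsf{E} \sup_t w(t) = \sqrt{2/\pi}$.

```python
import numpy as np
from math import sqrt, pi

rng = np.random.default_rng(1)
n, trials = 1000, 4000
xi = rng.standard_normal((trials, n))   # i.i.d. steps, mean 0, sigma = 1
S = np.cumsum(xi, axis=1)

# Vertex values X_n(k/n) = S_k / (sigma sqrt(n)), with X_n(0) = 0; the
# polygonal path attains its extrema at vertices, so max/min over the
# vertices give the functionals exactly.
X = np.hstack([np.zeros((trials, 1)), S]) / sqrt(n)

sup_X  = X.max(axis=1)           # \overline{X}_n  = sup_t X_n(t)
inf_X  = X.min(axis=1)           # \underline{X}_n = inf_t X_n(t)
sup_aX = np.abs(X).max(axis=1)   # sup_t |X_n(t)| = max(sup_X, -inf_X)

# By symmetry, sup and -inf share the limit law, with mean sqrt(2/pi) ~ 0.80.
print(sup_X.mean(), -inf_X.mean(), sqrt(2 / pi))
```

The empirical means of $\overline{X}_n$ and $-\underline{X}_n$ agree up to Monte Carlo noise and approach $\sqrt{2/\pi}$ as $n$ grows.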

The independence of the limiting distribution from the distribution of the random terms enables one to compute the limit distribution in certain easy special cases. For example, the distribution $\mathsf{P} \{ \operatorname { sup } _ { t } w ( t ) < z \}$ can be calculated from the partial sums $S _ { n } = \sum _ { k = 1 } ^ { n } \alpha _ { k }$, $n \geq 1$, of independent and identically distributed Bernoulli random variables $\alpha _ { k } = \pm 1$, each value taken with probability $1 / 2$, by using the reflection principle [a2]. This argument follows a general pattern. If $f$ is a continuous functional on $C [ 0,1]$ (or continuous except at points forming a set of Wiener measure $0$), then one can find the distribution of $f ( w )$ by computing the explicit distribution of $f ( X _ { n } )$ for the special walk: by the invariance principle, its limit is the distribution of $f ( w )$.
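The reflection-principle step can be sketched as follows (a standard derivation, included here for concreteness). For the symmetric Bernoulli walk, reflecting each path after its first visit to the level $a > 0$ pairs the paths with maximum at least $a$ that end below $a$ with those that end above $a$, giving

\begin{equation*} \mathsf{P} \Bigl\{ \max _ { k \leq n } S _ { k } \geq a \Bigr\} = 2 \, \mathsf{P} \{ S _ { n } > a \} + \mathsf{P} \{ S _ { n } = a \} , \qquad a > 0 . \end{equation*}

Taking $a = \lceil z \sqrt { n } \rceil$ and letting $n \rightarrow \infty$, the central limit theorem gives $\mathsf{P} \{ S _ { n } > a \} \rightarrow 1 - \Phi ( z )$ and $\mathsf{P} \{ S _ { n } = a \} \rightarrow 0$, whence

\begin{equation*} \mathsf{P} \Bigl\{ \operatorname { sup } _ { 0 \leq t \leq 1 } w ( t ) < z \Bigr\} = 1 - 2 ( 1 - \Phi ( z ) ) = 2 \Phi ( z ) - 1 , \qquad z > 0 , \end{equation*}

where $\Phi$ is the standard normal distribution function.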

The idea of computing the limiting distribution by using a special case and then passing to the general case was first realized by A. Kolmogorov (1931) and subsequently for various particular cases by P. Erdös and M. Kac (1946; [a6]). Kolmogorov and Yu.V. Prokhorov (1954) were the first to point out the weak convergence (a2). Much work has been done in connection with the estimation of the rate of convergence in the invariance principle (see, for example, [a5]).

References

[a1] M.D. Donsker, "An invariance principle for certain probability limit theorems" , Memoirs , 6 , Amer. Math. Soc. (1951) pp. 1–10
[a2] P. Billingsley, "Convergence of probability measures" , Wiley (1968)
[a3] Yu.V. Prokhorov, "Methods of functional analysis in limit theorems of probability theory" Vestnik Leningrad Univ. , 11 (1954)
[a4] I.I. Gikhman, A.V. Skorokhod, "Introduction to the theory of stochastic processes" , Springer (1984) (In Russian)
[a5] A.A. Borovkov, "Theory of probability" , Nauka (1986) (In Russian)
[a6] P. Erdös, M. Kac, "On certain limit theorems of the theory of probability" Bull. Amer. Math. Soc. , 52 (1946) pp. 292–302
How to Cite This Entry:
Donsker invariance principle. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Donsker_invariance_principle&oldid=50223
This article was adapted from an original article by V.S. Korolyuk (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.