# Donsker invariance principle

A principle [a1] stating that under some conditions the distribution of a functional of normalized sums $S _ { n } = \sum _ { k = 1 } ^ { n } \xi _ { k }$, $n \geq 1$, of independent and identically distributed random variables $\xi _ { k }$ converges to the distribution of this functional of the Wiener process.

Donsker's theorem is as follows [a2]. Suppose the random variables $\xi _ { k }$, $k \geq 1$, are independent and identically distributed with mean $0$ and finite, positive variance $\mathsf{E} \xi _ { k } ^ { 2 } = \sigma ^ { 2 } > 0$ (cf. also Random variable).

Then the random continuous functions

$$\tag{a1} X _ { n } ( t ) = \frac { 1 } { \sigma \sqrt { n } } \left[ S _ { [ n t ] } + ( n t - [ n t ] ) \xi _ { [ n t ] + 1 } \right], \quad 0 \leq t \leq 1,$$

converge weakly (cf. also Weak convergence of probability measures) to the Wiener process: $X _ { n } ( t ) \Rightarrow w ( t )$; that is, for every bounded and continuous real-valued functional $f$ on the space $C [ 0,1]$ of continuous functions on the interval $[ 0,1 ]$, with the uniform topology, the weak convergence

$$\tag{a2} \mathsf{E} f ( X _ { n } ) \rightarrow \mathsf{E} f ( w ) , \quad n \rightarrow \infty,$$

takes place; equivalently, for an arbitrary set $G$ in the Borel $\sigma$-algebra $B _ { C }$ of $C [ 0,1 ]$ with $\mathsf{P} \{ w \in \partial G \} = 0$, one has

$$\tag{a3} \mathsf{P} \{ X _ { n } \in G \} \rightarrow \mathsf{P} \{ w \in G \}.$$
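The construction in (a1) can be illustrated numerically. The following is a minimal Monte Carlo sketch (not from the source; the step distribution, sample sizes, and seed are illustrative choices): it builds the polygonal process $X_n$ from i.i.d. $\pm 1$ steps with mean $0$ and variance $\sigma^2 = 1$, and checks that $X_n(1) = S_n / (\sigma \sqrt{n})$ is approximately standard normal, as the theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def donsker_path(xi, t, sigma=1.0):
    """Evaluate X_n(t) = [S_[nt] + (nt - [nt]) * xi_{[nt]+1}] / (sigma * sqrt(n))."""
    n = len(xi)
    S = np.concatenate(([0.0], np.cumsum(xi)))  # S[k] = xi_1 + ... + xi_k, S[0] = 0
    k = int(np.floor(n * t))
    frac = n * t - k                            # linear-interpolation weight nt - [nt]
    tail = xi[k] if k < n else 0.0              # xi_{[nt]+1}; absent at t = 1
    return (S[k] + frac * tail) / (sigma * np.sqrt(n))

# Sample X_n(1) repeatedly: its distribution should be close to N(0, 1).
n, trials = 400, 2000
vals = np.array([donsker_path(rng.choice([-1.0, 1.0], size=n), 1.0)
                 for _ in range(trials)])
print(round(float(vals.mean()), 1), round(float(vals.var()), 1))
```

Any other mean-zero, finite-variance step distribution could be substituted for the $\pm 1$ steps; by the theorem, the limit is the same.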

For the more general case of a triangular array with row-wise independent random variables $\xi _ { n , k }$, $1 \leq k \leq n$, satisfying the conditions of the Lindeberg–Feller central limit theorem (cf. also Central limit theorem), the weak convergence in (a2) for the sums $S _ { n } = \sum _ { k = 1 } ^ { n } \xi _ { n , k }$, $n \geq 1$, is called the Donsker–Prokhorov invariance principle [a3], [a4].

The notion of "invariance principle" is explained as follows. The sums $S _ { [ n t ] }$, $0 \leq t \leq 1$, $n \geq 1$, can be interpreted as the positions of a random walk. The convergence (a2) means that, for large $n$, the trajectories of the normalized random walk behave like trajectories of a Brownian motion $w ( t )$, $0 \leq t \leq 1$, and the limiting distribution is invariant with respect to the distribution of the individual terms. This is the reason that the invariance principle is also called the functional central limit theorem.

An important application of the invariance principle is to prove limit theorems for various functionals of the partial sums, for example, $\overline{X} _ { n } = \operatorname { sup } _ { t } X _ { n } ( t )$, $\underline{X} _ { n } = \operatorname { inf } _ { t } X _ { n } ( t )$, $\operatorname { sup } _ { t } | X _ { n } ( t ) |$, etc.

The independence of the limiting distribution from the distribution of the random terms enables one to compute the limit distribution in certain easy special cases. For example, the distribution $\mathsf{P} \{ \operatorname { sup } _ { t } w ( t ) < z \}$ can be calculated from the partial sums $S _ { n } = \sum _ { k = 1 } ^ { n } \alpha _ { k }$, $n \geq 1$, of independent and identically distributed Bernoulli variables $\alpha _ { k } = \pm 1$, each value taken with probability $1 / 2$, by using the reflection principle [a2]. This argument follows a general pattern. If $f$ is a continuous functional on $C [ 0,1 ]$ (or continuous except at points forming a set of Wiener measure $0$), then one can find the distribution of $f ( w )$ by computing the explicit limiting distribution of $f ( X _ { n } )$ in a convenient special case; by the invariance principle this limit coincides with the distribution of $f ( w )$.
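The reflection-principle computation mentioned above yields the classical formula $\mathsf{P} \{ \operatorname{sup}_{0 \leq t \leq 1} w(t) \leq z \} = 2 \Phi(z) - 1$ for $z \geq 0$, where $\Phi$ is the standard normal distribution function. The following sketch (sample sizes and seed are illustrative, not from the source) checks this numerically against the $\pm 1$ random walk:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def sup_cdf_mc(z, n=500, trials=4000):
    """Empirical P{sup_t X_n(t) <= z} for the normalized +/-1 random walk."""
    steps = rng.choice([-1.0, 1.0], size=(trials, n))
    walks = np.cumsum(steps, axis=1)
    # The sup of the polygonal path is max(0, max_k S_k) since X_n(0) = 0.
    sups = np.maximum(walks.max(axis=1), 0.0) / np.sqrt(n)
    return float(np.mean(sups <= z))

def sup_cdf_exact(z):
    """P{sup_{0<=t<=1} w(t) <= z} = 2*Phi(z) - 1 = erf(z / sqrt(2)), z >= 0."""
    return erf(z / sqrt(2.0))

mc, exact = sup_cdf_mc(1.0), sup_cdf_exact(1.0)
print(round(mc, 2), round(exact, 3))
```

The small discrepancy comes from the Monte Carlo error and from the discrete walk slightly underestimating the continuous supremum; both vanish as $n$ and the number of trials grow.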

The idea of computing the limiting distribution in a special case and then passing to the general case was first realized by A.N. Kolmogorov (1931) and was subsequently applied to various particular cases by P. Erdös and M. Kac (1946; [a6]). Kolmogorov and Yu.V. Prokhorov (1954) were the first to point out the weak convergence (a2). Much work has been done on estimating the rate of convergence in the invariance principle (see, for example, [a5]).

#### References

[a1] M.D. Donsker, "An invariance principle for certain probability limit theorems", Memoirs Amer. Math. Soc., 6 (1951) pp. 1–10
[a2] P. Billingsley, "Convergence of probability measures", Wiley (1968)
[a3] Yu.V. Prokhorov, "Methods of functional analysis in limit theorems of probability theory", Vestnik Leningrad Univ., 11 (1954)
[a4] I.I. Gikhman, A.V. Skorokhod, "Introduction to the theory of stochastic processes", Springer (1984) (Translated from Russian)
[a5] A.A. Borovkov, "Theory of probability", Nauka (1986) (In Russian)
[a6] P. Erdös, M. Kac, "On certain limit theorems of the theory of probability", Bull. Amer. Math. Soc., 52 (1946) pp. 292–302
How to Cite This Entry:
Donsker invariance principle. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Donsker_invariance_principle&oldid=50984
This article was adapted from an original article by V.S. Korolyuk (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.