Weak convergence of probability measures
2020 Mathematics Subject Classification: Primary: 60B10. See also Convergence of measures.
The general setting for weak convergence of probability measures is that of a complete separable metric space $(X,\rho)$ (cf. also Complete space; Separable space), $\rho$ being the metric, with probability measures $\mu_i$, $i=0,1,\dots$, defined on the Borel sets of $X$. One says that $\mu_n$ converges weakly to $\mu_0$ in $(X,\rho)$ if for every bounded continuous function $f$ on $X$ one has $\int f\,{\rm d}\mu_n\rightarrow\int f\,{\rm d}\mu_0$ as $n\rightarrow\infty$. If random elements $\xi_n$, $n=0,1,\dots$, taking values in $X$ are such that the distribution of $\xi_n$ is $\mu_n$, $n=0,1,\dots$, then one writes $\xi_n\rightarrow^{d}\xi_0$ and says that $\xi_n$ converges in distribution to $\xi_0$ if $\mu_n$ converges weakly to $\mu_0$ (cf. also Convergence in distribution).
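For example, take $X=\mathbb R$ with the usual metric and let $\mu_n=\delta_{1/n}$ be the unit mass at $1/n$, $n\ge 1$. For every bounded continuous function $f$ one has $\int f\,{\rm d}\mu_n=f(1/n)\rightarrow f(0)=\int f\,{\rm d}\delta_0$, so $\mu_n$ converges weakly to $\delta_0$. Note, however, that $\mu_n((-\infty,0])=0$ for all $n$ while $\delta_0((-\infty,0])=1$, so weak convergence does not entail convergence of the measures of all Borel sets.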
The metric spaces in most common use in probability are $\mathbb{R}^k$, $k$-dimensional Euclidean space; $C[0,1]$, the space of continuous functions on $[0,1]$; and $D[0,1]$, the space of functions on $[0,1]$ that are right-continuous with left-hand limits.
Weak convergence in a suitably rich metric space is of considerably greater use than that in Euclidean space. This is because a wide variety of results on convergence in distribution on $\mathbb R$ can be derived from it with the aid of the continuous mapping theorem, which states that if $\xi_n\rightarrow^{d}\xi_0$ in $(X,\rho)$ and the mapping $h:X\rightarrow\mathbb R$ is continuous (or at least is measurable and $\mathsf P\{\xi_0\in D_h\}=0$, where $D_h$ is the set of discontinuities of $h$), then $h(\xi_n)\rightarrow^{d}h(\xi_0)$. In many applications the limit random element is Brownian motion, which has continuous paths with probability one.
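For instance, the functional $h(x)=\max_{0\leq t\leq 1}x(t)$ is continuous on $C[0,1]$ equipped with the uniform metric, since $|h(x)-h(y)|\leq\sup_{0\leq t\leq 1}|x(t)-y(t)|$. Hence, if $\xi_n\rightarrow^{d}W$ in $C[0,1]$, then $\max_{0\leq t\leq 1}\xi_n(t)\rightarrow^{d}\max_{0\leq t\leq 1}W(t)$, and by the reflection principle $\mathsf P\{\max_{0\leq t\leq 1}W(t)\leq x\}=2\Phi(x)-1$ for $x\geq 0$, where $\Phi$ is the standard normal distribution function.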
One of the most fundamental weak convergence results is Donsker's theorem for sums $S_n=\sum_{i=1}^n X_i$, $n\ge 1$, of independent and identically distributed random variables $X_i$ with $\mathsf EX_i=0$, $\mathsf EX_i^2=1$. This can be framed in $C[0,1]$ by setting $S_0=0$ and $S_n(t)=n^{-1/2}\{S_{[nt]}+(nt-[nt])X_{[nt]+1}\}$, $0\leq t\leq 1$, where $[x]$ denotes the integer part of $x$. Donsker's theorem asserts that $S_n(t)\rightarrow^{d} W(t)$ in $C[0,1]$, where $W(t)$ is standard Brownian motion. Application of the continuous mapping theorem then readily provides convergence-in-distribution results for (suitably normalized) functionals such as $\max_{1\leq k\leq n} S_k$, $\max_{1\leq k\leq n} k^{-1/2}|S_k|$, $\sum_{k=1}^n I(S_k\geq\alpha)$, and $\sum_{k=1}^n \gamma(S_k,S_{k+1})$, where $I$ is the indicator function and $\gamma(a,b)=1$ if $ab<0$ and $\gamma(a,b)=0$ otherwise.
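As an illustrative sketch in Python (the Rademacher increments, the sample sizes and the use of NumPy/SciPy are arbitrary choices made here for illustration, not prescribed by the theorem), one may simulate the rescaled maximum $n^{-1/2}\max_{1\leq k\leq n}S_k$ and compare its empirical distribution with the limit $2\Phi(x)-1$ obtained from Donsker's theorem together with the continuous mapping theorem and the reflection principle.

# Illustrative Monte Carlo check of Donsker's theorem (assumptions: Rademacher
# increments, n_steps and n_paths chosen arbitrarily, NumPy/SciPy available).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_steps = 1000      # n, the number of summands in S_n
n_paths = 20000     # number of independent replications

# Increments X_i with E X_i = 0, E X_i^2 = 1 (here +1/-1 with probability 1/2 each).
increments = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
partial_sums = np.cumsum(increments, axis=1)          # S_1, ..., S_n for each path

# Rescaled maximum functional: n^{-1/2} max_{1<=k<=n} S_k.
max_rescaled = partial_sums.max(axis=1) / np.sqrt(n_steps)

# Limit distribution from the reflection principle: P(max W <= x) = 2*Phi(x) - 1, x >= 0.
for x in (0.5, 1.0, 2.0):
    empirical = np.mean(max_rescaled <= x)
    limit = 2.0 * norm.cdf(x) - 1.0
    print(f"x = {x:.1f}: empirical {empirical:.3f}, limiting value {limit:.3f}")

For moderate $n$ the empirical proportions should already lie close to the limiting values; other continuous functionals of the path can be treated in the same way.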
References
[B] P. Billingsley, "Convergence of probability measures", Wiley (1968) pp. 9ff. MR0233396 Zbl 0172.21201
Weak convergence of probability measures. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Weak_convergence_of_probability_measures&oldid=30723