Local limit theorems


in probability theory

Limit theorems for densities, that is, theorems that establish the convergence of the densities of a sequence of distributions to the density of the limit distribution (if the given densities exist), or a classical version of local limit theorems, namely local theorems for lattice distributions, the simplest of which is the local Laplace theorem.

Let $ X_1, X_2, \dots $ be a sequence of independent random variables that have a common distribution function $ F(x) $ with mean $ a $ and finite positive variance $ \sigma^2 $. Let $ F_n(x) $ be the distribution function of the normalized sum

$$ Z_n = \frac{1}{\sigma \sqrt{n}} \sum_{j=1}^{n} ( X_j - a ) $$

and let $ \Phi(x) $ be the normal $ (0,1) $-distribution function. The assumptions ensure that $ F_n(x) \rightarrow \Phi(x) $ as $ n \rightarrow \infty $ for any $ x $. It can be shown that this relation does not imply the convergence of the density $ p_n(x) $ of the distribution of the random variable $ Z_n $ to the normal density

$$ \frac{1}{\sqrt{2\pi}} e^{-x^2/2}, $$

even if the distribution $ F $ has a density. If $ Z_n $, for some $ n = n_0 $, has a bounded density $ p_{n_0}(x) $, then

$$ \tag{*} p_n(x) \rightarrow \frac{1}{\sqrt{2\pi}} e^{-x^2/2} $$

uniformly with respect to $ x $. The condition that $ p_{n_0}(x) $ is bounded for some $ n_0 $ is necessary for (*) to hold uniformly with respect to $ x $.
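
The theorem (*) can be checked numerically in a case where $ p_n(x) $ is available in closed form. The following sketch (Python with NumPy and SciPy; an illustration added here, not part of the original article) takes $ X_j $ exponentially distributed with mean $ a = 1 $ and variance $ \sigma^2 = 1 $. The density of $ X_1 $ is bounded, the sum $ S_n = X_1 + \dots + X_n $ has a Gamma$(n,1)$ density, and a change of variables gives $ p_n(x) = \sqrt{n}\, f_{S_n}(n + x\sqrt{n}) $.

```python
# Illustration of the local limit theorem (*) for densities (a sketch, not
# from the original article).  With X_j ~ Exp(1): a = 1, sigma^2 = 1, and
# S_n = X_1 + ... + X_n ~ Gamma(n, 1), so Z_n = (S_n - n)/sqrt(n) has the
# exact density p_n(x) = sqrt(n) * f_{S_n}(n + x*sqrt(n)).
import numpy as np
from scipy.stats import gamma, norm

x = np.linspace(-4.0, 4.0, 801)  # grid approximating the sup over x

for n in (1, 4, 16, 64):
    p_n = np.sqrt(n) * gamma.pdf(n + x * np.sqrt(n), a=n)  # density of Z_n
    # sup-norm distance to the standard normal density; shrinks as n grows
    print(n, np.abs(p_n - norm.pdf(x)).max())
```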

Let $ X_1, X_2, \dots $ be a sequence of independent random variables that have the same non-degenerate distribution, and suppose that $ X_1 $ takes values of the form $ b + Nh $, $ N = 0, \pm 1, \pm 2, \dots, $ with probability 1, where $ h > 0 $ and $ b $ are constants (that is, $ X_1 $ has a lattice distribution with step $ h $).

Suppose that $ X_1 $ has finite variance $ \sigma^2 $, let $ a = {\mathsf E} X_1 $ and let

$$ P_n(N) = {\mathsf P} \left\{ \sum_{j=1}^{n} X_j = nb + Nh \right\}. $$

In order that

$$ \sup_N \left| \frac{\sigma \sqrt{n}}{h} P_n(N) - \frac{1}{\sqrt{2\pi}} \exp\left\{ -\frac{1}{2} \left( \frac{nb + Nh - na}{\sigma \sqrt{n}} \right)^2 \right\} \right| \rightarrow 0 $$

as $ n \rightarrow \infty $ it is necessary and sufficient that the step $ h $ be maximal, that is, that the values of $ X_1 $ do not all lie on a lattice $ b' + Nh' $ with step $ h' > h $. This theorem of B.V. Gnedenko is a generalization of the local Laplace theorem.
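
In the simplest case, that of the local Laplace theorem, the statement can be verified directly (a minimal numerical sketch, not part of the original article): for $ X_j $ Bernoulli with parameter $ p $ one has $ b = 0 $, $ h = 1 $ (the maximal step), $ a = p $, $ \sigma^2 = p(1-p) $, and $ P_n(N) = \binom{n}{N} p^N (1-p)^{n-N} $.

```python
# Gnedenko's theorem in the Bernoulli case (a sketch, not from the original
# article): b = 0, h = 1 is the maximal step, and the sum is Binomial(n, p).
import numpy as np
from scipy.stats import binom, norm

p = 0.3
sigma = np.sqrt(p * (1 - p))  # standard deviation of a single summand

for n in (10, 100, 1000):
    N = np.arange(n + 1)                                # all attainable lattice points
    lhs = sigma * np.sqrt(n) * binom.pmf(N, n, p)       # (sigma sqrt(n) / h) * P_n(N)
    rhs = norm.pdf((N - n * p) / (sigma * np.sqrt(n)))  # normal density at the normalized point
    print(n, np.abs(lhs - rhs).max())                   # sup over N; tends to 0
```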

Local limit theorems for sums of independent non-identically distributed random variables serve as a basic mathematical tool in classical statistical mechanics and quantum statistics (see [7], [8]).

Local limit theorems have been intensively studied for sums of independent random variables and vectors, together with estimates of the rate of convergence in these theorems. The case of a limiting normal distribution has been most fully investigated (see [3], Chapt. 7); a number of papers have been devoted to local limit theorems for the case of an arbitrary stable distribution (see [2]). Similar investigations have been carried out for sums of dependent random variables, in particular for sums of random variables that form a Markov chain (see [5], [6a], [6b]).

References

[1] B.V. Gnedenko, A.N. Kolmogorov, "Limit distributions for sums of independent random variables", Addison-Wesley (1954) (Translated from Russian)
[2] I.A. Ibragimov, Yu.V. Linnik, "Independent and stationary sequences of random variables", Wolters-Noordhoff (1971) (Translated from Russian)
[3] V.V. Petrov, "Sums of independent random variables", Springer (1975) (Translated from Russian)
[4] Yu.V. Prohorov [Yu.V. Prokhorov], Yu.A. Rozanov, "Probability theory, basic concepts. Limit theorems, random processes", Springer (1969) (Translated from Russian)
[5] S.Kh. Sirazhdinov, "Limit theorems for homogeneous Markov chains", Tashkent (1955) (In Russian)
[6a] V.A. Statulyavichus, "Limit theorems and asymptotic expansions for non-stationary Markov chains", Litovsk. Mat. Sb., 1 (1961) pp. 231–314 (In Russian) (English abstract)
[6b] V.A. Statulyavichus, "Limit theorems for sums of random variables that are connected in a Markov chain I", Litovsk. Mat. Sb., 9 (1969) pp. 345–362 (In Russian) (English abstract)
[7] A.Ya. Khinchin, "Mathematical foundations of statistical mechanics", Dover, reprint (1949) (Translated from Russian)
[8] A.Ya. Khinchin, "Mathematical foundations of quantum statistics", Moscow-Leningrad (1951) (In Russian)

This article was adapted from an original article by V.V. Petrov (originator), which appeared in the Encyclopedia of Mathematics (ISBN 1402006098).