Laplace theorem

Laplace’s theorem on determinants. See Cofactor.

2020 Mathematics Subject Classification: Primary: 60F05

Laplace’s theorem on the approximation of the binomial distribution by the normal distribution. This is the first version of the central limit theorem of probability theory: If $ S_{n} $ denotes the number of “successes” in $ n $ Bernoulli trials with probability of success $ p $ ($ 0 < p < 1 $), then for any two real numbers $ a $ and $ b $ satisfying $ a < b $, one has
$$
\lim_{n \to \infty} \mathsf{P} \! \left( a < \frac{S_{n} - n p}{\sqrt{n p (1 - p)}} < b \right) = \Phi(b) - \Phi(a),
$$
where $ \Phi: \mathbb{R} \to (0,1) $, defined by
$$
\forall x \in \mathbb{R}: \qquad \Phi(x) \stackrel{\text{df}}{=} \frac{1}{\sqrt{2 \pi}} \int_{- \infty}^{x} e^{- y^{2} / 2} ~ \mathrm{d}{y},
$$
is the cumulative distribution function of the standard normal law.
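As a numerical illustration (added here; not part of the article's standard exposition), the following minimal Python sketch compares the exact binomial probability of the event $ a < \frac{S_{n} - n p}{\sqrt{n p (1 - p)}} < b $ with $ \Phi(b) - \Phi(a) $ for increasing $ n $. Only the Python standard library is used, and the choices $ p = 0.3 $, $ a = -1 $, $ b = 1 $ and the sample sizes are arbitrary.

```python
# Sketch illustrating the (integral) Laplace theorem: the exact binomial probability
# of the event a < (S_n - n p)/sqrt(n p (1 - p)) < b approaches Phi(b) - Phi(a).
# Parameter choices (p, a, b, sample sizes) are arbitrary illustration values.
from math import comb, erf, sqrt

def std_normal_cdf(x: float) -> float:
    """Phi(x), the standard normal distribution function, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def binomial_interval_prob(n: int, p: float, a: float, b: float) -> float:
    """Exact P(a < (S_n - n p)/sqrt(n p (1 - p)) < b) for S_n ~ Binomial(n, p)."""
    sigma = sqrt(n * p * (1.0 - p))
    total = 0.0
    for m in range(n + 1):
        if a < (m - n * p) / sigma < b:
            total += comb(n, m) * p**m * (1.0 - p)**(n - m)
    return total

p, a, b = 0.3, -1.0, 1.0
limit = std_normal_cdf(b) - std_normal_cdf(a)
for n in (10, 100, 1000):
    exact = binomial_interval_prob(n, p, a, b)
    print(f"n = {n:5d}: exact = {exact:.5f}, Phi(b) - Phi(a) = {limit:.5f}")
```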

The local Laplace theorem has independent significance: For the probability
$$
\mathsf{P}(S_{n} = m) = \binom{n}{m} p^{m} (1 - p)^{n - m}, \qquad \text{where} ~ m \in \mathbb{Z} \cap [0,n],
$$
one has
$$
\mathsf{P}(S_{n} = m) = \frac{1}{\sqrt{n p (1 - p)}} (1 + \epsilon_{n}) \phi(x),
$$
where $ \phi: \mathbb{R} \to \mathbb{R}_{> 0} $, defined by
$$
\forall x \in \mathbb{R}: \qquad \phi(x) \stackrel{\text{df}}{=} \frac{1}{\sqrt{2 \pi}} e^{- x^{2} / 2},
$$
is the density of the standard normal distribution, and $ \epsilon_{n} \to 0 $ as $ n \to \infty $ uniformly for all $ m $ for which $ x = \dfrac{m - n p}{\sqrt{n p (1 - p)}} $ lies in some finite interval.
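The local theorem can be checked numerically in the same hedged way: the following Python sketch compares the exact probability $ \mathsf{P}(S_{n} = m) $ with $ \phi(x) / \sqrt{n p (1 - p)} $ for a few values of $ m $ near $ n p $. The choices $ n = 1000 $, $ p = 0.3 $ and the particular values of $ m $ are arbitrary.

```python
# Sketch illustrating the local Laplace theorem: P(S_n = m) is close to
# phi(x)/sqrt(n p (1 - p)) with x = (m - n p)/sqrt(n p (1 - p)).
# The values n = 1000, p = 0.3 and the chosen m are arbitrary illustration values.
from math import comb, exp, pi, sqrt

def std_normal_pdf(x: float) -> float:
    """phi(x), the density of the standard normal distribution."""
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

n, p = 1000, 0.3
sigma = sqrt(n * p * (1.0 - p))
for m in (280, 290, 300, 310, 320):
    exact = comb(n, m) * p**m * (1.0 - p)**(n - m)
    x = (m - n * p) / sigma
    approx = std_normal_pdf(x) / sigma
    print(f"m = {m}: exact = {exact:.6f}, local approximation = {approx:.6f}")
```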

In its general form, the theorem was proved by P. S. Laplace [L]. The special case $ p = 0.5 $ of the theorem was studied by A. de Moivre [M]; for this reason the Laplace theorem is sometimes called the de Moivre–Laplace theorem.

For practical applications, the Laplace theorem is important for obtaining an idea of the errors that arise when the approximation formulas are used. In the more precise (by comparison with [L]) asymptotic formula
$$
\forall y \in \mathbb{R}: \qquad \mathsf{P}(S_{n} < y) = \Phi \! \left( \frac{y - n p + 0.5}{\sqrt{n p (1 - p)}} \right) + {R_{n}}(y),
$$
the remainder term $ {R_{n}}(y) $ has order $ \mathcal{O} \! \left( \dfrac{1}{\sqrt{n}} \right) $ uniformly for all real numbers $ y $. For the uniform approximation of the binomial distribution by means of the normal distribution, the following formula of Ya. Uspenskii (1937) is more useful: If $ \sigma = \sqrt{n p (1 - p)} $, then for any two real numbers $ a $ and $ b $ satisfying $ a < b $, one has
$$
\mathsf{P}(a < S_{n} < b) =
\Phi \! \left( \frac{b - n p + 0.5}{\sigma} \right) -
\Phi \! \left( \frac{a - n p + 0.5}{\sigma} \right) +
\psi \! \left( \frac{b - n p + 0.5}{\sigma} \right) -
\psi \! \left( \frac{a - n p + 0.5}{\sigma} \right) +
\Delta,
$$
where $ \psi: \mathbb{R} \to \mathbb{R} $ is defined by
$$
\forall x \in \mathbb{R}: \qquad \psi(x) \stackrel{\text{df}}{=} \frac{1 - 2 p}{6 \sigma} (1 - x^{2}) \phi(x),
$$
and, for $ \sigma \geq 5 $,
$$
|\Delta| < \frac{0.13 + 0.18 \, |1 - 2 p|}{\sigma^{2}} + e^{- 3 \sigma / 2}.
$$
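The effect of the continuity and skewness corrections can also be examined numerically. The sketch below (an added illustration, not a statement from the article) compares the exact probability $ \mathsf{P}(a < S_{n} < b) $ with the continuity-corrected normal approximation and with the version that adds the correction $ \psi $. The parameters $ n = 500 $, $ p = 0.3 $, $ a = 145 $, $ b = 170 $ are arbitrary, and the way the $ 0.5 $ terms are attached to the integer endpoints in the code is an assumption (the usual convention for strict inequalities at integer endpoints), which may be written differently from the display above.

```python
# Sketch comparing, for integer endpoints a < b, the exact binomial probability
# P(a < S_n < b) with the continuity-corrected normal approximation and with the
# version that also adds the skewness correction psi defined above.
# How the 0.5 continuity corrections are attached to the endpoints below is an
# assumption (the usual convention for strict inequalities at integer endpoints);
# the parameters n, p, a, b are arbitrary illustration values.
from math import comb, erf, exp, pi, sqrt

def Phi(x: float) -> float:
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x: float) -> float:
    """Standard normal density."""
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

n, p = 500, 0.3
a, b = 145, 170                      # integer endpoints
sigma = sqrt(n * p * (1.0 - p))

def psi(x: float) -> float:
    """Skewness correction psi(x) = (1 - 2p)/(6 sigma) * (1 - x^2) * phi(x)."""
    return (1.0 - 2.0 * p) / (6.0 * sigma) * (1.0 - x * x) * phi(x)

# Exact probability of a < S_n < b, i.e. a + 1 <= S_n <= b - 1.
exact = sum(comb(n, m) * p**m * (1.0 - p)**(n - m) for m in range(a + 1, b))

t1 = (a + 0.5 - n * p) / sigma       # lower endpoint with continuity correction
t2 = (b - 0.5 - n * p) / sigma       # upper endpoint with continuity correction
corrected = Phi(t2) - Phi(t1)
with_psi = corrected + psi(t2) - psi(t1)

print(f"exact binomial probability         : {exact:.6f}")
print(f"continuity-corrected normal approx.: {corrected:.6f}")
print(f"with skewness correction psi       : {with_psi:.6f}")
```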

To improve the relative accuracy of the approximation, S. N. Bernstein [S. N. Bernshtein] (1943) and W. Feller (1945) suggested other formulas.

References

[L] P. S. Laplace, “Théorie analytique des probabilités”, Paris (1812). MR2274728, MR1400403, MR1400402, Zbl 1047.01534, Zbl 1047.01533
[M] A. de Moivre, “Miscellanea analytica de seriebus et quadraturis”, London (1730).
[PR] Yu. V. Prohorov, Yu. A. Rozanov, “Probability theory, basic concepts. Limit theorems, random processes”, Springer (1969). (Translated from Russian) MR0251754
[F] W. Feller, “On the normal approximation to the binomial distribution”, Ann. Math. Statist., 16 (1945), pp. 319–329. MR0015706, Zbl 0060.28703
[F2] W. Feller, “An introduction to probability theory and its applications”, 1, Wiley (1968).

Comments

For a more detailed and more general discussion of approximation by the normal distribution, see [P].

References

[P] V. V. Petrov, “Sums of independent random variables”, Springer (1975). (Translated from Russian) MR0388499, Zbl 0322.60043, Zbl 0322.60042
How to Cite This Entry:
Laplace theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Laplace_theorem&oldid=39015
This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article