Laplace theorem

Laplace’s theorem on determinants. See Cofactor.

2020 Mathematics Subject Classification: Primary: 60F05 [MSN][ZBL]

Laplace’s theorem on the approximation of the binomial distribution by the normal distribution. This is the first version of the Central Limit Theorem of probability theory: If $S_{n}$ denotes the number of “successes” in $n$ Bernoulli trials with probability of success $p$ ($0 < p < 1$), then for any two real numbers $a$ and $b$ satisfying $a < b$, one has $$\lim_{n \to \infty} \mathsf{P} \! \left( a < \frac{S_{n} - n p}{\sqrt{n p (1 - p)}} < b \right) = \Phi(b) - \Phi(a),$$ where $\Phi: \mathbb{R} \to (0,1)$, defined by $$\forall x \in \mathbb{R}: \qquad \Phi(x) \stackrel{\text{df}}{=} \frac{1}{\sqrt{2 \pi}} \int_{- \infty}^{x} e^{- y^{2} / 2} ~ \mathrm{d}{y},$$ is the cumulative distribution function of the standard normal law.
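The limit relation can be checked numerically. The following is a minimal Python sketch, using only the standard library; the parameters $n = 1000$, $p = 0.3$ and the interval $(-1, 1)$ are arbitrary illustrative choices, not part of the theorem. It compares the exact binomial probability on the left-hand side with $\Phi(b) - \Phi(a)$:

```python
import math

def Phi(x):
    # Standard normal CDF, expressed through the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n, p = 1000, 0.3          # illustrative parameters
a, b = -1.0, 1.0          # illustrative interval
sigma = math.sqrt(n * p * (1 - p))

# Exact probability that a < (S_n - np)/sigma < b, by summing the binomial pmf
exact = sum(math.comb(n, m) * p**m * (1 - p)**(n - m)
            for m in range(n + 1)
            if a < (m - n * p) / sigma < b)
approx = Phi(b) - Phi(a)  # the limiting value given by the theorem
print(exact, approx)      # both close to 0.68
```

Already at $n = 1000$ the two values agree to a few decimal places, in line with the $\mathcal{O}(1/\sqrt{n})$ error estimate discussed below.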

The local Laplace Theorem has independent significance: For the probability value $$\mathsf{P}(S_{n} = m) = \binom{n}{m} p^{m} (1 - p)^{n - m}, \qquad \text{where} ~ m \in \mathbb{Z} \cap [0,n],$$ one has $$\mathsf{P}(S_{n} = m) = \frac{1}{\sqrt{n p (1 - p)}} (1 + \epsilon_{n}) \phi(x),$$ where $\phi: \mathbb{R} \to \mathbb{R}_{> 0}$, defined by $$\forall x \in \mathbb{R}: \qquad \phi(x) \stackrel{\text{df}}{=} \frac{1}{\sqrt{2 \pi}} e^{- x^{2} / 2},$$ is the density of the standard normal distribution, and $\epsilon_{n} \to 0$ as $n \to \infty$ uniformly for all $m$ for which $x = \dfrac{m - n p}{\sqrt{n p (1 - p)}}$ lies in some finite interval.
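The local theorem admits the same kind of numerical check. In this Python sketch the parameters $n = 1000$, $p = 0.3$ are again arbitrary, and $m = 310$ is chosen so that the corresponding $x$ lies in a bounded interval; the exact probability $\mathsf{P}(S_{n} = m)$ is compared with $\phi(x)/\sqrt{n p (1 - p)}$:

```python
import math

def phi(x):
    # Standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

n, p = 1000, 0.3                 # illustrative parameters
m = 310                          # keeps x = (m - np)/sigma bounded
sigma = math.sqrt(n * p * (1 - p))
x = (m - n * p) / sigma

exact = math.comb(n, m) * p**m * (1 - p)**(n - m)
local = phi(x) / sigma           # the local-theorem approximation
print(exact, local)              # close; the relative error vanishes as n grows
```

Here the relative error $\epsilon_{n}$ is already well below one percent, illustrating the uniform convergence for $x$ in a finite interval.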

In its general form, the theorem was proved by P. S. Laplace [L]. The special case $p = 0.5$ of the Laplace Theorem was studied by A. de Moivre [M]. Therefore, the Laplace Theorem is sometimes called the “de Moivre–Laplace Theorem”.

For practical applications it is important to have an idea of the errors that arise when these approximation formulas are used. In the asymptotic formula $$\forall y \in \mathbb{R}: \qquad \mathsf{P}(S_{n} < y) = \Phi \! \left( \frac{y - n p + 0.5}{\sqrt{n p (1 - p)}} \right) + {R_{n}}(y),$$ which is more precise than the limit relation above, the remainder term ${R_{n}}(y)$ has order $\mathcal{O} \! \left( \dfrac{1}{\sqrt{n}} \right)$ uniformly for all real numbers $y$. For the uniform approximation of the binomial distribution by means of the normal distribution, the following formula of J. V. Uspensky (1937) is more useful: If $\sigma = \sqrt{n p (1 - p)}$, then for any two real numbers $a$ and $b$ satisfying $a < b$, one has $$\mathsf{P}(a < S_{n} < b) = \Phi \! \left( \frac{b - n p + 0.5}{\sigma} \right) - \Phi \! \left( \frac{a - n p - 0.5}{\sigma} \right) + \psi \! \left( \frac{b - n p + 0.5}{\sigma} \right) - \psi \! \left( \frac{a - n p - 0.5}{\sigma} \right) + \Delta,$$ where $\psi: \mathbb{R} \to \mathbb{R}$ is defined by $$\forall x \in \mathbb{R}: \qquad \psi(x) \stackrel{\text{df}}{=} \frac{1 - 2 p}{6 \sigma} (1 - x^{2}) \phi(x),$$ and for $\sigma \geq 5$, $$|\Delta| < \frac{(0.13 + 0.18 |1 - 2 p|)}{\sigma^{2}} + \frac{1}{e^{3 \sigma / 2}}.$$
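Uspensky's formula can be evaluated directly. The following Python sketch uses arbitrary illustrative parameters chosen so that $\sigma \geq 5$; note that the $\pm 0.5$ continuity corrections correspond to including integer endpoints, so the exact probability is summed over $a \leq S_{n} \leq b$, matching Uspensky's original statement for inclusive integer bounds:

```python
import math

def Phi(x):
    # Standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi(x):
    # Standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def uspensky(n, p, a, b):
    # Uspensky's refined normal approximation for integer endpoints a <= S_n <= b
    sigma = math.sqrt(n * p * (1 - p))
    def psi(x):
        return (1 - 2 * p) / (6 * sigma) * (1 - x * x) * phi(x)
    xb = (b - n * p + 0.5) / sigma
    xa = (a - n * p - 0.5) / sigma
    return Phi(xb) - Phi(xa) + psi(xb) - psi(xa)

n, p = 500, 0.2                      # illustrative; sigma ~ 8.9 >= 5
a, b = 92, 112                       # integer endpoints, asymmetric about np = 100
exact = sum(math.comb(n, m) * p**m * (1 - p)**(n - m)
            for m in range(a, b + 1))
approx = uspensky(n, p, a, b)
print(exact, approx)
```

With these parameters the stated bound on $|\Delta|$ is about $3 \cdot 10^{-3}$, and the observed discrepancy between the exact sum and the approximation stays well inside it.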

To improve the relative accuracy of the approximation, S. N. Bernstein (1943) and W. Feller [F] (1945) proposed other formulas.

References

[L] P. S. Laplace, “Théorie analytique des probabilités”, Paris (1812). MR2274728, MR1400403, MR1400402, Zbl 1047.01534, Zbl 1047.01533
[M] A. de Moivre, “Miscellanea analytica de seriebus et quadraturis”, London (1730).
[PR] Yu. V. Prohorov, Yu. A. Rozanov, “Probability theory, basic concepts. Limit theorems, random processes”, Springer (1969). (Translated from Russian) MR0251754
[F] W. Feller, “On the normal approximation to the binomial distribution”, Ann. Math. Statist., 16 (1945), pp. 319–329. MR0015706, Zbl 0060.28703
[F2] W. Feller, “An introduction to probability theory and its applications”, 1, Wiley (1968).