Chebyshev inequality in probability theory
Latest revision as of 16:43, 4 June 2020


Bienaymé–Chebyshev inequality

2020 Mathematics Subject Classification: Primary: 60E15

An inequality in probability theory that gives a bound on the probability of deviation of a given random variable from its mathematical expectation in terms of its variance. Let $ X ( \omega ) $ be a random variable with finite mathematical expectation $ {\mathsf E} X ( \omega ) $ and variance $ {\mathsf D} X ( \omega ) $. Chebyshev's inequality states that for any $ \epsilon > 0 $ the probability of the event

$$ \{ \omega : | X ( \omega ) - {\mathsf E} X | \geq \epsilon \} $$

does not exceed $ {\mathsf D} X / \epsilon ^ {2} $, or

$$ \tag{1 } {\mathsf P} \{ | X - {\mathsf E} X | \geq t \sqrt { {\mathsf D} X } \} \leq \frac{1}{t ^ {2} } . $$
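Inequality (1) is easy to check numerically. The sketch below (distribution, sample size and the value of $ t $ are arbitrary illustrative choices) compares an empirical tail probability with the bound $ 1 / t ^ {2} $ for an exponential random variable, for which $ {\mathsf E} X = {\mathsf D} X = 1 $:

```python
import random

random.seed(0)

n = 100_000
t = 2.0

# Exponential(1): E X = 1 and D X = 1, so sqrt(D X) = 1.
samples = [random.expovariate(1.0) for _ in range(n)]

# Empirical probability of the event |X - E X| >= t * sqrt(D X).
tail = sum(1 for x in samples if abs(x - 1.0) >= t * 1.0) / n

chebyshev_bound = 1.0 / t ** 2  # right-hand side of (1)

print(tail, chebyshev_bound)
assert tail <= chebyshev_bound
```

The empirical tail (about $ e ^ {-3} \approx 0.05 $ here) sits well below the Chebyshev bound $ 0.25 $, illustrating that the inequality is universal but often far from tight for a specific distribution.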

This inequality was discovered independently by I. Bienaymé (1853) and P.L. Chebyshev (1866). In modern literature this inequality is usually referred to as Chebyshev's inequality, possibly because the name of Chebyshev is associated with an application of it in the proof of the law of large numbers (a theorem of Chebyshev).

Chebyshev's inequality is a representative of a whole class of inequalities of this type, the simplest of which asserts that for a non-negative random variable $ X $ with finite mathematical expectation $ {\mathsf E} X $,

$$ \tag{2 } {\mathsf P} \{ X \geq \epsilon \} \leq \frac{ {\mathsf E} X }{ \epsilon } $$

(this is sometimes called Markov's inequality). This implies an inequality for arbitrary random variables, which depends on the moments:

$$ {\mathsf P} \{ | X | \geq \epsilon \} \leq \ \frac{ {\mathsf E} | X | ^ {r} }{\epsilon ^ {r} } , $$

$$ {\mathsf P} \{ | X - {\mathsf E} X | \geq \epsilon \} \leq \frac{ {\mathsf E} | X - {\mathsf E} X | ^ {r} }{\epsilon ^ {r} } ,\ r \geq 1 $$

(for $ r = 2 $ this is just the Chebyshev inequality), and also the more general inequality
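Both moment inequalities follow from (2) in a single step (a standard argument, stated here for completeness): for $ r \geq 1 $ and $ \epsilon > 0 $ the events $ \{ | X | \geq \epsilon \} $ and $ \{ | X | ^ {r} \geq \epsilon ^ {r} \} $ coincide, so applying (2) to the non-negative random variable $ | X | ^ {r} $ gives

$$ {\mathsf P} \{ | X | \geq \epsilon \} = {\mathsf P} \{ | X | ^ {r} \geq \epsilon ^ {r} \} \leq \frac{ {\mathsf E} | X | ^ {r} }{\epsilon ^ {r} } ; $$

replacing $ X $ by $ X - {\mathsf E} X $ and taking $ r = 2 $ recovers Chebyshev's inequality.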

$$ \tag{3 } {\mathsf P} \{ | X | \geq \epsilon \} \leq \ \frac{ {\mathsf E} f ( X) }{f ( \epsilon ) } $$

for a non-negative even function $ f ( x) $ that is non-decreasing for positive $ x $. Inequality (3) indicates a way to obtain new inequalities of the same type, for example the exponential inequality

$$ {\mathsf P} \{ X \geq \epsilon \} \leq \ \frac{ {\mathsf E} e ^ {cX} }{e ^ {c \epsilon } } ,\ \ c > 0 . $$

It has become traditional to consider all these inequalities to be of Chebyshev type, and even to call them Chebyshev inequalities. There is a general principle for obtaining Chebyshev inequalities by imposing conditions on the moments, based on the use of the system of Chebyshev polynomials (cf. [KS]). For arbitrary random variables the Chebyshev inequalities give precise and best possible bounds, but in certain concrete situations these bounds can be improved. For example, if $ X $ has a unimodal distribution with mode $ \mu $ coinciding with the mathematical expectation, then Gauss' inequality holds:

$$ {\mathsf P} \{ | X - \mu | \geq \epsilon \} \leq \ \frac{4}{9} \frac{\sigma ^ {2} }{\epsilon ^ {2} } ,\ \ \epsilon \geq \frac{2 \sigma }{\sqrt 3 } , $$

where $ \sigma ^ {2} = {\mathsf D} X $.

The importance of Chebyshev's inequality in probability theory lies not so much in its exactness, but in its simplicity and universality. Chebyshev's inequality and its modifications, applied to sums of random variables, played a large part in the proofs of various forms of the law of large numbers and the law of the iterated logarithm. Chebyshev's inequality for sums of independent random variables has been subject to generalization and improvement in two different directions. The first of these is connected with the transition from the Chebyshev inequality

$$ {\mathsf P} \{ | X _ {1} + \dots + X _ {n} - ( {\mathsf E} X _ {1} + \dots + {\mathsf E} X _ {n} ) | \geq \epsilon \} \leq \frac{ {\mathsf D} ( X _ {1} + \dots + X _ {n} ) }{\epsilon ^ {2} } $$

to the significantly stronger inequality

$$ {\mathsf P} \left \{ \max _ {1 \leq k \leq n } | X _ {1} + \dots + X _ {k} - ( {\mathsf E} X _ {1} + \dots + {\mathsf E} X _ {k} ) | \geq \epsilon \right \} \leq \frac{ {\mathsf D} ( X _ {1} + \dots + X _ {n} ) }{\epsilon ^ {2} } , $$
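Kolmogorov's maximal inequality can also be illustrated by simulation. The sketch below (walk length, threshold and trial count are arbitrary illustrative choices) estimates the probability that the maximum of the partial sums of $ \pm 1 $ steps exceeds $ \epsilon $, and compares it with $ {\mathsf D} ( X _ {1} + \dots + X _ {n} ) / \epsilon ^ {2} = n / \epsilon ^ {2} $:

```python
import random

random.seed(0)

n_steps = 100   # number of summands X_1, ..., X_n (X_i = +-1, so E X_i = 0, D X_i = 1)
eps = 25.0
trials = 20_000

exceed = 0
for _ in range(trials):
    s, running_max = 0, 0
    for _ in range(n_steps):
        s += random.choice((-1, 1))
        running_max = max(running_max, abs(s))
    if running_max >= eps:
        exceed += 1

empirical = exceed / trials
kolmogorov_bound = n_steps / eps ** 2   # D(X_1 + ... + X_n) / eps^2

print(empirical, kolmogorov_bound)
assert empirical <= kolmogorov_bound
```

Note that the bound controls the maximum over all partial sums at the price of only the final variance, which is what makes it strong enough for the strong law of large numbers.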

which was proved by A.N. Kolmogorov and applied by him to prove the strong law of large numbers (cf. Kolmogorov inequality).

The second direction is concerned with replacing the power of $ \epsilon $ in Chebyshev's inequality by an exponentially decaying bound, and leads to the Bernshtein–Kolmogorov inequality:

$$ {\mathsf P} \{ | X _ {1} + \dots + X _ {n} | \geq \epsilon \} \leq \ 2 \mathop{\rm exp} \left \{ - \frac{\epsilon ^ {2} }{2 \sigma ^ {2} ( 1 + a / 3 ) } \right \} , $$

where $ | X _ {i} | \leq C $, $ {\mathsf E} X _ {i} = 0 $, $ \sigma ^ {2} = {\mathsf D} ( X _ {1} + \dots + X _ {n} ) $, $ a = C \epsilon / \sigma ^ {2} $ (cf. Bernstein inequality). Such improvements of Chebyshev inequalities are obtained under additional restrictions on the summands $ X _ {i} $.
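The gain from the exponential bound can be seen by direct computation. In the sketch below all parameters ($ n $, $ C $, the variances, the values of $ \epsilon $) are arbitrary illustrative choices, with $ {\mathsf D} X _ {i} $ taken at its maximal value $ C ^ {2} $:

```python
import math

# Compare the Chebyshev bound D(S_n)/eps^2 with the Bernshtein--Kolmogorov
# bound 2*exp(-eps^2 / (2*sigma^2*(1 + a/3))) for a sum of n bounded,
# centred summands with |X_i| <= C.
n = 100
C = 1.0
sigma2 = n * C ** 2           # sigma^2 = D(X_1 + ... + X_n), each D X_i = C^2

for eps in (20.0, 40.0, 60.0):
    a = C * eps / sigma2
    chebyshev = sigma2 / eps ** 2
    bernstein = 2 * math.exp(-eps ** 2 / (2 * sigma2 * (1 + a / 3)))
    print(eps, chebyshev, bernstein)
```

For moderate deviations the two bounds are comparable, but as $ \epsilon $ grows the exponential bound falls far below the power bound (e.g. at $ \epsilon = 40 $ it is roughly $ 0.002 $ against Chebyshev's $ 0.0625 $).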

Multi-dimensional analogues have been obtained of some of the inequalities stated here (cf. [P]).

References

[C] P.L. Chebyshev, "?", Mat. Sb. , 2 (1867) pp. 1–9
[M] A.A. Markov, "Wahrscheinlichkeitsrechung" , Teubner (1912) (Translated from Russian) Zbl 39.0292.02
[K] A.N. Kolmogorov, "Foundations of the theory of probability" , Chelsea, reprint (1950) (Translated from German) MR0032961
[KS] S. Karlin, V. Studden, "Tchebycheff systems: with applications in analysis and statistics" , Interscience (1966) MR0204922 Zbl 0153.38902
[P] Yu.V. Prokhorov, "Multivariate distributions: inequalities and limit theorems" J. Soviet Math. , 2 (1974) pp. 475–488 Itogi Nauk. i Tekhn. Teor. Veroyatnost. Mat. Stat. Teoret. Kibernet. , 10 (1972) pp. 5–24 Zbl 0295.60013
How to Cite This Entry:
Chebyshev inequality in probability theory. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Chebyshev_inequality_in_probability_theory&oldid=20850
This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article