Zero-one law
The statement in probability theory that every event (a so-called tail event) whose occurrence is determined by arbitrarily distant elements of a sequence of independent random events or random variables has probability $ 0 $
or $ 1 $.
This law extends to systems of random variables depending on a continuous parameter (see below).
For individual tail events the fact that their probability is $ 0 $ or $ 1 $ was established at the beginning of the 20th century. Thus, let $ A_1, A_2, \dots $ be a sequence of independent events. Let $ A $ be the tail event that infinitely many events $ A_k $ occur, i.e.
$$ A = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty A_k . $$
Then, as noted by E. Borel [1], either
$$ {\mathsf P} ( A) = 0 \ \textrm{ or } \ {\mathsf P} ( A) = 1. $$
By a simple calculation he showed that
$$ {\mathsf P} ( A) = 0 \ \textrm{ if } \sum_{n=1} ^ \infty {\mathsf P} ( A _ {n} ) < \infty , $$
and
$$ {\mathsf P} ( A) = 1 \ \textrm{ if } \sum_{n=1} ^ \infty {\mathsf P} ( A _ {n} ) = \infty $$
(see Borel–Cantelli lemma).
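As a simple illustration (a standard example): if the events $ A_n $ are independent with $ {\mathsf P}(A_n) = 2^{-n} $, then $ \sum_{n=1}^\infty {\mathsf P}(A_n) = 1 < \infty $, so $ {\mathsf P}(A) = 0 $ and with probability $ 1 $ only finitely many of the $ A_n $ occur. If instead $ {\mathsf P}(A_n) = 1/n $, then
$$ \sum_{n=1}^\infty {\mathsf P}(A_n) = \sum_{n=1}^\infty \frac{1}{n} = \infty , $$
so $ {\mathsf P}(A) = 1 $ and with probability $ 1 $ infinitely many of the $ A_n $ occur.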
Next, if $ X_1, X_2, \dots $ is a sequence of independent random variables, then the probability that the series $ \sum_{k=1}^\infty X_k $ converges can only be $ 0 $ or $ 1 $. This fact (together with a criterion that makes it possible to distinguish these two cases) was established by A.N. Kolmogorov in 1928 (see [2], [5]).
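For example, if the independent variables satisfy $ {\mathsf P} \{ X_k = 2^{-k} \} = {\mathsf P} \{ X_k = -2^{-k} \} = 1/2 $, then $ \sum_{k=1}^\infty | X_k | \leq \sum_{k=1}^\infty 2^{-k} = 1 $ and the series converges with probability $ 1 $; if instead $ {\mathsf P} \{ X_k = 1 \} = {\mathsf P} \{ X_k = -1 \} = 1/2 $, the terms do not tend to $ 0 $ and the series converges with probability $ 0 $.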
Tail events connected with analytic properties of sums of sequences of functions, for example, power series with random terms, have also been investigated. Thus, Borel's vague assertion (1896) that for "arbitrary coefficients" the boundary of the disc of convergence is the natural boundary of the analytic function represented by the coefficients was put in the following precise form by H. Steinhaus [3]. Let $ X_1, X_2, \dots $ be independent random variables uniformly distributed on $ (0, 1) $ (cf. Uniform distribution), let $ a_k $ be given numbers and suppose that the power series
$$ f(z; X_1, X_2, \dots) = \sum_{k=1}^\infty a_k e^{2 \pi i X_k} z^{k-1} $$
has radius of convergence $ R > 0 $. Then the (tail) event that the function $ f $ cannot be extended across the boundary of the disc $ | z | \leq R $ has probability $ 1 $. B. Jessen [4] has proved that any tail event connected with a sequence of independent random variables that are uniformly distributed on $ ( 0, 1) $ has probability $ 0 $ or $ 1 $.
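A simple illustration of Steinhaus' theorem is obtained by taking all $ a_k = 1 $: then $ R = 1 $ and, with probability $ 1 $, the series $ \sum_{k=1}^\infty e^{2 \pi i X_k} z^{k-1} $ cannot be extended across the unit circle, although for the non-random choice $ X_k \equiv 0 $ it sums to $ 1/(1 - z) $, which extends to every point $ z \neq 1 $.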
A general zero-one law was stated by Kolmogorov (see [5]) as follows. Let $ X_1, X_2, \dots $ be a sequence of random variables and let $ f(X_1, X_2, \dots) $ be a Borel-measurable function such that the conditional probability
$$ {\mathsf P} \{ f(X_1, X_2, \dots) = 0 \mid X_1, \dots, X_n \} $$
of the relation
$$ f(X_1, X_2, \dots) = 0 $$
given the first $ n $ variables $ X_1, \dots, X_n $ is equal to the unconditional probability
$$ \tag{*} {\mathsf P} \{ f(X_1, X_2, \dots) = 0 \} $$
for every $ n $. Under these conditions the probability (*) is $ 0 $ or $ 1 $. For independent $ X_1, X_2, \dots $ the zero-one law as stated at the beginning of the article follows from this.
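For example, for independent $ X_1, X_2, \dots $ the events
$$ \Big\{ \sum_{k=1}^\infty X_k \ \textrm{ converges} \Big\} \ \textrm{ and } \ \Big\{ \lim_{n \rightarrow \infty} \frac{X_1 + \dots + X_n}{n} \ \textrm{ exists} \Big\} $$
are tail events (they are unaffected by changing finitely many of the $ X_k $), so each of them has probability $ 0 $ or $ 1 $.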
As P. Lévy proved in 1937 (see [6]), Kolmogorov's theorem follows from a more general property of conditional probabilities, namely that
$$ \lim_{n \rightarrow \infty} {\mathsf P} \{ f(X_1, X_2, \dots) = 0 \mid X_1, \dots, X_n \} $$
almost certainly equals $ 1 $ or $ 0 $ (depending on whether $ f(X_1, X_2, \dots) $ is zero or not). In turn, this assertion follows from a theorem on martingales (see [7], Chapt. III, Sect. 1; Chapt. VII, Sects. 4, 5, 7 and the comments; in Sect. 11 there is an analogue of the zero-one law for random processes with independent increments; this implies, in particular, that the sample functions of a separable Gaussian process with continuous correlation function are continuous with probability $ 1 $ at every point or have, with probability $ 1 $, a discontinuity of the second kind at every point; see also [8]).
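Stated for events rather than functions, Lévy's property reads as follows: for any event $ B $ determined by the sequence $ X_1, X_2, \dots $ (with $ \mathbf{1}_B $ denoting the indicator of $ B $),
$$ {\mathsf P} \{ B \mid X_1, \dots, X_n \} \rightarrow \mathbf{1}_B \ \textrm{ almost certainly as } \ n \rightarrow \infty . $$
If $ B $ is a tail event and the $ X_k $ are independent, the left-hand side equals $ {\mathsf P}(B) $ for every $ n $, so $ {\mathsf P}(B) = \mathbf{1}_B $ almost certainly, and hence $ {\mathsf P}(B) $ is $ 0 $ or $ 1 $.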
For the special case of a sequence $ X_1, X_2, \dots $ of independent and identically-distributed random variables it has been shown (see [9]) that the probability not only of any tail event, but also of any event that is invariant under any permutation of finitely many terms of the sequence is $ 0 $ or $ 1 $.
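The gain over the tail case can be seen on the random walk $ S_n = X_1 + \dots + X_n $ generated by such a sequence: the event $ \{ S_n = 0 \ \textrm{ for infinitely many } n \} $ is invariant under permutations of finitely many terms but is not a tail event (each partial sum $ S_n $ depends on $ X_1 $), yet by this theorem (the Hewitt–Savage zero-one law) its probability is $ 0 $ or $ 1 $.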
References
[1] | E. Borel, "Les probabilités dénombrables et leurs applications arithmétiques" Rend. Circ. Mat. Palermo (2) , 27 (1909) pp. 247–271 |
[2] | A.N. Kolmogorov, "Über die Summen durch den Zufall bestimmter unabhängiger Grössen" Math. Ann. , 99 (1928) pp. 309–319 |
[3] | H. Steinhaus, "Über die Wahrscheinlichkeit dafür, dass der Konvergenzkreis einer Potenzreihe ihre natürliche Grenze ist" Math. Z. , 31 (1929) pp. 408–416 |
[4] | B. Jessen, "The theory of integration in a space of an infinite number of dimensions" Acta Math. , 63 (1934) pp. 249–323 |
[5] | A.N. Kolmogorov, "Foundations of the theory of probability" , Chelsea, reprint (1950) (Translated from German) |
[6] | P. Lévy, "Théorie de l'addition des variables aléatoires" , Gauthier-Villars (1937) |
[7] | J.L. Doob, "Stochastic processes" , Chapman & Hall (1953) |
[8] | R.L. Dobrushin, "Properties of sample functions of a stationary Gaussian process" Theor. Probab. Appl. , 5 : 1 (1960) pp. 117–120 Teor. Veroyatnost. i ee Primenen. , 5 : 1 (1960) pp. 132–134 |
[9] | E. Hewitt, L.J. Savage, "Symmetric measures on Cartesian products" Trans. Amer. Math. Soc. , 80 (1955) pp. 470–501 |
[a1] | M. Loève, "Probability theory" , 1–2 , Graduate Texts in Mathematics 45,46 Springer (1977-8) Zbl 0359.60001 Zbl 0385.60001 |
[b1] | John C. Morgan, "On zero-one laws", Proc. Am. Math. Soc. 62 (1977) 353-358 Zbl 0369.54018 |
[b2] | Jordan M. Stoyanov, "Counterexamples in Probability", 3rd ed. Dover Books (2014) ISBN 0486499987 Zbl 1287.60004 |
Zero-one law. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Zero-one_law&oldid=40747