Zero-one law

From Encyclopedia of Mathematics
Revision as of 08:29, 6 June 2020


The statement in probability theory that every event (a so-called tail event) whose occurrence is determined by arbitrarily distant elements of a sequence of independent random events or random variables has probability $ 0 $ or $ 1 $. This law extends to systems of random variables depending on a continuous parameter (see below).

For individual tail events the fact that their probability is $ 0 $ or $ 1 $ was established at the beginning of the 20th century. Thus, let $ A_1, A_2, \dots $ be a sequence of independent events. Let $ A $ be the tail event that infinitely many events $ A_k $ occur, i.e.

$$ A = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty A_k . $$

Then, as noted by E. Borel [1], either

$$ {\mathsf P}(A) = 0 \quad \textrm{or} \quad {\mathsf P}(A) = 1. $$

By a simple calculation he showed that

$$ {\mathsf P}(A) = 0 \quad \textrm{if} \quad \sum_{n=1}^\infty {\mathsf P}(A_n) < \infty , $$

and

$$ {\mathsf P}(A) = 1 \quad \textrm{if} \quad \sum_{n=1}^\infty {\mathsf P}(A_n) = \infty $$

(see Borel–Cantelli lemma).
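Both halves of the dichotomy are easy to observe numerically. The following Python sketch (purely illustrative; the horizon $ N $ and trial count are arbitrary choices) samples independent events with $ {\mathsf P}(A_n) = 1/n^2 $ (summable) and $ {\mathsf P}(A_n) = 1/n $ (non-summable) and counts how many occur among the first $ N $:

```python
import random

def count_occurrences(p, n_max, rng):
    """Sample independent events A_1, ..., A_{n_max} with P(A_n) = p(n)
    and count how many of them occur."""
    return sum(1 for n in range(1, n_max + 1) if rng.random() < p(n))

rng = random.Random(0)
N, TRIALS = 10_000, 20

# sum 1/n^2 < infinity: only finitely many A_n occur a.s. (P(A) = 0);
# the expected count among the first N events is about pi^2/6, whatever N is.
finite_counts = [count_occurrences(lambda n: 1 / n ** 2, N, rng) for _ in range(TRIALS)]

# sum 1/n = infinity: infinitely many A_n occur a.s. (P(A) = 1);
# the count among the first N events grows like log N.
divergent_counts = [count_occurrences(lambda n: 1 / n, N, rng) for _ in range(TRIALS)]

print(sum(finite_counts) / TRIALS, sum(divergent_counts) / TRIALS)
```

In the summable case the average count stays near $ \pi^2/6 \approx 1.64 $ however large $ N $ becomes; in the non-summable case it keeps growing, in line with $ {\mathsf P}(A) = 0 $ and $ {\mathsf P}(A) = 1 $ respectively.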

Next, if $ X_1, X_2, \dots $ is a sequence of independent random variables, then the probability that the series $ \sum_{k=1}^\infty X_k $ converges can only be $ 0 $ or $ 1 $. This fact (together with a criterion that makes it possible to distinguish these two cases) was established by A.N. Kolmogorov in 1928 (see [2], [5]).
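Kolmogorov's criterion applies, for instance, to the random-sign harmonic series $ \sum_k \varepsilon_k / k $ with i.i.d. signs $ \varepsilon_k = \pm 1 $: the summands have mean $ 0 $ and $ \sum_k \operatorname{Var}(\varepsilon_k / k) = \sum_k 1/k^2 < \infty $, so the series converges almost surely. A small Python sketch (the term count and cutoff are arbitrary) shows the late partial sums settling down:

```python
import random

def partial_sums(rng, n_terms):
    """Partial sums of sum_k eps_k / k with i.i.d. random signs eps_k = +-1."""
    s, out = 0.0, []
    for k in range(1, n_terms + 1):
        s += (1 if rng.random() < 0.5 else -1) / k
        out.append(s)
    return out

rng = random.Random(1)
sums = partial_sums(rng, 100_000)

# The variances sum to a finite value, so by Kolmogorov's criterion the series
# converges a.s. -- the partial sums past a large index barely move at all.
tail_spread = max(sums[50_000:]) - min(sums[50_000:])
print(tail_spread)
```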

Tail events connected with analytic properties of sums of sequences of functions, for example, power series with random terms, have also been investigated. Thus, Borel's vague assertion (1896) that for "arbitrary coefficients" the boundary of the disc of convergence is the natural boundary of the analytic function represented by the coefficients was put in the following precise form by H. Steinhaus [3]. Let $ X_1, X_2, \dots $ be independent random variables uniformly distributed on $ (0, 1) $ (cf. Uniform distribution), let $ a_k $ be given numbers and suppose that the power series

$$ f(z; X_1, X_2, \dots) = \sum_{k=1}^\infty a_k e^{2 \pi i X_k} z^{k-1} $$

has radius of convergence $ R > 0 $. Then the (tail) event that the function $ f $ cannot be extended across the boundary of the disc $ | z | \leq R $ has probability $ 1 $. B. Jessen [4] has proved that any tail event connected with a sequence of independent random variables that are uniformly distributed on $ (0, 1) $ has probability $ 0 $ or $ 1 $.
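One ingredient of Steinhaus's statement can be checked mechanically: randomizing the phases multiplies each coefficient $ a_k $ by a unit-modulus factor $ e^{2 \pi i X_k} $, so the moduli $ | a_k | $, and hence by the Cauchy–Hadamard formula the radius of convergence $ R $, are the same for every realization. A Python sketch (the particular coefficients are an arbitrary illustration):

```python
import cmath
import math
import random

rng = random.Random(2)
a = [1.0 / (k + 1) for k in range(50)]   # illustrative coefficients a_k > 0

# Multiplying by exp(2*pi*i*X_k) rotates each coefficient in the complex plane
# without changing its modulus, so limsup |a_k|^(1/k), and therefore R,
# is unaffected by the randomization.
coeffs = [a_k * cmath.exp(2j * math.pi * rng.random()) for a_k in a]
moduli_preserved = all(abs(abs(c) - a_k) < 1e-12 for c, a_k in zip(coeffs, a))
print(moduli_preserved)
```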

A general zero-one law was stated by Kolmogorov (see [5]) as follows. Let $ X_1, X_2, \dots $ be a sequence of random variables and let $ f(X_1, X_2, \dots) $ be a Borel-measurable function such that the conditional probability

$$ {\mathsf P} \{ f(X_1, X_2, \dots) = 0 \mid X_1, \dots, X_n \} $$

of the relation

$$ f(X_1, X_2, \dots) = 0 $$

given the first $ n $ variables $ X_1, \dots, X_n $ is equal to the unconditional probability

$$ \tag{*} {\mathsf P} \{ f(X_1, X_2, \dots) = 0 \} $$

for every $ n $. Under these conditions the probability (*) is $ 0 $ or $ 1 $. For independent $ X_1, X_2, \dots $ the zero-one law as stated at the beginning of the article follows from this.
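In modern measure-theoretic notation Kolmogorov's law is usually stated via the tail $ \sigma $-algebra:

```latex
% Tail sigma-algebra form of Kolmogorov's zero-one law.
% For a sequence X_1, X_2, ... of independent random variables, set
\mathcal{T} \;=\; \bigcap_{n=1}^{\infty} \sigma ( X_n , X_{n+1} , \dots ) .
% Then every tail event is trivial:
A \in \mathcal{T} \quad \Longrightarrow \quad {\mathsf P} ( A ) \in \{ 0 , 1 \} .
```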

As P. Lévy proved in 1937 (see [6]), Kolmogorov's theorem follows from a more general property of conditional probabilities, namely that

$$ \lim_{n \rightarrow \infty} {\mathsf P} \{ f(X_1, X_2, \dots) = 0 \mid X_1, \dots, X_n \} $$

almost certainly equals $ 1 $ or $ 0 $ (depending on whether $ f(X_1, X_2, \dots) $ is zero or not). In turn, this assertion follows from a theorem on martingales (see [7], Chapt. III, Sect. 1; Chapt. VII, Sects. 4, 5, 7 and the comments; in Sect. 11 there is an analogue of the zero-one law for random processes with independent increments; this implies, in particular, that the sample functions of a separable Gaussian process with continuous correlation function are continuous with probability $ 1 $ at every point or have, with probability $ 1 $, a discontinuity of the second kind at every point; see also [8]).
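Lévy's convergence of conditional probabilities can be watched directly in a finite-horizon toy example (the event and all parameters here are illustrative): for the event that more than half of $ 100 $ fair coin flips are heads, $ {\mathsf P}(A \mid X_1, \dots, X_n) $ is an explicit binomial tail, and at $ n = 100 $ it equals the indicator of $ A $ exactly.

```python
import math
import random

def cond_prob(target, remaining, current):
    """P(total heads > target | `current` heads so far, `remaining` fair flips left)."""
    need = target - current + 1          # additional heads still required
    if need <= 0:
        return 1.0
    if need > remaining:
        return 0.0
    return sum(math.comb(remaining, j) for j in range(need, remaining + 1)) / 2 ** remaining

rng = random.Random(3)
flips = [rng.random() < 0.5 for _ in range(100)]   # X_1, ..., X_100
heads = 0
probs = []
for n, f in enumerate(flips, 1):
    heads += f
    probs.append(cond_prob(50, 100 - n, heads))

# Levy: P(A | X_1, ..., X_n) converges to the indicator of A; here the event
# depends only on the first 100 variables, so the limit is reached at n = 100.
print(probs[-1])
```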

For the special case of a sequence $ X_1, X_2, \dots $ of independent and identically-distributed random variables it has been shown (see [9]) that the probability not only of any tail event, but also of any event that is invariant under any permutation of finitely many terms of the sequence is $ 0 $ or $ 1 $.
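This result of E. Hewitt and L.J. Savage is usually stated via the exchangeable $ \sigma $-algebra:

```latex
% Hewitt-Savage zero-one law (standard modern statement).
% Let X_1, X_2, ... be i.i.d. and let \mathcal{E} denote the exchangeable
% sigma-algebra: the events invariant under every permutation of finitely
% many coordinates. Then
A \in \mathcal{E} \quad \Longrightarrow \quad {\mathsf P} ( A ) \in \{ 0 , 1 \} .
% Every tail event is exchangeable, so this strengthens Kolmogorov's law
% in the i.i.d. case.
```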

References

[1] E. Borel, "Les probabilités dénombrables et leurs applications arithmétiques" Rend. Circ. Mat. Palermo (2) , 27 (1909) pp. 247–271
[2] A.N. Kolmogorov, "Über die Summen durch den Zufall bestimmter unabhängiger Grössen" Math. Ann. , 99 (1928) pp. 309–319
[3] H. Steinhaus, "Über die Wahrscheinlichkeit dafür, dass der Konvergenzkreis einer Potenzreihe ihre natürliche Grenze ist" Math. Z. , 31 (1929) pp. 408–416
[4] B. Jessen, "The theory of integration in a space of an infinite number of dimensions" Acta Math. , 63 (1934) pp. 249–323
[5] A.N. Kolmogorov, "Foundations of the theory of probability" , Chelsea, reprint (1950) (Translated from German)
[6] P. Lévy, "Théorie de l'addition des variables aléatoires" , Gauthier-Villars (1937)
[7] J.L. Doob, "Stochastic processes" , Chapman & Hall (1953)
[8] R.L. Dobrushin, "Properties of sample functions of a stationary Gaussian process" Theor. Probab. Appl. , 5 : 1 (1960) pp. 117–120 Teor. Veroyatnost. i ee Primenen. , 5 : 1 (1960) pp. 132–134
[9] E. Hewitt, L.J. Savage, "Symmetric measures on Cartesian products" Trans. Amer. Math. Soc. , 80 (1955) pp. 470–501

How to Cite This Entry:
Zero-one law. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Zero-one_law&oldid=49247
This article was adapted from an original article by A.V. Prokhorov, Yu.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article