
Zero-one law


The statement in probability theory that every event (a so-called tail event) whose occurrence is determined by arbitrarily distant elements of a sequence of independent random events or random variables has probability $0$ or $1$. This law extends to systems of random variables depending on a continuous parameter (see below).

For individual tail events, the fact that their probability is $0$ or $1$ was established at the beginning of the 20th century. Thus, let $A_1, A_2, \ldots$ be a sequence of independent events. Let $A$ be the tail event that infinitely many of the events $A_n$ occur, i.e.

$$ A = \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k . $$

Then, as noted by E. Borel [1], either $\mathsf{P}(A) = 0$ or $\mathsf{P}(A) = 1$. By a simple calculation he showed that

$$ \mathsf{P}(A) = 0 \quad \text{if} \quad \sum_{n=1}^{\infty} \mathsf{P}(A_n) < \infty $$

and

$$ \mathsf{P}(A) = 1 \quad \text{if} \quad \sum_{n=1}^{\infty} \mathsf{P}(A_n) = \infty $$

(see Borel–Cantelli lemma).
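
For example (an illustration added here, not part of the original article), take independent events with

$$ \mathsf{P}(A_n) = \frac{1}{n^2} \quad\text{or}\quad \mathsf{P}(A_n) = \frac{1}{n} . $$

In the first case $\sum_n n^{-2} < \infty$, so with probability $1$ only finitely many $A_n$ occur; in the second case $\sum_n n^{-1} = \infty$ and, by independence, with probability $1$ infinitely many $A_n$ occur. An intermediate value of $\mathsf{P}(A)$ is impossible.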

Next, if $X_1, X_2, \ldots$ is a sequence of independent random variables, then the probability that the series $\sum_{n=1}^{\infty} X_n$ converges can only be $0$ or $1$. This fact (together with a criterion that makes it possible to distinguish these two cases) was established by A.N. Kolmogorov in 1928 (see [2], [5]).
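
To illustrate the dichotomy (an example added here, not from the original article), let $X_n = \pm n^{-\alpha}$ with independent, equiprobable random signs. Since these variables are uniformly bounded and have mean zero, the convergence criterion (Kolmogorov's three-series theorem) reduces to the convergence of the series of variances

$$ \sum_{n=1}^{\infty} \operatorname{Var} X_n = \sum_{n=1}^{\infty} n^{-2\alpha} , $$

so $\sum_n X_n$ converges with probability $1$ when $\alpha > 1/2$ and diverges with probability $1$ when $\alpha \leq 1/2$; no intermediate probability can occur.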

Tail events connected with analytic properties of sums of sequences of functions, for example, power series with random terms, have also been investigated. Thus, Borel's vague assertion (1896) that for "arbitrary coefficients" the boundary of the disc of convergence is the natural boundary of the analytic function represented by the series was put in the following precise form by H. Steinhaus [3]. Let $\theta_1, \theta_2, \ldots$ be independent random variables uniformly distributed on $[0,1]$ (cf. Uniform distribution), let $a_1, a_2, \ldots$ be given numbers and suppose that the power series

$$ f(z) = \sum_{n=1}^{\infty} a_n e^{2\pi i \theta_n} z^n $$

has radius of convergence $\rho$, $0 < \rho < \infty$. Then the (tail) event that the function $f$ cannot be extended analytically across the boundary of the disc $|z| < \rho$ has probability $1$. B. Jessen [4] has proved that any tail event connected with a sequence of independent random variables that are uniformly distributed on $[0,1]$ has probability $0$ or $1$.
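
In the simplest case (an illustration added here, not from the original article) one may take $a_n = 1$ for all $n$: the series $\sum_n e^{2\pi i \theta_n} z^n$ has radius of convergence $\rho = 1$, and by Steinhaus' theorem the unit circle $|z| = 1$ is, with probability $1$, the natural boundary of the resulting analytic function.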

A general zero-one law was stated by Kolmogorov (see [5]) as follows. Let $X_1, X_2, \ldots$ be a sequence of random variables and let $f(x_1, x_2, \ldots)$ be a Borel-measurable function such that the conditional probability

$$ \mathsf{P} \{ f(X_1, X_2, \ldots) = 0 \mid X_1, \ldots, X_n \} $$

of the relation

$$ f(X_1, X_2, \ldots) = 0 $$

given the first $n$ variables is equal to the unconditional probability

$$ \mathsf{P} \{ f(X_1, X_2, \ldots) = 0 \} \qquad (*) $$

for every $n \geq 1$. Under these conditions the probability $(*)$ is $0$ or $1$. For independent $X_1, X_2, \ldots$ the zero-one law as stated at the beginning of the article follows from this.
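
In modern terminology (a reformulation added here, not part of the original article), the result is usually stated in terms of the tail $\sigma$-algebra

$$ \mathcal{T} = \bigcap_{n=1}^{\infty} \sigma( X_n, X_{n+1}, \ldots ) : $$

if $X_1, X_2, \ldots$ are independent, then every event $A \in \mathcal{T}$ has $\mathsf{P}(A) = 0$ or $1$. Typical tail events are $\{ \sum_n X_n \text{ converges} \}$ and $\{ \limsup_n X_n \leq c \}$ for a fixed constant $c$.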

As P. Lévy proved in 1937 (see [6]), Kolmogorov's theorem follows from a more general property of conditional probabilities, namely that

$$ \lim_{n \to \infty} \mathsf{P} \{ f(X_1, X_2, \ldots) = 0 \mid X_1, \ldots, X_n \} $$

almost certainly equals $1$ or $0$ (depending on whether $f(X_1, X_2, \ldots)$ is zero or not). In turn, this assertion follows from a theorem on martingales (see [7], Chapt. III, Sect. 1; Chapt. VII, Sects. 4, 5, 7 and the comments). In [7], Chapt. VII, Sect. 11, there is an analogue of the zero-one law for random processes with independent increments; this implies, in particular, that the sample functions of a separable Gaussian process with continuous correlation function either are continuous at every point with probability $1$ or have, with probability $1$, a discontinuity of the second kind at every point (see also [8]).
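
In modern martingale language (a sketch added here, not part of the original article; write $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$ and $\mathcal{F}_\infty = \sigma(\bigcup_n \mathcal{F}_n)$), Lévy's result states that for every event $A \in \mathcal{F}_\infty$,

$$ \mathsf{P}(A \mid \mathcal{F}_n) \to \mathbf{1}_A \quad \text{almost certainly as } n \to \infty , $$

because the sequence $\mathsf{P}(A \mid \mathcal{F}_n)$ is a uniformly integrable martingale converging to $\mathsf{P}(A \mid \mathcal{F}_\infty) = \mathbf{1}_A$. If $A$ is a tail event and the $X_n$ are independent, then $\mathsf{P}(A \mid \mathcal{F}_n) = \mathsf{P}(A)$ for every $n$, so $\mathsf{P}(A) = \mathbf{1}_A$ almost certainly, that is, $\mathsf{P}(A)$ equals $0$ or $1$.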

For the special case of a sequence of independent and identically-distributed random variables it has been shown (see [9]) that the probability not only of any tail event, but also of any event that is invariant under any permutation of finitely many terms of the sequence, is $0$ or $1$.
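
For example (an illustration added here, not from the original article), for the random walk $S_n = X_1 + \cdots + X_n$ the event

$$ \{ S_n = 0 \ \text{for infinitely many } n \} $$

is invariant under permutations of finitely many of the $X_k$ but is not, in general, a tail event; by the Hewitt–Savage theorem [9] its probability is nevertheless $0$ or $1$.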

References

[1] E. Borel, "Les probabilités dénombrables et leurs applications arithmétiques" Rend. Circ. Mat. Palermo (2) , 27 (1909) pp. 247–271
[2] A.N. Kolmogorov, "Über die Summen durch den Zufall bestimmter unabhängiger Grössen" Math. Ann. , 99 (1928) pp. 309–319
[3] H. Steinhaus, "Über die Wahrscheinlichkeit dafür dass der Konvergenzkreis einer Potenzreihe ihre natürliche Grenze ist" Math. Z. , 31 (1929) pp. 408–416
[4] B. Jessen, "The theory of integration in a space of an infinite number of dimensions" Acta Math. , 63 (1934) pp. 249–323
[5] A.N. Kolmogorov, "Foundations of the theory of probability" , Chelsea, reprint (1950) (Translated from German)
[6] P. Lévy, "Théorie de l'addition des variables aléatoires" , Gauthier-Villars (1937)
[7] J.L. Doob, "Stochastic processes" , Chapman & Hall (1953)
[8] R.L. Dobrushin, "Properties of sample functions of a stationary Gaussian process" Theor. Probab. Appl. , 5 : 1 (1960) pp. 117–120 (translation of Teor. Veroyatnost. i ee Primenen. , 5 : 1 (1960) pp. 132–134)
[9] E. Hewitt, L.J. Savage, "Symmetric measures on Cartesian products" Trans. Amer. Math. Soc. , 80 (1955) pp. 470–501


How to Cite This Entry:
Zero-one law. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Zero-one_law&oldid=36179
This article was adapted from an original article by A.V. Prokhorov and Yu.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098. See original article