Erroneous decoding, probability of

From Encyclopedia of Mathematics
Revision as of 19:37, 5 June 2020


One of the possible measures for characterizing the accuracy of information reproduction over a communication channel (see also Information, exactness of reproducibility of). For the transmission of information $ \xi $ from a source which has $ M $ different possible values $ 1 \dots M $ with probability distribution $ p _ {m} = {\mathsf P} \{ \xi = m \} $, $ m = 1 \dots M $, a certain communication channel is used. Then for fixed methods of coding and decoding (see also Information, transmission of), the probability of erroneous decoding $ P _ {e,m} $, $ m = 1 \dots M $, is defined by

$$ P _ {e,m} = {\mathsf P} \{ \widetilde \xi \neq m \mid \xi = m \} , $$

where $ \widetilde \xi $ is the decoded message at the channel output. The quantity

$$ P _ {e} = {\mathsf P} \{ \widetilde \xi \neq \xi \} = \sum _ {m=1} ^ {M} p _ {m} P _ {e,m} $$

is called the mean probability of erroneous decoding. The study of the optimal probability of erroneous decoding, $ P _ {e} ^ { \mathop{\rm opt} } $, defined by

$$ P _ {e} ^ { \mathop{\rm opt} } = \inf P _ {e} , $$

is of particular interest. The infimum is taken over all possible methods of coding and decoding. Owing to the difficulty of obtaining an accurate expression for $ P _ {e} ^ { \mathop{\rm opt} } $, which is caused by the fact that optimal coding methods are usually unknown, the study of the asymptotic behaviour of $ P _ {e} ^ { \mathop{\rm opt} } $ when the length of the message in the channel increases is of interest. More precisely, the following situation is studied. Suppose a segment of length $ N $ of a communication channel with discrete time is used, and put $ R = ( \mathop{\rm ln} M)/N $. The asymptotic behaviour of $ P _ {e} ^ { \mathop{\rm opt} } $ has to be studied when $ N \rightarrow \infty $ and $ R = \textrm{const} $ (this means that the length of the message increases, while its transmission rate remains constant). If the channel is a homogeneous memoryless channel with discrete time and if the spaces $ Y $ and $ \widetilde{Y} $ of values of the components of the input and output signals are finite, then the following upper and lower bounds for $ P _ {e} ^ { \mathop{\rm opt} } = P _ {e} ^ { \mathop{\rm opt} } ( N, R) $ are known:

$$ \tag{1 } \mathop{\rm exp} \{ - N[ E _ {sp} ( R - \alpha ( N)) + \beta ( N)] \} \leq P _ {e} ^ { \mathop{\rm opt} } ( N, R) \leq \mathop{\rm exp} \{ - NE _ {r} ( R) \} , $$

where $ \alpha ( N) $ and $ \beta ( N) $ tend to zero with increasing $ N $, while the functions $ E _ {r} ( R) $ and $ E _ {sp} ( R) $ are defined as follows:

$$ \tag{2 } E _ {r} ( R) = \max _ {0 \leq \rho \leq 1 } \max _ {q( \cdot ) } [ E _ {0} ( \rho , q( \cdot )) - \rho R], $$

$$ \tag{3 } E _ {sp} ( R) = \sup _ {\rho > 0 } \max _ {q( \cdot ) } [ E _ {0} ( \rho , q( \cdot )) - \rho R], $$

where

$$ E _ {0} ( \rho , q( \cdot )) = \ - \mathop{\rm ln} \sum _ {\widetilde{y} \in \widetilde{Y} } \left [ \sum _ {y \in Y } q( y) q( y, \widetilde{y} ) ^ {1/( 1+ \rho ) } \right ] ^ {1+ \rho } . $$
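As a numerical illustration (not part of the original article), $ E _ {0} ( \rho , q( \cdot )) $ can be evaluated directly from a channel's transition matrix. The sketch below assumes, for concreteness, a binary symmetric channel with crossover probability $ 0.1 $ and the uniform input distribution; the function and variable names are illustrative.

```python
import math

def E_0(rho, q, Q):
    """Gallager's function E_0(rho, q(.)) for a discrete memoryless channel.

    q : input distribution, q[y] for y in Y
    Q : transition probabilities, Q[y][yt] = P(output = yt | input = y)
    """
    total = 0.0
    for yt in range(len(Q[0])):
        # inner bracket: sum over inputs y of q(y) * Q(yt|y)^(1/(1+rho))
        inner = sum(q[y] * Q[y][yt] ** (1.0 / (1.0 + rho))
                    for y in range(len(q)))
        total += inner ** (1.0 + rho)
    return -math.log(total)

# Assumed example: binary symmetric channel, crossover 0.1, uniform input
eps = 0.1
Q = [[1 - eps, eps], [eps, 1 - eps]]
q = [0.5, 0.5]

print(E_0(0.0, q, Q))  # E_0(0, q) = 0: the bracket is the output distribution
print(E_0(1.0, q, Q))  # for this channel equals -ln(1/2 + sqrt(eps*(1-eps)))
```

At $ \rho = 0 $ the inner bracket reduces to the output distribution, so $ E _ {0} = 0 $; this is consistent with the exponents in (2) and (3) vanishing as $ R \rightarrow C $.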

Here, $ q( \cdot ) = \{ {q( y) } : {y \in Y } \} $ is an arbitrary probability distribution on $ Y $, $ q( y, \widetilde{y} ) = {\mathsf P} \{ \widetilde \eta _ {k} = \widetilde{y} \mid \eta _ {k} = y \} $, $ y \in Y $, $ \widetilde{y} \in \widetilde{Y} $, while $ \eta _ {k} $ and $ \widetilde \eta _ {k} $, $ k = 1 \dots N $, are the components of the input and output signals of the memoryless channel being considered. It is known that $ E _ {r} ( R) $ and $ E _ {sp} ( R) $, defined by the formulas (2) and (3), are positive, convex from below, monotone decreasing functions of $ R $ when $ 0 \leq R < C $, where $ C $ is the transmission rate of the channel (cf. Transmission rate of a channel), and that $ E _ {r} ( R) = E _ {sp} ( R) $ when $ R \geq R _ {cr} $; here $ R _ {cr} $ is a quantity determined by the transition matrix of the channel, called the critical rate of the given memoryless channel. Thus, for values $ R $, $ R _ {cr} \leq R < C $, the main terms of the upper and lower bounds (1) for $ P _ {e} ^ { \mathop{\rm opt} } $ coincide asymptotically; hence for this range of $ R $ the exact value of the reliability function of the channel, $ E( R) $, defined by

$$ E( R) = \limsup _ {N \rightarrow \infty } \frac{- \mathop{\rm ln} P _ {e} ^ { \mathop{\rm opt} } ( N, R) }{N} , $$

is known. For values of $ R $, $ 0 \leq R < R _ {cr} $, the exact value of $ E( R) $ for arbitrary memoryless channels is still unknown (1983), although the bounds (1) can still be improved. The exponential character of decrease of $ P _ {e} ^ { \mathop{\rm opt} } ( N, R) $ has been proved for a very broad class of channels.
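The coincidence of the two exponents above the critical rate can be checked numerically. The sketch below (illustrative, not from the article) approximates $ E _ {r} ( R) $ and $ E _ {sp} ( R) $ by a grid search over $ \rho $ for a binary symmetric channel with crossover probability $ 0.1 $, using the uniform input distribution, which is optimal for this channel by symmetry.

```python
import math

EPS = 0.1  # crossover probability of the illustrative binary symmetric channel

def E_0(rho, eps=EPS):
    # E_0(rho, q) for the BSC with uniform input, in nats:
    # E_0 = rho*ln 2 - (1+rho)*ln[(1-eps)^(1/(1+rho)) + eps^(1/(1+rho))]
    a = (1 - eps) ** (1 / (1 + rho)) + eps ** (1 / (1 + rho))
    return rho * math.log(2) - (1 + rho) * math.log(a)

def E_r(R, steps=4000):
    # random-coding exponent (2): maximum over 0 <= rho <= 1
    return max(E_0(k / steps) - (k / steps) * R for k in range(steps + 1))

def E_sp(R, rho_max=10.0, steps=10000):
    # sphere-packing exponent (3): supremum over rho > 0, truncated at rho_max
    return max(E_0(rho_max * k / steps) - (rho_max * k / steps) * R
               for k in range(1, steps + 1))

# For the BSC(0.1) the critical rate is about 0.13 nat and the capacity about
# 0.37 nat; above the critical rate the two exponents coincide, below it
# E_sp(R) is strictly larger than E_r(R).
for R in (0.05, 0.20):
    print(R, round(E_r(R), 4), round(E_sp(R), 4))
```

The truncation of the supremum at `rho_max` is an approximation; for this channel the maximizing $ \rho $ at the rates shown lies well inside the grid.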

References

[1] R. Gallager, "Information theory and reliable communication" , 1–2 , Wiley (1968–1972)
[2] A.A. Feinstein, "Foundations of information theory" , McGraw-Hill (1958)
How to Cite This Entry:
Erroneous decoding, probability of. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Erroneous_decoding,_probability_of&oldid=14369
This article was adapted from an original article by R.L. Dobrushin, V.V. Prelov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article