Information, exactness of reproducibility of
A measure of the quality of information transmission from an information source (cf. Information, source of) to a receiver (addressee) over a communication channel. The criteria relevant to the exactness of reproducibility of information in the theory of information transmission are usually treated statistically, by isolating the class $ W $ of admissible joint distributions for the pair $ ( \xi , \widetilde \xi ) $ in the set of all probability measures on the product $ ( \mathfrak X \times \widetilde{\mathfrak X} , S _ {\mathfrak X } \times S _ {\widetilde{\mathfrak X} } ) $, where $ ( \mathfrak X , S _ {\mathfrak X } ) $ is the measurable space of values of a communication $ \xi $ generated by the source, and $ ( \widetilde{\mathfrak X} , S _ {\widetilde{\mathfrak X} } ) $ is the measurable space of values of the communication $ \widetilde \xi $ received. Exactness of reproducibility of information is often defined in terms of a distortion measure $ \rho ( x, \widetilde{x} ) $, $ x \in \mathfrak X $, $ \widetilde{x} \in \widetilde{\mathfrak X} $, a non-negative measurable function of $ x $ and $ \widetilde{x} $. The class $ W $ of admissible joint distributions is then specified by the formula

$$ \tag{1} {\mathsf E} \rho ( \xi , \widetilde \xi ) \leq \epsilon , $$

for a given $ \epsilon > 0 $.
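
Spelled out (a restatement of (1) in set-builder form; the symbol $ p $ for a joint distribution is introduced here for illustration only), the class of admissible joint distributions is

$$ W = \left \{ p : {\mathsf E} _ {p} \rho ( \xi , \widetilde \xi ) \leq \epsilon \right \} , $$

where $ {\mathsf E} _ {p} $ denotes expectation when the pair $ ( \xi , \widetilde \xi ) $ has joint distribution $ p $ on $ ( \mathfrak X \times \widetilde{\mathfrak X} , S _ {\mathfrak X } \times S _ {\widetilde{\mathfrak X} } ) $.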

In particular, when $ ( \mathfrak X , S _ {\mathfrak X } ) = ( X ^ {n} , S _ {X ^ {n} } ) $ and $ ( \widetilde{\mathfrak X} , S _ {\widetilde{\mathfrak X} } ) = ( \widetilde{X} {} ^ {n} , S _ {\widetilde{X} {} ^ {n} } ) $, one often uses a componentwise condition for the exactness of reproducibility of information, namely

$$ \rho ( x ^ {n} , \widetilde{x} {} ^ {n} ) = \frac{1}{n} \sum _ {k = 1 } ^ { n } \rho _ {0} ( x _ {k} , \widetilde{x} _ {k} ), $$

where $ x ^ {n} = ( x _ {1} \dots x _ {n} ) \in X ^ {n} $, $ \widetilde{x} {} ^ {n} = ( \widetilde{x} _ {1} \dots \widetilde{x} _ {n} ) \in \widetilde{X} {} ^ {n} $, $ x _ {k} \in X $, $ \widetilde{x} _ {k} \in \widetilde{X} $, $ k = 1 \dots n $, and where $ \rho _ {0} ( x, \widetilde{x} ) $, $ x \in X $, $ \widetilde{x} \in \widetilde{X} $, is again a non-negative measurable function. In this case, instead of condition (1) one sometimes uses the following condition:

$$ \tag{2} {\mathsf E} \rho _ {0} ( \xi _ {k} , \widetilde \xi _ {k} ) \leq \epsilon \ \ \textrm{ for all } \ k = 1 \dots n. $$
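
Under the componentwise distortion measure just defined, the two conditions are directly comparable (the following one-line computation is a remark, not part of the original article). Writing $ \xi = ( \xi _ {1} \dots \xi _ {n} ) $ and $ \widetilde \xi = ( \widetilde \xi _ {1} \dots \widetilde \xi _ {n} ) $ for the transmitted and received communications, linearity of expectation gives

$$ {\mathsf E} \rho ( \xi , \widetilde \xi ) = \frac{1}{n} \sum _ {k = 1 } ^ { n } {\mathsf E} \rho _ {0} ( \xi _ {k} , \widetilde \xi _ {k} ) , $$

so (2) implies (1): if every summand is at most $ \epsilon $, then so is the average. The converse fails in general, since (1) constrains only the average and permits individual components to be reproduced poorly.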

In the case when $ X = \widetilde{X} $ and

$$ \rho _ {0} ( x, \widetilde{x} ) = \left \{ \begin{array}{ll} 0 & \textrm{ if } x = \widetilde{x} , \\ 1 & \textrm{ if } x \neq \widetilde{x} , \end{array} \right . $$

the conditions (1) and (2) turn into restrictions on the mean or maximal probability of erroneous decoding (cf. Erroneous decoding, probability of) of separate components of the communication, respectively. In the case of sources with continuous spaces (such as a Gaussian source), it is often assumed that $ \rho _ {0} ( x, \widetilde{x} ) = ( x - \widetilde{x} ) ^ {2} $.
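
To make the last statement explicit (an elementary computation, not in the original text): for the indicator distortion measure above, the expected per-component distortion is exactly an error probability,

$$ {\mathsf E} \rho _ {0} ( \xi _ {k} , \widetilde \xi _ {k} ) = {\mathsf P} \{ \xi _ {k} \neq \widetilde \xi _ {k} \} , $$

so condition (2) bounds the maximal probability of erroneous decoding of a component, $ \max _ {k} {\mathsf P} \{ \xi _ {k} \neq \widetilde \xi _ {k} \} \leq \epsilon $, while condition (1), applied with the componentwise measure, bounds the mean $ n ^ {-1} \sum _ {k = 1 } ^ {n} {\mathsf P} \{ \xi _ {k} \neq \widetilde \xi _ {k} \} $. With the quadratic choice $ \rho _ {0} ( x, \widetilde{x} ) = ( x - \widetilde{x} ) ^ {2} $, condition (1) becomes the familiar mean-square-error criterion.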

References

[1] R.G. Gallager, "Information theory and reliable communication", Wiley (1968)
[2] T. Berger, "Rate distortion theory", Prentice-Hall (1971)

Comments

References

[a1] I. Csiszár, J. Körner, "Information theory. Coding theorems for discrete memoryless systems", Akadémiai Kiadó (1981)
How to Cite This Entry:
Information, exactness of reproducibility of. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Information,_exactness_of_reproducibility_of&oldid=47350
This article was adapted from an original article by R.L. Dobrushin and V.V. Prelov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.