Optimal decoding



A decoding which maximizes the exactness of information reproduction (cf. Information, exactness of reproducibility of) for given sources of information, communication channels and coding methods. When the exactness of information reproduction is characterized by the mean probability of erroneous decoding (cf. Erroneous decoding, probability of), optimal decoding minimizes this probability. For example, suppose one of $ M $ messages, represented by the numbers $ 1, \dots, M $ and appearing with probabilities $ p_1, \dots, p_M $, respectively, is transmitted over a discrete channel with a finite number of input and output signals and with transition function defined by the matrix

$$ q( y, \widetilde{y} ) = {\mathsf P} \{ \widetilde \eta = \widetilde{y} \mid \eta = y \} ,\ \ y \in Y ,\ \widetilde{y} \in \widetilde{Y} , $$

where $ Y $ and $ \widetilde{Y} $ are the sets of values of the input signal $ \eta $ and the output signal $ \widetilde \eta $, respectively, while the coding is defined by a function $ f( \cdot ) $ with $ f( m) = y _ {m} $, $ m = 1, \dots, M $, where $ y _ {m} \in Y $, $ m = 1, \dots, M $, is a code, i.e. a choice of $ M $ possible input values. Then an optimal decoding is defined by a function $ g( \cdot ) $ such that $ g( \widetilde{y} ) = m ^ \prime $ for any $ \widetilde{y} \in \widetilde{Y} $, where $ m ^ \prime $ satisfies the inequality

$$ p _ {m ^ \prime } q( y _ {m ^ \prime } , \widetilde{y} ) \geq p _ {m} q( y _ {m} , \widetilde{y} ) $$

for all $ m \neq m ^ \prime $. In particular, if all messages are equally probable, i.e. $ p _ {1} = \dots = p _ {M} = 1/M $, then the optimal decoding just described coincides with decoding by the method of "maximum likelihood" (which is not optimal in general): the output signal $ \widetilde{y} $ is decoded as the message $ m ^ \prime $ for which

$$ q( y _ {m ^ \prime } , \widetilde{y} ) \geq q( y _ {m} , \widetilde{y} ) \ \textrm{ if } m \neq m ^ \prime . $$
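
The rule above is simply a maximization of $ p _ {m} q( y _ {m} , \widetilde{y} ) $ over the messages $ m $; with uniform probabilities the factor $ p _ {m} $ drops out, giving the maximum-likelihood reduction just noted. The following sketch is not part of the original article: the function name map_decode, the NumPy representation of the transition matrix, and the binary symmetric channel in the example are illustrative assumptions, not notation fixed by the article.

```python
# Minimal sketch of the optimal (MAP) decoding rule described above, for a
# discrete channel given by a finite transition matrix q(y, y~).
import numpy as np

def map_decode(y_tilde, priors, code, q):
    """Return the index m' maximizing p_m * q(y_m, y~) over m = 0, ..., M-1."""
    scores = priors * q[code, y_tilde]   # p_m * q(y_m, y~) for every message m
    return int(np.argmax(scores))        # ties may be broken arbitrarily

# Example: M = 2 messages over a binary symmetric channel with crossover
# probability 0.1, so q[y, y~] = 0.9 if y~ = y and 0.1 otherwise.
q = np.array([[0.9, 0.1],
              [0.1, 0.9]])
priors = np.array([0.5, 0.5])   # p_1 = p_2 = 1/2: MAP coincides with maximum likelihood
code = np.array([0, 1])         # y_1 = 0, y_2 = 1
print(map_decode(y_tilde=1, priors=priors, code=code, q=q))   # prints 1, i.e. message 2
```

Replacing the uniform prior by, say, priors = np.array([0.95, 0.05]) makes the same call return 0 (message 1), since $ 0.95 \cdot 0.1 > 0.05 \cdot 0.9 $, even though the likelihood alone favours message 2; this illustrates why maximum-likelihood decoding is not optimal for non-uniform message probabilities.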

References

[1] R.G. Gallager, "Information theory and reliable communication", Wiley (1968)
[2] J.M. Wozencraft, I.M. Jacobs, "Principles of communication engineering", Wiley (1965)

Comments

Cf. also Coding and decoding; Information theory.

How to Cite This Entry:
Optimal decoding. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Optimal_decoding&oldid=12294
This article was adapted from an original article by R.L. Dobrushin, V.V. Prelov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article