Erroneous decoding, probability of
Revision as of 19:37, 5 June 2020
One of the possible measures for characterizing the accuracy of information reproduction over a communication channel (see also Information, exactness of reproducibility of). For the transmission of information $ \xi $
from a source which has $ M $
different possible values $ 1 \dots M $
with probability distribution $ p _ {m} = {\mathsf P} \{ \xi = m \} $,
$ m = 1 \dots M $,
a certain communication channel is used. Then for fixed methods of coding and decoding (see also Information, transmission of), the probability of erroneous decoding $ P _ {e,m} $,
$ m = 1 \dots M $,
is defined by
$$ P _ {e,m} = {\mathsf P} \{ \widetilde \xi \neq m \mid \xi = m \} , $$
where $ \widetilde \xi $ is the decoded message at the channel output. The quantity
$$ P _ {e} = {\mathsf P} \{ \widetilde \xi \neq \xi \} = \sum _ {m=1} ^ { M } p _ {m} P _ {e,m} $$
is called the mean probability of erroneous decoding. The study of the optimal probability of erroneous decoding, $ P _ {e} ^ { \mathop{\rm opt} } $, defined by
$$ P _ {e} ^ { \mathop{\rm opt} } = \inf P _ {e} , $$
is of particular interest. The infimum is taken over all possible methods of coding and decoding. Since an exact expression for $ P _ {e} ^ { \mathop{\rm opt} } $ is difficult to obtain (optimal coding methods are usually unknown), one studies the asymptotic behaviour of $ P _ {e} ^ { \mathop{\rm opt} } $ as the length of the message in the channel increases. More precisely, the following situation is studied. Suppose a segment of length $ N $ of a communication channel with discrete time is used, and put $ R = ( \mathop{\rm ln} M)/N $. The asymptotic behaviour of $ P _ {e} ^ { \mathop{\rm opt} } $ is studied as $ N \rightarrow \infty $ with $ R = \textrm{const} $ (this means that the length of the message increases while its transmission rate remains constant). If the channel is a homogeneous memoryless channel with discrete time and the spaces $ Y $ and $ \widetilde{Y} $ of values of the components of the input and output signals are finite, then the following upper and lower bounds for $ P _ {e} ^ { \mathop{\rm opt} } = P _ {e} ^ { \mathop{\rm opt} } ( N, R) $ are known:
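As a small numerical sketch (the source distribution and the per-message error probabilities below are hypothetical, not from the article), the mean probability of erroneous decoding is simply the $ p _ {m} $-weighted average of the $ P _ {e,m} $, and the asymptotic regime holds $ R = ( \mathop{\rm ln} M)/N $ fixed while $ N $ grows:

```python
import math

# Hypothetical 3-message source and per-message decoding error probabilities.
p = [0.5, 0.3, 0.2]        # p_m = P(xi = m)
P_em = [0.01, 0.05, 0.10]  # P_{e,m} = P(decoded message != m | xi = m)

# Mean probability of erroneous decoding: P_e = sum_m p_m * P_{e,m}.
P_e = sum(pm * pe for pm, pe in zip(p, P_em))

# The asymptotic regime: hold the rate R = ln(M)/N constant while the
# block length N grows, so the number of messages M = exp(R*N) grows
# exponentially with N.
N, R = 100, 0.3
M = math.exp(R * N)
```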
$$ \tag{1 } \mathop{\rm exp} \{ - N[ E _ {sp} ( R - \alpha ( N)) + \beta ( N)] \} \leq P _ {e} ^ { \mathop{\rm opt} } ( N, R) \leq \mathop{\rm exp} \{ - NE _ {r} ( R) \} , $$
where $ \alpha ( N) $ and $ \beta ( N) $ tend to zero with increasing $ N $, while the functions $ E _ {r} ( R) $ and $ E _ {sp} ( R) $ are defined as follows:
$$ \tag{2 } E _ {r} ( R) = \max _ {0 \leq \rho \leq 1 } \max _ {q( \cdot ) } [ E _ {0} ( \rho , q( \cdot )) - \rho R], $$
$$ \tag{3 } E _ {sp} ( R) = \sup _ {\rho > 0 } \max _ {q( \cdot ) } [ E _ {0} ( \rho , q( \cdot )) - \rho R], $$
where
$$ E _ {0} ( \rho , q( \cdot )) = \ - \mathop{\rm ln} \sum _ {\widetilde{y} \in \widetilde{Y} } \left [ \sum _ {y \in Y } q( y) q( y, \widetilde{y} ) ^ {1/( 1+ \rho ) } \right ] ^ {1+ \rho } . $$
Here, $ q( \cdot ) = \{ {q( y) } : {y \in Y } \} $ is an arbitrary probability distribution on $ Y $, $ q( y, \widetilde{y} ) = {\mathsf P} \{ \widetilde \eta _ {k} = \widetilde{y} \mid \eta _ {k} = y \} $, $ y \in Y $, $ \widetilde{y} \in \widetilde{Y} $, while $ \eta _ {k} $ and $ \widetilde \eta _ {k} $, $ k = 1 \dots N $, are the components of the input and output signals of the memoryless channel under consideration.

It is known that $ E _ {r} ( R) $ and $ E _ {sp} ( R) $, defined by the formulas (2) and (3), are positive, convex from below, monotone decreasing functions of $ R $ for $ 0 \leq R < C $, where $ C $ is the transmission rate of the channel (cf. Transmission rate of a channel), and that $ E _ {r} ( R) = E _ {sp} ( R) $ for $ R \geq R _ {cr} $; here $ R _ {cr} $ is a quantity determined by the transition matrix of the channel, called the critical rate of the given memoryless channel. Thus, for $ R _ {cr} \leq R < C $, the main terms of the upper and lower bounds (1) for $ P _ {e} ^ { \mathop{\rm opt} } $ coincide asymptotically; hence, for this range of $ R $, the exact value of the reliability function of the channel, $ E( R) $, defined by
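As an illustrative numerical sketch (the channel below is hypothetical, not from the article), $ E _ {0} $, $ E _ {r} $ and $ E _ {sp} $ can be evaluated for a binary symmetric channel; by symmetry the uniform input distribution attains the maximum over $ q( \cdot ) $ there, so (2) and (3) reduce to one-dimensional searches over $ \rho $:

```python
import math

def E0(rho, q, Q):
    """Gallager's function E_0(rho, q(.)) for a finite memoryless channel.

    q : input distribution over Y (a list of probabilities)
    Q : transition probabilities, Q[y][yt] = P(output = yt | input = y)
    """
    total = 0.0
    for yt in range(len(Q[0])):
        inner = sum(q[y] * Q[y][yt] ** (1.0 / (1.0 + rho)) for y in range(len(q)))
        total += inner ** (1.0 + rho)
    return -math.log(total)

# Hypothetical channel: binary symmetric with crossover probability 0.1.
# By symmetry the uniform input distribution maximizes over q(.) here.
eps = 0.1
Q = [[1 - eps, eps], [eps, 1 - eps]]
q = [0.5, 0.5]

def Er(R, steps=2000):
    # Random-coding exponent (2): maximum over 0 <= rho <= 1 on a grid.
    return max(E0(i / steps, q, Q) - (i / steps) * R for i in range(steps + 1))

def Esp(R, rho_max=4.0, steps=8000):
    # Sphere-packing exponent (3): supremum over rho > 0, truncated to
    # a grid on (0, rho_max] (adequate when the supremum is interior).
    return max(E0(rho_max * i / steps, q, Q) - (rho_max * i / steps) * R
               for i in range(1, steps + 1))
```

For this channel the capacity is roughly $ 0.37 $ nat and the critical rate roughly $ 0.13 $ nat per channel use, so at $ R = 0.3 $ the computed values of `Er` and `Esp` coincide, in line with the equality $ E _ {r} ( R) = E _ {sp} ( R) $ for $ R \geq R _ {cr} $ stated in the text.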
$$ E( R) = \limsup _ {N \rightarrow \infty } \ \frac{- \mathop{\rm ln} P _ {e} ^ { \mathop{\rm opt} } ( N, R) }{N} , $$
is known. For values of $ R $, $ 0 \leq R < R _ {cr} $, the exact value of $ E( R) $ for arbitrary memoryless channels is still unknown (1983), although the bounds (1) can be improved. That $ P _ {e} ^ { \mathop{\rm opt} } ( N, R) $ decreases exponentially in $ N $ has been proved for a very broad class of channels.
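The claim that $ E( R) $ is known for $ R _ {cr} \leq R < C $ follows directly from the bounds (1); a short sketch of the reasoning (assuming continuity of $ E _ {sp} $, which holds on this range):

```latex
% Taking -\ln(\cdot)/N across the two-sided bound (1) reverses the inequalities:
E_r(R) \;\leq\; -\frac{\ln P_e^{\mathrm{opt}}(N,R)}{N} \;\leq\; E_{sp}(R - \alpha(N)) + \beta(N).
% Letting N \to \infty, with \alpha(N), \beta(N) \to 0 and E_{sp} continuous:
E_r(R) \;\leq\; E(R) \;\leq\; E_{sp}(R).
% For R_{cr} \leq R < C the two exponents coincide, hence
E(R) = E_r(R) = E_{sp}(R).
```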
References
[1] R. Gallager, "Information theory and reliable communication", 1–2, Wiley (1968–1972)
[2] A.A. Feinstein, "Foundations of information theory", McGraw-Hill (1958)
Erroneous decoding, probability of. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Erroneous_decoding,_probability_of&oldid=14369