Transmission rate of a channel
Latest revision as of 08:26, 6 June 2020


An information-theoretic measure of the ability to transmit information over a communication channel. Let $ \eta $ and $ \widetilde \eta $ be random variables connected in a communication channel $ ( Q, V) $. Then the transmission rate $ C $ of this channel is defined by the equation

$$ \tag{1 } C = \sup I ( \eta , \widetilde \eta ), $$

where $ I ( \eta , \widetilde \eta ) $ is the amount of information (cf. Information, amount of) in $ \widetilde \eta $ relative to $ \eta $, and the supremum is taken over all pairs of random variables $ ( \eta , \widetilde \eta ) $ connected in the channel $ ( Q, V) $. In the case when the input and output signals $ \eta = \{ {\eta ( t) } : {- \infty < t < \infty } \} $ and $ \widetilde \eta = \{ {\widetilde \eta ( t) } : {- \infty < t < \infty } \} $ are random processes in continuous or discrete time, the transmission rate of the channel is usually understood to mean the mean transmission rate of the channel, taken in unit time or over one symbol of the transmitted signal, that is, by definition one sets
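For a discrete memoryless channel the supremum in (1) can be evaluated directly by maximizing the amount of information over input distributions. A minimal numerical sketch for the binary symmetric channel (an illustration, not part of the original article; the grid search stands in for the supremum):

```python
import math

def mutual_information(p_input, crossover):
    """I(eta, eta~) in bits for a binary symmetric channel: the input eta
    is 1 with probability p_input, and the channel flips each transmitted
    symbol independently with probability `crossover`."""
    def h(x):
        # binary entropy in bits, with h(0) = h(1) = 0
        return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)
    # distribution of the output symbol eta~
    q = p_input * (1 - crossover) + (1 - p_input) * crossover
    # I(eta, eta~) = H(eta~) - H(eta~ | eta)
    return h(q) - h(crossover)

# Transmission rate (1): supremum of I over admissible input distributions,
# approximated on a grid of input probabilities.
crossover = 0.1
C = max(mutual_information(p / 1000, crossover) for p in range(1001))
# the supremum is attained at the uniform input, giving C = 1 - h(crossover)
```

The grid contains the uniform input $ p = 1/2 $, at which the maximum is attained, so the search recovers the classical value $ C = 1 - h( \textrm{crossover} ) $ exactly.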

$$ \tag{2 } C = \lim\limits _ {T - t \rightarrow \infty } { \frac{1}{T - t } } \ \sup I ( \eta _ {t} ^ {T} , \widetilde \eta {} _ {t} ^ {T} ), $$

if the limit exists; here the supremum is taken over all possible pairs of random variables $ \eta _ {t} ^ {T} = \{ {\eta ( s) } : {t < s \leq T } \} $, $ \widetilde \eta {} _ {t} ^ {T} = \{ {\widetilde \eta ( s) } : {t < s \leq T } \} $ connected in the corresponding segment of the given channel. The existence of the limit (2) has been proved for a wide class of channels, for example for a homogeneous channel with a finite memory and non-vanishing transition probabilities.

It is known that for a sufficiently wide class (for example, for the channels with finite memory mentioned above) the following holds:

$$ \tag{3 } C = \sup \ \left ( \lim\limits _ {T - t \rightarrow \infty } \ { \frac{1}{T - t } } I ( \eta _ {t} ^ {T} , \widetilde \eta {} _ {t} ^ {T} ) \right ) , $$

where the supremum is taken over all pairs of stationarily-related random processes $ \eta ( t) $, $ \widetilde \eta ( t) $, $ - \infty < t < \infty $, such that for any $ - \infty < t < T < \infty $ the random variables $ \eta _ {t} ^ {T} = \{ {\eta ( s) } : {t < s \leq T } \} $ and $ \widetilde \eta {} _ {t} ^ {T} = \{ {\widetilde \eta ( s) } : {t < s \leq T } \} $ are connected in the corresponding segment of the channel under consideration. Thus, (3) shows that the transmission rate of the channel is the same as the maximum possible transmission rate of information (cf. Information, transmission rate of) along this channel.

An explicit calculation of transmission rates is therefore of considerable interest. For example, consider a channel $ ( Q , V) $ whose input and output signals take values in the $ n $-dimensional Euclidean space $ \mathbf R ^ {n} $, with transition function $ Q ( y, \cdot ) $ defined by a density $ q ( y, \widetilde{y} ) $ (with respect to Lebesgue measure), $ y, \widetilde{y} \in \mathbf R ^ {n} $, and with the constraint $ V $ consisting of boundedness of the mean square power of the input signal, $ {\mathsf E} | \eta | ^ {2} \leq S $ (where $ | \eta | $ is the length of the vector $ \eta $ in $ \mathbf R ^ {n} $ and $ S > 0 $ is a fixed constant). For such channels the following results are known (see ).

1) Let $ q ( y, \widetilde{y} ) = q ( \widetilde{y} - y) $, that is, one considers a channel with additive noise such that the output signal $ \widetilde \eta $ is equal to the sum $ \widetilde \eta = \eta + \zeta $ of the input signal $ \eta $ and a noise $ \zeta $ independent of it, and let $ {\mathsf E} | \zeta | ^ {2} = N $. Then as $ N \rightarrow 0 $ (under weak additional conditions) the following asymptotic formula holds:

$$ C = - h ( \zeta ) + \frac{n}{2} \mathop{\rm log} \frac{2 \pi e S }{n} + \frac{n \mathop{\rm log} e }{2 S } N + o ( N), $$

where $ h ( \zeta ) $ is the differential entropy of $ \zeta $ and $ o ( N)/N \rightarrow 0 $ as $ N \rightarrow 0 $. This formula corresponds to the case of little noise.
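In the Gaussian case the small-noise behaviour can be checked numerically against the exact capacity $ \frac{n}{2} \mathop{\rm log} ( 1 + S/N) $, since there $ h ( \zeta ) $ has a closed form. A sketch for $ n = 1 $, logarithms to base 2 (an illustration, not part of the original article):

```python
import math

def h_gauss(N):
    """Differential entropy (bits) of scalar Gaussian noise of power N."""
    return 0.5 * math.log2(2 * math.pi * math.e * N)

def capacity_exact(S, N):
    """Exact capacity of the scalar Gaussian channel with power constraint S."""
    return 0.5 * math.log2(1 + S / N)

def capacity_asymptotic(S, N):
    """Small-noise expansion: -h(zeta) + (1/2) log(2 pi e S) + (log e / 2S) N."""
    return (-h_gauss(N)
            + 0.5 * math.log2(2 * math.pi * math.e * S)
            + (math.log2(math.e) / (2 * S)) * N)

S = 1.0
errors = [abs(capacity_exact(S, N) - capacity_asymptotic(S, N))
          for N in (1e-2, 1e-3)]
# the remainder is o(N): shrinking N by 10 shrinks the error much faster than 10
```

Here the first two terms of the expansion collapse to $ \frac{1}{2} \mathop{\rm log} ( S/N) $, and the linear term matches the expansion of $ \frac{1}{2} \mathop{\rm log} ( 1 + N/S ) $, so the residual error is of order $ N ^ {2} $.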

2) Let $ q ( y, \widetilde{y} ) $ be arbitrary but let $ S \rightarrow 0 $. Then

$$ C = \left ( \sup _ { y } \frac{\phi ( y) }{| y | } \right ) S + o ( S), $$

where

$$ \phi ( y) = \int\limits _ {\mathbf R ^ {n} } q ( y, \widetilde{y} ) \mathop{\rm log} \frac{q ( y, \widetilde{y} ) }{q ( 0, \widetilde{y} ) } \, d \widetilde{y} . $$
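Read this way, $ \phi ( y) $ is the Kullback–Leibler divergence of $ q ( y, \cdot ) $ from $ q ( 0, \cdot ) $ (an assumption about the intended formula). For scalar Gaussian noise of power $ N $ it has the closed form $ y ^ {2} / 2N \cdot \mathop{\rm log} e $ bits, which a crude numerical integration reproduces (an illustration, not part of the original article):

```python
import math

def phi(y, N, step=1e-3, lim=12.0):
    """phi(y) = integral of q(y, t) * log2( q(y, t) / q(0, t) ) dt for
    Gaussian noise, q(y, t) = density of N(y, N) at t.  Crude Riemann sum."""
    def q(mean, t):
        return math.exp(-(t - mean) ** 2 / (2 * N)) / math.sqrt(2 * math.pi * N)
    total, t = 0.0, -lim
    while t < lim:
        qt = q(y, t)
        if qt > 0.0:
            total += qt * math.log2(qt / q(0.0, t)) * step
        t += step
    return total

# closed form for Gaussian noise: KL( N(y, N) || N(0, N) ) = y^2 / (2N) * log2(e)
y, N = 1.5, 0.5
closed = y * y / (2 * N) * math.log2(math.e)
```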

See also , – cited under Communication channel.

References

[1a] V.V. Prelov, "The asymptotic channel capacity for a continuous channel with small additive noise" Problems Inform. Transmission , 5 : 2 (1969) pp. 23–27 Probl. Peredachi Inform. , 5 : 2 (1969) pp. 31–36
[1b] V.V. Prelov, "Asymptotic behavior of the capacity of a continuous channel with large nonadditive noise" Problems Inform. Transmission , 8 : 4 (1972) pp. 285–289 Probl. Peredachi Inform. , 8 : 4 (1972) pp. 22–27

Comments

References

[a1] P. Billingsley, "Ergodic theory and information" , Wiley (1965)
[a2] R.B. Ash, "Information theory" , Interscience (1965)
[a3] A.M. Yaglom, I.M. Yaglom, "Probabilité et information" , Dunod (1959) (Translated from Russian)
How to Cite This Entry:
Transmission rate of a channel. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Transmission_rate_of_a_channel&oldid=17437
This article was adapted from an original article by R.L. Dobrushin, V.V. Prelov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article