{{TEX|done}}

''code that corrects errors''

A set of messages destined for transmission over a [[Communication channel|communication channel]] with noise, with the property that the "neighbourhood" of each message (that is, the set of most likely distorted versions of this message) does not intersect the neighbourhoods of other messages. This property of an error-correcting code enables one to correct the errors (that is, to recover the transmitted message) in those distorted messages (received at the output of the channel) that belong to the neighbourhood of the message. Elements of an error-correcting code (codewords) are employed in the encoding of sequences of information symbols presented by the source of information (cf. [[Information, source of|Information, source of]]). The encoding consists in representing the information sequence in a special form and in introducing additional information ([[Redundancy|redundancy]]) into it. This redundancy is usually introduced by appending extra symbols to the message, by some means or other. For example, the sequence of symbols may be divided into blocks of a fixed length $k$, and, independently of one another, the blocks may be replaced by different blocks of greater length $n$, which are elements of a so-called error-correcting block code. Other methods of introducing redundancy, and the error-correcting codes related to them, are known [[#References|[1]]].
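
As a toy illustration of block encoding with redundancy (an example added here, not from the article): take $k = 1$ and $n = 3$, so that every information bit is replaced by three copies of itself; a majority vote at the receiver then corrects any single error per block.

<pre>
def encode(bits, n=3):
    # Replace each block of length k = 1 by a block of length n (repetition).
    return [b for bit in bits for b in [bit] * n]

def decode(received, n=3):
    # A majority vote inside each block of length n corrects (n - 1) // 2 errors.
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return [int(sum(block) > n // 2) for block in blocks]

sent = encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[1] ^= 1               # one channel error in the first block
print(decode(sent))        # [1, 0, 1]: the message is recovered
</pre>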

The codewords of an error-correcting block code are taken from a certain set of $n$-dimensional vectors $L^n$ endowed with a metric $\lambda$, and the neighbourhood of a codeword is a ball with centre at the codeword. The radius of this ball defines the correcting capacity of the block code. The metric $\lambda$ depends on the nature of the errors that the code is intended to correct. The subsequent account deals only with block codes, these being the most prevalent.

In order that the maximum amount of information can be transmitted over the channel, it is necessary, for a prescribed correcting capacity, to use codes with the maximum number of elements (codewords). The construction of such codes is one of the basic problems in the theory of error-correcting codes. Sufficient progress has been made with this problem only for certain finite sets $L^n$, which are discussed below. Meanwhile, codes on certain infinite spaces, for example the sphere in the Euclidean space $\mathbf{R}^n$, are of interest both in theory and in applications.

In the practical use of error-correcting codes there arise the problems of mapping the information to be transmitted into the set of elements of the error-correcting code, and of determining the transmitted element $x$ of the code from the received element $x^\prime$. The first problem is called the problem of encoding, the second the problem of decoding. The complexity of encoding and decoding is determined to a large extent by the properties of the error-correcting code being used. As a result, this leads to the study of a relatively narrow class of codes, such as the binary linear codes considered below.

The most widely investigated such codes are the $q$-ary block codes in the [[Hamming metric]]. This is because they have found numerous applications, while the methods for constructing them are related to well-known mathematical structures. The words of these codes are certain elements of the set $B_q^n$ consisting of all vectors of length $n$ with coordinates belonging to a set of $q$ elements. The Hamming distance $d(x, y)$ between two vectors $x, y$ in $B_q^n$ is the number of positions $i$ such that $x_i \neq y_i$. The neighbourhood $U_t(x)$ of radius $t$ (where $t$ is an integer) of the vector $x$ consists of all vectors in $B_q^n$ differing from $x$ in at most $t$ coordinates, that is, $U_t(x)$ is the ball in the metric $d$ of radius $t$ with centre at $x$. In particular, $U_1(x)$ consists of $(q - 1)n + 1$ vectors. For an arbitrary set $K \subset B_q^n$ the function $d(K) = \min_{x, y \in K, x \neq y} d(x, y)$ is called the minimum distance of the $q$-ary code $K$. A code $K$ is a $t$-error-correcting code if $d(K) \geq 2t + 1$. When the latter inequality holds, each neighbourhood $U_t(x)$, $x \in K$, is disjoint from $U_t(y)$ for every other vector $y$ in $K$.
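
These definitions translate directly into code. The following minimal sketch (added for illustration; the function names and the toy code are not from the article) computes the Hamming distance, the minimum distance $d(K)$ and the number of errors corrected:

<pre>
from itertools import combinations

def hamming_distance(x, y):
    # d(x, y): the number of positions i with x_i != y_i.
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    # d(K): the minimum over all pairs of distinct codewords.
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

def errors_corrected(code):
    # The largest t with d(K) >= 2t + 1, i.e. t = (d(K) - 1) // 2.
    return (minimum_distance(code) - 1) // 2

# A toy binary code of length 5 with minimum distance 3.
K = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
print(minimum_distance(K), errors_corrected(K))   # 3 1: corrects one error
</pre>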

Significant progress in the study of $q$-ary codes has been made in the case where $q$ is a power of a prime number. In this case the set of coordinates is taken to be the finite field $\mathop{\rm GF}(q)$ of $q$ elements, and the algebraic properties of this structure are used. In what follows it is supposed that the elements of $\mathop{\rm GF}(q)$ are the coordinates of the elements of the set $B_q^n$. The set $B_q^n$ is then a linear space over $\mathop{\rm GF}(q)$. If the vectors of the code $K$ form a linear subspace of $B_q^n$, then the code is said to be linear. A linear code $K$ can be specified either by a basis of it or by a basis of the linear space dual to $K$. The latter method is the most prevalent one. A matrix $A$ whose rows form a basis for the linear space dual to $K$ is called a parity-check matrix of $K$: for all $x \in K$, $x A^T = 0$, where $A^T$ is the transpose of $A$. In what follows, $n$ denotes the code length, $k$ the dimension of the linear code and $d$ the minimum distance. A code in $B_2^n$ is called a binary code.
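
For binary linear codes the defining condition $x A^T = 0$ is arithmetic modulo 2, so membership in a code can be checked directly from a parity-check matrix. A small sketch (assuming NumPy is available; the matrix shown is the standard parity-check matrix of the binary $[7, 4]$ Hamming code, used here only as a concrete example):

<pre>
import numpy as np

# Parity-check matrix of the binary [7,4] Hamming code: its columns
# are all the non-zero binary vectors of length 3.
A = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def is_codeword(x):
    # x belongs to the code K iff x A^T = 0 over GF(2).
    return not np.any(A @ x % 2)

print(is_codeword(np.array([1, 1, 1, 1, 1, 1, 1])))   # True: the all-ones word
print(is_codeword(np.array([1, 0, 0, 0, 0, 0, 0])))   # False: a weight-1 word
</pre>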

In order to estimate the quality of specific codes one studies the behaviour of the function $A(n, d)$, the maximum number of vectors of a code of length $n$ with minimum distance $d$. The function $A(n, d)$ has been comparatively well studied for large $d$, $2d \geq n$, and for small $d$, $d = \textrm{const}$, $n \rightarrow \infty$. In the first case $A(n, d)$ is at most $2n$ when $2d = n$, and at most $2d/(2d - n)$ when $2d - n > 0$; in the second case it has order $n^{-t} 2^n$ when $d = 2t + 1$. If $d = 3$, $n = 2^l - 1$, then $A(n, 3) = 2^n/(n + 1)$.
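
The last equality can be checked against the sphere-packing (Hamming) bound $A(n, 3) \leq 2^n/(n + 1)$, which for $n = 2^l - 1$ is attained by the binary Hamming codes introduced below. A small numerical sketch (illustrative only):

<pre>
from math import comb

def sphere_packing_bound(n, t):
    # |K| * |U_t(x)| <= 2^n, and |U_t(x)| = sum of C(n, i) for i = 0, ..., t.
    ball = sum(comb(n, i) for i in range(t + 1))
    return 2**n // ball

for l in (3, 4, 5):
    n = 2**l - 1
    print(n, sphere_packing_bound(n, 1))   # 7 16, 15 2048, 31 67108864
</pre>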

A linear code for which this last equality is attained is called a binary Hamming code. A binary Hamming code (which corrects one error) has the following property: the balls of radius $t = 1$ with centres at the points of the code are disjoint, but at the same time they fill out the whole of $B_2^n$. Such codes are called perfect. It is known that, apart from the Hamming codes and codes with the same parameters, there is only one non-trivial binary perfect code.
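
Perfection is easy to verify by brute force for small parameters: the $16$ codewords of the $[7, 4]$ Hamming code have pairwise disjoint balls of radius $1$ which together exhaust all $2^7$ vectors. A self-contained sketch (illustrative only):

<pre>
from itertools import product

# Rows of a parity-check matrix of the [7,4] Hamming code.
A = [(0, 0, 0, 1, 1, 1, 1),
     (0, 1, 1, 0, 0, 1, 1),
     (1, 0, 1, 0, 1, 0, 1)]

# The code: all x of length 7 with x A^T = 0 over GF(2).
code = [x for x in product((0, 1), repeat=7)
        if all(sum(a * b for a, b in zip(row, x)) % 2 == 0 for row in A)]

covered = set()
for c in code:                       # the ball U_1(c) has 7 + 1 = 8 elements
    covered.add(c)
    for i in range(7):
        covered.add(c[:i] + (1 - c[i],) + c[i + 1:])

# 16 disjoint balls of size 8 cover all 2^7 = 128 vectors.
print(len(code), len(covered))       # 16 128
</pre>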

In the case $n \rightarrow \infty$, $d/n \rightarrow \delta$, $0 < \delta < 1/2$, the function

$$
\overline{\lim\limits_{n \rightarrow \infty}}\;
\frac{1}{n} \mathop{\rm log}_2 A(n, d)  = R(\delta)
$$

(the logarithmic asymptotic of $A(n, d)$) is studied. It is called the information rate for a maximal code with relative distance $\delta$. Substantially distinct upper and lower bounds are known for $R(\delta)$. The lower bound (the Varshamov–Gilbert bound) has the form
  
$$ \tag{* }
R(\delta)  \geq  1 - H(\delta),
$$

where

$$
H(\delta)  =  \delta \mathop{\rm log}_2 \frac{1}{\delta} + (1 - \delta) \mathop{\rm log}_2 \frac{1}{1 - \delta} ,
$$
  
 
and guarantees the existence of codes with the above parameters. The proof of (*) is not constructive; for other bounds see [[#References|[6]]], [[#References|[7]]].
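
For concreteness, the entropy function and the lower bound (*) are easy to evaluate numerically. A small sketch (added for illustration):

<pre>
from math import log2

def binary_entropy(delta):
    # H(delta) = delta*log2(1/delta) + (1 - delta)*log2(1/(1 - delta)).
    return delta * log2(1 / delta) + (1 - delta) * log2(1 / (1 - delta))

# Varshamov-Gilbert lower bound on the information rate R(delta).
for delta in (0.05, 0.11, 0.25):
    print(delta, round(1 - binary_entropy(delta), 4))
# delta = 0.11 gives R(delta) >= 0.5001: about half the symbols can still
# carry information while allowing relative distance 0.11.
</pre>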
  
Next, constructive methods (that is, methods requiring a relatively small number of operations for their realization) for making error-correcting codes will be considered. The most important constructive codes are Reed–Solomon codes (RS-codes), Bose–Choudhuri–Hocquenghem codes (BCH-codes) and Reed–Muller codes (RM-codes). All these codes are linear. The starting point for the construction of the first two is the matrix $A_r$ with elements in $\mathop{\rm GF}(q)$:

$$
A_r  =  \left \|
\begin{array}{ccccc}
1  &\dots  &\alpha^{j}  &\dots  &\alpha^{q-2}  \\
1  &\dots  &\alpha^{2j}  &\dots  &\alpha^{2(q-2)}  \\
\dots  &\dots  &\dots  &\dots  &\dots  \\
1  &\dots  &\alpha^{(r-1)j}  &\dots  &\alpha^{(r-1)(q-2)}  \\
\end{array}
\right \| ,
$$
  
where $\alpha$ is a primitive root of $\mathop{\rm GF}(q)$. The matrix $A_r$ is the parity-check matrix of the RS-code $C_r$ over $\mathop{\rm GF}(q)$ with the following parameters: $n = q - 1$, $k = q - r$, $d = r$. The distance of $C_r$ is maximal among all linear codes of length $q - 1$ and dimension $q - r$. The binary BCH-code $H_r$ consists of all vectors of the RS-code $C_r$ over $\mathop{\rm GF}(2^l)$ that belong to $B_2^n$, that is, $H_r = C_r \cap B_2^n$. The BCH-code $H_r$ with $r = 2t + 1$ has the following parameters: $n = 2^l - 1$, $k \geq n - lt$, $d \geq 2t + 1$. The Hamming code, mentioned earlier, is the same as the BCH-code $H_3$. If $t < 2^{[(l+1)/2]}$ (here $[x]$ denotes the integer part of $x$), then the dimension of $H_{2t+1}$ is equal to $n - lt$. A BCH-code is cyclic, that is, if a vector $x$ belongs to it, then so do all its cyclic shifts. As $n \rightarrow \infty$, $r = \textrm{const}$, the number of vectors of a BCH-code is of the same order as the cardinality of the best code (best with respect to $A(n, r)$).
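
For prime $q$ the arithmetic of $\mathop{\rm GF}(q)$ is simply arithmetic modulo $q$, so the construction of $C_r$ can be shown in a few lines. A minimal sketch (assumptions: $q = 7$, $\alpha = 3$, which is a primitive root modulo $7$, and $r = 3$; illustrative only):

<pre>
import numpy as np

q, alpha, r = 7, 3, 3      # GF(7); 3 is a primitive root mod 7
n = q - 1                  # code length: n = q - 1 = 6

# Parity-check matrix A_r: rows i = 1, ..., r - 1, columns j = 0, ..., q - 2,
# with entries alpha^(i j) computed in GF(q).
A = np.array([[pow(alpha, i * j, q) for j in range(n)] for i in range(1, r)])

def is_codeword(c):
    # c belongs to the RS-code C_r iff c A^T = 0 over GF(q).
    return not np.any(A @ c % q)

# Codewords of C_r are the evaluations of polynomials f of degree < k = q - r
# at the points alpha^0, alpha^1, ..., alpha^(n-1).
f = [2, 5, 0, 1]           # coefficients f_0, ..., f_3 of such a polynomial
c = np.array([sum(fm * pow(alpha, j * m, q) for m, fm in enumerate(f)) % q
              for j in range(n)])
print(is_codeword(c))      # True; the code has n = 6, k = 4, d = r = 3
</pre>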
  
The $r$-th order binary RM-code is defined as the set of binary vectors of the form $(f(\alpha_1), \dots, f(\alpha_n))$, where $n = 2^l$, $\alpha_1, \dots, \alpha_n$ are all possible binary vectors of length $l$, and $f$ runs through the set of all functions of the [[Algebra of logic|algebra of logic]] that are represented by a polynomial over $\mathop{\rm GF}(2)$ in $l$ binary variables of degree not exceeding $r$. An RM-code has the following parameters: $n = 2^l$, $k = \sum_{j=0}^{r} C_l^j$, $d = 2^{l-r}$.
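
For the first-order code with $l = 3$ the parameters $n = 8$, $k = 1 + 3 = 4$, $d = 2^{3-1} = 4$ can be confirmed by direct enumeration of all polynomials of degree at most $1$. A small sketch (added for illustration):

<pre>
from itertools import product, combinations

l = 3
points = list(product((0, 1), repeat=l))    # alpha_1, ..., alpha_n with n = 2^l

# All polynomials over GF(2) of degree <= 1: f = a0 + a1 x1 + ... + al xl.
code = set()
for a0, *rest in product((0, 1), repeat=l + 1):
    word = tuple((a0 + sum(a * x for a, x in zip(rest, p))) % 2 for p in points)
    code.add(word)

dmin = min(sum(u != v for u, v in zip(x, y))
           for x, y in combinations(code, 2))
print(len(code), dmin)                      # 16 4: so k = 4 and d = 4
</pre>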
  
The information rate $R(K)$ of a binary code $K$ of length $n$ with $m$ vectors is defined as

$$
R(K)  =  \frac{\mathop{\rm log}_2 m}{n} .
$$
  
If $K$ is a linear code of dimension $k$, then $R(K) = k/n$. The information rates of the constructive codes listed above tend to zero as $n \rightarrow \infty$, $d/n \rightarrow \delta$, $\delta > 0$. Constructive codes with positive information rate as $n \rightarrow \infty$, $d/n \rightarrow \delta$, $1/2 > \delta > 0$, are known, but their information rate is less than that of the codes whose existence is established by the bound (*).
  
In the practical application of a $t$-error-correcting code for the correction of errors on a communication channel, a device (a decoder) is required that determines the transmitted codeword from the distorted word $x^\prime \in U_t(x)$. For this it is preferable to use error-correcting codes for which the complexity of the decoder is not too large. By the complexity of a decoder of a binary code $K$ with distance $2t + 1$ one means, for example, the minimum number of functional elements necessary for the realization of the Boolean operator $F(x^\prime) = x$, $x^\prime \in U_t(x)$, $x \in K$. The constructive codes considered above have decoders of small complexity. Moreover, other error-correcting codes are known with information rate not tending to 0 as $n \rightarrow \infty$, $d/n \rightarrow \delta$, $\delta > 0$, and with a decoder of small complexity. Examples of such codes are cascade codes and codes with low-density parity checks. A cascade code consists, in the simplest case, of the iteration of an RS-code in the field $\mathop{\rm GF}(2^l)$ and a binary code of length $n_1$ and dimension $l$ with distance $d_1$. A one-to-one correspondence is established, by means of some linear mapping, between the elements of $\mathop{\rm GF}(2^l)$ and the vectors of the binary code. The symbols of the RS-codewords are then replaced by the corresponding vectors of the binary code. As a result, a binary linear cascade code is obtained with parameters $n = n_1(2^l - 1)$, $k = l(2^l - r)$, $d \geq d_1 r$. The best results are obtained by the use of distinct binary codes for the replacement of distinct symbols of the RS-code. By this means, codes of length $n$ can be obtained that correct a number of errors equal to a fixed fraction of $n$ using a decoder with complexity of order $n \mathop{\rm log} n$. An ensemble of binary codes with low-density parity checks is defined by an ensemble of parity-check matrices $\{A\}$ consisting of binary matrices of a certain type, of dimension $r \times n$, that contain $l$ and $h$ units in each column and row respectively, $4 < l < h$. For fixed $l$ and $h$ and $n \rightarrow \infty$, a typical code of the ensemble, of length $n$, can correct a number of errors equal to a fixed fraction of $n$ by means of a decoder with complexity of order $n \mathop{\rm log} n$. The information rate both of cascade codes and of codes with low-density checks lies below the bound in (*).
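
The concatenation step in a cascade code (replacing each $2^l$-ary symbol by a binary inner codeword) is mechanical. A minimal sketch, assuming the inner code is a binary $[7, 4]$ Hamming code given by the generator matrix below (so $l = 4$, $n_1 = 7$, $d_1 = 3$), and taking the outer codeword simply as a given list of $4$-bit symbols:

<pre>
import numpy as np

# Generator matrix of a binary [7,4] Hamming code (the inner code).
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])

def inner_encode(symbol_bits):
    # The linear one-to-one mapping GF(2)^4 -> inner codewords of length 7.
    return np.array(symbol_bits) @ G % 2

# An outer RS-codeword over GF(2^4), here simply given as 4-bit symbols.
outer = [(0, 0, 1, 1), (1, 0, 1, 0), (1, 1, 1, 1)]
cascade_word = np.concatenate([inner_encode(s) for s in outer])
print(len(cascade_word))   # 21 = n_1 * (number of outer symbols)
</pre>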
  
Codes on sets $B_q^n$ with metrics differing from the Hamming metric have been fairly widely investigated. The approaches and results of these investigations are, by and large, similar to the corresponding results for the Hamming metric. In particular, codes have been considered in metrics connected with synchronization errors, with errors in the arithmetic apparatus of computers (arithmetic codes), with blocks of errors, with asymmetric errors, and with errors in a continuous communication channel. For example, in the latter case the set $L^n$ is the unit sphere $S^{n-1}$ in the Euclidean space $\mathbf{R}^n$ with centre at the origin, while a "neighbourhood" $U_\phi(x)$, $x \in S^{n-1}$, is the surface cut out from $S^{n-1}$ by the circular cone with semi-angle $\phi$ and axis passing through the points $0$ and $x$. It must also be pointed out that codes in $\mathbf{R}^n$, in a somewhat different interpretation, have been considered in geometry since the end of the 19th century.
  
 
The development of the theory of error-correcting codes was stimulated by the work of C.E. Shannon on information theory, in which he showed that it is possible, in principle, to transmit information over a communication channel with noise at any speed less than the channel capacity with arbitrarily small error. Initially the theory of error-correcting codes satisfied the requirements of communications engineers, in that the mathematical constructions guaranteed reliable transmission of information under certain restrictions on the number and form of the errors in the information. Subsequently the results and methods of the theory of error-correcting codes found application in other fields. In particular, in mathematics, the best estimates (up to 1978) have been obtained for the density of sphere packings in Euclidean space; significant progress has been made in estimating the complexity of typical disjunctive normal forms for almost-all Boolean functions; some new objects in combinatorics have been constructed; self-correcting circuits of functional elements have been constructed; etc.

====References====

<table><TR><TD valign="top">[1]</TD> <TD valign="top"> W.W. Peterson, E.J. Weldon, "Error-correcting codes" , M.I.T. (1972) {{MR|0347444}} {{ZBL|0251.94007}} </TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> E. Berlekamp, "Algebraic coding theory" , McGraw-Hill (1968) {{MR|0238597}} {{ZBL|0988.94521}} </TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> F.J. MacWilliams, N.J.A. Sloane, "The theory of error correcting codes" , '''I-II''' , North-Holland (1977)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> E.L. Blokh, V.V. Zyablov, "Generalized cascade codes" , Moscow (1976) (In Russian) {{MR|0607525}} {{ZBL|}} </TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top"> V.D. Kolesnik, E.T. Mironchikov, "The decoding of cyclic codes" , Moscow (1968) (In Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top"> V.M. Sidel'nikov, "Extremal polynomials used in bounds of code volume" ''Probl. Inform. Transmission'' , '''16''' : 3 (1980) pp. 174–185 ''Probl. Peredach. Inform.'' , '''16''' : 3 (1980) pp. 17–30 {{MR|}} {{ZBL|0468.94010}} </TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top"> V.I. Levenshtein, "Minimum redundancy of error-correcting codes" ''Probl. Inform. Transmission'' , '''10''' : 2 (1974) pp. 110–123 ''Probl. Peredach. Inform.'' , '''10''' : 2 (1974) pp. 26–42</TD></TR><TR><TD valign="top">[8]</TD> <TD valign="top"> V.V. Zyablov, M.S. Pinsker, "Estimation of the error correcting complexity for Gallager low-density codes" ''Probl. Inform. Transmission'' , '''11''' : 1 (1975) pp. 18–28 ''Probl. Peredach. Inform.'' , '''11''' : 1 (1975) pp. 23–36</TD></TR></table>
 
 
  
 
====Comments====
 
====Comments====
The linear space dual to a linear code <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e036/e036230/e036230204.png" /> is, of course, the space of all vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e036/e036230/e036230205.png" /> such that the scalar product <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e036/e036230/e036230206.png" /> for all <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e036/e036230/e036230207.png" />. This dual space is denoted by <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e036/e036230/e036230208.png" />.
+
The linear space dual to a linear code $  K $
 +
is, of course, the space of all vectors $  y $
 +
such that the scalar product $  ( x , y ) = 0 $
 +
for all $  x \in K $.  
 +
This dual space is denoted by $  K  ^  \perp  $.
  
The best upper bound known for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e036/e036230/e036230209.png" /> up till now (1987) is due to R.J. McEliece, E.R. Rodemich, H. Rumsey, and L.R. Welch (cf. [[#References|[a3]]]).
+
The best upper bound known for $  R ( d) $
 +
up till now (1987) is due to R.J. McEliece, E.R. Rodemich, H. Rumsey, and L.R. Welch (cf. [[#References|[a3]]]).
  
 
See also [[Code|Code]]; [[Code with correction of arithmetical errors|Code with correction of arithmetical errors]]; [[Code with correction of deletions and insertions|Code with correction of deletions and insertions]]; [[Coding, alphabetical|Coding, alphabetical]]; [[Coding and decoding|Coding and decoding]].
 
See also [[Code|Code]]; [[Code with correction of arithmetical errors|Code with correction of arithmetical errors]]; [[Code with correction of deletions and insertions|Code with correction of deletions and insertions]]; [[Coding, alphabetical|Coding, alphabetical]]; [[Coding and decoding|Coding and decoding]].
Line 65: Line 299:
 
As already indicated in the main article, coding theory is intimately related to other branches of mathematics, mainly the [[Geometry of numbers|geometry of numbers]] (cf. also [[#References|[a5]]]) and the theory of finite fields (cf. [[Finite field|Finite field]]).
 
As already indicated in the main article, coding theory is intimately related to other branches of mathematics, mainly the [[Geometry of numbers|geometry of numbers]] (cf. also [[#References|[a5]]]) and the theory of finite fields (cf. [[Finite field|Finite field]]).
  
In 1982 M.A. Tsfasman, S.G. Vlăduts and T. Zink, using ideas of V.D. Goppa and algebraic geometry, constructed a sequence of codes that exceed the Gilbert–Varshamov bound [[#References|[a4]]], thus also proving that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e036/e036230/e036230210.png" />, cf. (*), does not hold. Cf. [[Goppa code|Goppa code]] for more detail.
+
In 1982 M.A. Tsfasman, S.G. Vlăduts and T. Zink, using ideas of V.D. Goppa and algebraic geometry, constructed a sequence of codes that exceed the Gilbert–Varshamov bound [[#References|[a4]]], thus also proving that $  R ( \delta ) = 1 - H ( \delta ) $,  
 +
cf. (*), does not hold. Cf. [[Goppa code|Goppa code]] for more detail.
  
 
====References====
 
====References====

Revision as of 19:37, 5 June 2020


code that corrects errors

A set of messages destined for transmission over a communication channel with noise with the property that the "neighbourhood" of each message (that is, the set of most likely distorted versions of this message) does not intersect with the neighbourhoods of other messages. This property of an error-correcting code enables one to correct the errors (that is, to recover the transmitted message) in those distorted messages (received at the output of the channel) that belong to the neighbourhood of the message. Elements of an error-correcting code (codewords) are employed in the encoding of sequences of information symbols being presented by the source of information (cf. Information, source of). The encoding consists in the representation of the information sequence in a special form and in the introduction of additional information (redundancy) into it. This redundancy is usually introduced by appending to the message extra symbols, by some means or other. For example, the sequence of symbols may be divided into blocks of a fixed length $ k $, and, independently of one another, the blocks are replaced by different blocks of greater length $ n $, which are elements of a so-called error-correcting block code. Other methods [1] are known for the introduction of redundancy and the error-correcting codes related to them.

The codewords of an error-correcting block code are taken from a certain set of $ n $-dimensional vectors $ L ^ {n} $ endowed with a metric $ \lambda $, and the neighbourhood of a codeword is a ball with centre at the codeword. The radius of this ball defines the correcting capacity of the block code. The metric $ \lambda $ depends on the nature of the errors that the code is intended to correct. The subsequent account deals only with block codes, these being the most prevalent.

In order that it be possible to transmit the maximum amount of information over the channel, it is necessary, for a prescribed correcting capacity, to use codes with the maximum number of elements (codewords). The construction of such codes is one of the basic problems in the theory of error-correcting codes. Sufficient progress has been made on this problem only for certain finite sets $ L ^ {n} $, which are discussed below. Meanwhile, codes on certain infinite spaces, for example, the sphere in Euclidean space $ \mathbf R ^ {n} $, are of interest both in theory and in applications.

In the practical use of error-correcting codes there arise problems of mapping the information to be transmitted into the set of elements of the error-correcting code, and of the determination of the transmitted element $ x $ of the code from the received element $ x ^ \prime $. The first problem is called the problem of encoding, the second the problem of decoding. The complexity of encoding and decoding is determined to a large extent by the properties of the error-correcting code being used. As a result, this leads to the study of a relatively narrow class of codes such as, for example, the binary linear codes considered below.

The most widely investigated such codes are the $ q $-ary block codes in the Hamming metric. This is because they have found numerous applications, while the methods for constructing them are related to well-known mathematical structures. The words of these codes are certain elements of the set $ B _ {q} ^ {n} $ consisting of all vectors of length $ n $ with coordinates belonging to a set of $ q $ elements. By the Hamming distance $ d ( x , y ) $ between two vectors $ x , y $ in $ B _ {q} ^ {n} $ one means the number of positions $ i $ such that $ x _ {i} \neq y _ {i} $. The neighbourhood $ U _ {t} ( x) $ of radius $ t $ (where $ t $ is an integer) of the vector $ x $ consists of all vectors in $ B _ {q} ^ {n} $ differing from $ x $ in at most $ t $ coordinates, that is, $ U _ {t} ( x) $ is the ball in the metric $ d $ of radius $ t $ with centre at $ x $. In particular, $ U _ {1} ( x) $ consists of $ ( q - 1 ) n + 1 $ vectors. For an arbitrary set $ K \subset B _ {q} ^ {n} $ the function $ d ( K) = \min _ {x , y \in K , x \neq y } d ( x , y ) $ is called the minimum distance of the $ q $-ary code $ K $. A code $ K $ is a $ t $-error-correcting code if $ d ( K) \geq 2 t + 1 $. When the latter inequality holds, each neighbourhood $ U _ {t} ( x) $, $ x \in K $, is disjoint from $ U _ {t} ( y) $ for every other vector $ y $ in $ K $.
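
To make these definitions concrete, the following sketch (in Python; the four-word binary code $ K $ is a hypothetical example, not taken from the article) computes the Hamming distance and the minimum distance $ d ( K) $ of a small code.

<pre>
# Hamming distance and minimum distance d(K), following the definitions
# above; the four-word binary code K is a hypothetical example.
from itertools import combinations

def hamming_distance(x, y):
    # number of positions i with x_i != y_i
    return sum(1 for a, b in zip(x, y) if a != b)

def minimum_distance(K):
    # d(K) = min over distinct x, y in K of d(x, y)
    return min(hamming_distance(x, y) for x, y in combinations(K, 2))

K = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
d = minimum_distance(K)      # d = 3 here, so d(K) >= 2t + 1 with t = 1
print(d, (d - 1) // 2)       # minimum distance and correcting capacity t
</pre>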

Significant progress in the study of $ q $-ary codes has been made in the case where $ q $ is a power of a prime number. In this case the set of coordinates is taken to be the finite field $ \mathop{\rm GF} ( q) $ of $ q $ elements, and the algebraic properties of this field are used. In what follows it is supposed that the elements of $ \mathop{\rm GF} ( q) $ are the coordinates of the elements of the set $ B _ {q} ^ {n} $. The set $ B _ {q} ^ {n} $ is a linear space over $ \mathop{\rm GF} ( q) $. If the vectors of the code $ K $ form a linear subspace of $ B _ {q} ^ {n} $, then the code is said to be linear. A linear code $ K $ can be specified either by a basis of it or by a basis of the linear space dual to $ K $. The latter method is the most prevalent one. A matrix $ A $ whose rows form a basis for the vector space dual to $ K $ is called a parity-check matrix of $ K $. For all $ x \in K $, $ x A ^ {T} = 0 $, where $ A ^ {T} $ is the transpose of $ A $. In what follows, $ n $ denotes the code length, $ k $ the dimension of the linear code and $ d $ the minimum distance. A code in $ B _ {2} ^ {n} $ is called a binary code.
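
As an illustration, the following Python sketch tests membership in a binary linear code via the relation $ x A ^ {T} = 0 $; the particular matrix used is the standard parity-check matrix of the binary Hamming code of length 7, an assumed choice, since the article fixes no particular basis.

<pre>
# Membership test for a binary linear code: x is a codeword iff
# x A^T = 0 over GF(2).  The matrix A below is the standard parity-check
# matrix of the binary Hamming code of length 7 (an assumed choice; the
# article does not fix a particular basis).
import numpy as np

A = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # column j is the binary form of j+1

def in_code(x):
    return not np.any(A.dot(x) % 2)     # zero syndrome means x is in K

print(in_code(np.array([1, 1, 0, 0, 1, 1, 0])))   # True: a codeword
print(in_code(np.array([1, 1, 0, 0, 1, 1, 1])))   # False: one bit flipped
</pre>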

In order to estimate the quality of specific codes one studies the behaviour of the function $ A ( n , d ) $, the maximum number of vectors of a code of length $ n $ with minimum distance $ d $. The function $ A ( n , d ) $ has been comparatively well studied for large $ d $, $ 2 d \geq n $, and for small $ d $, $ d = \textrm{ const } $, $ n \rightarrow \infty $. In the first case $ A ( n , d ) $ is at most $ 2 n $ when $ 2 d = n $, and $ 2 d / ( 2 d - n ) $ when $ 2 d - n > 0 $; in the second case it has order $ n ^ {-t} 2 ^ {n} $ when $ d = 2 t + 1 $. If $ d = 3 $, $ n = 2 ^ {l} - 1 $, then $ A ( n , 3 ) = 2 ^ {n} / ( n + 1 ) $.

A linear code for which this last equality is attained is called a binary Hamming code. A binary Hamming code (which corrects one error) has the following property: the balls of radius $ t = 1 $ with centres at the points of the code are disjoint but at the same time fill out the whole of $ B _ {2} ^ {n} $. Such codes are called perfect. It is known that, apart from the Hamming codes and codes with the same parameters, there is only one non-trivial binary perfect code (the binary Golay code of length 23, which corrects three errors).
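
The perfection of the Hamming code of length $ n = 7 $ can be checked numerically. The following sketch (using the same assumed parity-check matrix as above) enumerates the $ 2 ^ {4} = 16 $ codewords and verifies that their balls of radius 1, each containing $ ( q - 1 ) n + 1 = 8 $ vectors, partition $ B _ {2} ^ {7} $.

<pre>
# Check that the Hamming code of length 7 is perfect: its 16 codewords
# have pairwise disjoint balls of radius 1 (8 vectors each) whose union
# is all of B_2^7, since 16 * 8 = 128 = 2^7.  The parity-check matrix is
# the same assumed one as in the previous sketch.
from itertools import product
import numpy as np

A = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

codewords = [x for x in product((0, 1), repeat=7)
             if not np.any(A.dot(np.array(x)) % 2)]

covered = set()
for c in codewords:
    ball = {c} | {tuple(b ^ (i == j) for j, b in enumerate(c))
                  for i in range(7)}    # c and its 7 one-bit changes
    assert covered.isdisjoint(ball)     # balls of radius 1 do not meet
    covered |= ball

print(len(codewords), len(covered))     # 16 codewords covering 128 vectors
</pre>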

In the case $ n \rightarrow \infty $, $ d / n \rightarrow \delta $, $ 0 < \delta < 1 / 2 $, the function

$$ \overline{\lim\limits _ {n \rightarrow \infty }}\; \frac{1}{n} \mathop{\rm log} _ {2} A ( n , d ) = R ( \delta ) $$

(the logarithmic asymptotic of $ A ( n , d ) $) is studied. It is called the information rate for a maximal code with relative distance $ \delta $. Substantially distinct upper and lower bounds are known for $ R ( \delta ) $. The lower bound (the Varshamov–Gilbert bound) has the form

$$ \tag{* } R ( \delta ) \geq 1 - H ( \delta ) , $$

where

$$ H ( \delta ) = \delta \mathop{\rm log} _ {2} \frac{1}{\delta} + ( 1 - \delta ) \mathop{\rm log} _ {2} \frac{1}{1 - \delta } , $$

and guarantees the existence of codes with the above parameters. The proof of (*) is not constructive; for other bounds see [6], [7].
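
As a numerical illustration of (*), the following Python sketch evaluates the Varshamov–Gilbert lower bound $ 1 - H ( \delta ) $ at a few relative distances.

<pre>
# The Varshamov-Gilbert bound (*): R(delta) >= 1 - H(delta), evaluated
# at a few relative distances delta.
from math import log2

def H(p):
    # binary entropy: p log2(1/p) + (1 - p) log2(1/(1 - p))
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

for delta in (0.05, 0.1, 0.25, 0.4):
    print(delta, 1 - H(delta))
</pre>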

Next, constructive methods (that is, methods that require a relatively small number of operations for their realization) for making error-correcting codes will be considered. The most important constructive codes are Reed–Solomon codes (RS-codes), Bose–Chaudhuri–Hocquenghem codes (BCH-codes) and Reed–Muller codes (RM-codes). All these codes are linear. The starting point for the construction of the first two is the matrix $ A _ {r} $ with elements in $ \mathop{\rm GF} ( q) $:

$$ A _ {r} = \left \| \begin{array}{ccccc} 1 &\dots &\alpha ^ {j} &\dots &\alpha ^ {q-2} \\ 1 &\dots &\alpha ^ {2j} &\dots &\alpha ^ {2(q-2)} \\ \dots &\dots &\dots &\dots &\dots \\ 1 &\dots &\alpha ^ {(r-1) j} &\dots &\alpha ^ {(r-1)(q-2)} \\ \end{array} \right \| , $$

where $ \alpha $ is a primitive root of $ \mathop{\rm GF} ( q) $. The matrix $ A _ {r} $ is the parity-check matrix of the $ \mathop{\rm RS} $-code $ C _ {r} $ over $ \mathop{\rm GF} ( q) $ with the following parameters: $ n = q - 1 $, $ k = q - r $, $ d = r $. The distance of $ C _ {r} $ is maximal among all linear codes of length $ q - 1 $ and dimension $ q - r $. The binary BCH-code $ H _ {r} $ consists of all vectors of the RS-code $ C _ {r} $ over $ \mathop{\rm GF} ( 2 ^ {l} ) $ that belong to $ B _ {2} ^ {n} $, that is, $ H _ {r} = C _ {r} \cap B _ {2} ^ {n} $. The BCH-code $ H _ {r} $ with $ r = 2 t + 1 $ has the following parameters: $ n = 2 ^ {l} - 1 $, $ k \geq n - l t $, $ d \geq 2 t + 1 $. The Hamming code, mentioned earlier, is the same as the BCH-code $ H _ {3} $. If $ t < 2 ^ {[ ( l + 1 ) / 2 ] } $ (here $ [ x] $ denotes the integer part of $ x $), then the dimension of $ H _ {2t+1} $ is equal to $ n - l t $. A BCH-code is cyclic, that is, if a vector $ x $ belongs to it, then so do all its cyclic shifts. As $ n \rightarrow \infty $, $ r = \textrm{ const } $, the number of vectors of a BCH-code has the same order as the cardinality of the best code, that is, as $ A ( n , r ) $.
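
The construction of $ A _ {r} $ can be illustrated in Python in the simplified case where $ q $ is itself prime, so that $ \mathop{\rm GF} ( q) $ is just arithmetic modulo $ q $; the general prime-power case would require polynomial arithmetic and is not attempted here.

<pre>
# Parity-check matrix A_r of an RS-code in the simplified case of a prime
# field GF(p), where field arithmetic is arithmetic modulo p.  Rows are
# alpha^{ij} for i = 1, ..., r - 1 and j = 0, ..., q - 2, so the code has
# n = q - 1, k = q - r and d = r.
def rs_parity_check(p, alpha, r):
    n = p - 1
    return [[pow(alpha, i * j, p) for j in range(n)]
            for i in range(1, r)]

# GF(7) with primitive root alpha = 3: a code with n = 6, k = 3, d = 4.
for row in rs_parity_check(7, 3, 4):
    print(row)
</pre>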

The $ r $-th order binary RM-code is defined as the set of binary vectors of the form $ ( f ( \alpha _ {1} ) \dots f ( \alpha _ {n} ) ) $, where $ n = 2 ^ {l} $, $ \alpha _ {1} \dots \alpha _ {n} $ are all possible binary vectors of length $ l $ and $ f $ runs through the set of all functions of the algebra of logic that are represented by a polynomial over $ \mathop{\rm GF} ( 2) $ in $ l $ binary variables and of degree not exceeding $ r $. An RM-code has the following parameters: $ n = 2 ^ {l} $, $ k = \sum _ {j=0} ^ {r} C _ {l} ^ {j} $, $ d = 2 ^ {l-r} $.
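
As an illustration of this definition, the following sketch builds the first-order RM-code with $ l = 3 $ directly, by evaluating all polynomials of degree at most $ r = 1 $ at the $ 2 ^ {l} $ binary points, and checks the parameters $ 2 ^ {k} = 16 $ and $ d = 2 ^ {l-r} = 4 $.

<pre>
# The first-order RM-code with l = 3, built directly from the definition:
# evaluate every polynomial f over GF(2) of degree <= 1 in 3 variables at
# all 2^3 binary points.
from itertools import product, combinations

l = 3
points = list(product((0, 1), repeat=l))       # alpha_1, ..., alpha_n

words = set()
for coeffs in product((0, 1), repeat=l + 1):   # f = a_0 + a_1 x_1 + ... + a_l x_l
    a0, lin = coeffs[0], coeffs[1:]
    words.add(tuple((a0 + sum(a * x for a, x in zip(lin, pt))) % 2
                    for pt in points))

d = min(sum(u != v for u, v in zip(w1, w2))
        for w1, w2 in combinations(words, 2))
print(len(words), d)                           # 16 codewords, d = 4 = 2^{l-r}
</pre>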

The information rate $ R ( K) $ of a binary code $ K $ of length $ n $ with $ m $ vectors is defined as

$$ R ( K) = \frac{ \mathop{\rm log} _ {2} m }{n} . $$

If $ K $ is a linear code of dimension $ k $, then $ R ( K) = k / n $. The information rates of the constructive codes listed above tend to zero as $ n \rightarrow \infty $, $ d / n \rightarrow \delta $, $ \delta > 0 $. Constructive codes are known whose information rate stays positive as $ n \rightarrow \infty $, $ d / n \rightarrow \delta $, $ 1 / 2 > \delta > 0 $, but it is less than the information rate of the codes whose existence is established by the bound (*).

In the practical application of a $ t $-error-correcting code for the correction of errors on a communication channel, a device (a decoder) is required that determines the transmitted codeword from the distorted word $ x ^ \prime \in U _ {t} ( x) $. For this it is preferable to use error-correcting codes for which the complexity of the decoder is not too large. By the complexity of a decoder of a binary code $ K $ with distance $ 2 t + 1 $ one means, for example, the minimum number of functional elements necessary for the realization of the Boolean operator $ F ( x ^ \prime ) = x $, $ x ^ \prime \in U _ {t} ( x) $, $ x \in K $. The constructive codes considered above have decoders of small complexity. Moreover, other error-correcting codes are known with information rate not tending to 0 as $ n \rightarrow \infty $, $ d / n \rightarrow \delta $, $ \delta > 0 $, and with a decoder of small complexity. Examples of such codes are cascade codes and codes with low-density parity checks. A cascade code consists, in the simplest case, of the iteration of an RS-code over the field $ \mathop{\rm GF} ( 2 ^ {l} ) $ and a binary code of length $ n _ {1} $ and dimension $ l $ with distance $ d _ {1} $. A one-to-one correspondence is established by means of some linear mapping between the elements of $ \mathop{\rm GF} ( 2 ^ {l} ) $ and the vectors of the binary code. The elements of the RS-code are then replaced by the corresponding vectors of the binary code. As a result, a binary linear cascade code is obtained with parameters $ n = n _ {1} ( 2 ^ {l} - 1 ) $, $ k = l ( 2 ^ {l} - r ) $, $ d \geq r d _ {1} $. The best results are obtained by the use of distinct binary codes for the replacement of distinct symbols of the RS-code. By this means, codes of length $ n $ can be obtained that correct a fixed fraction of $ n $ errors using a decoder with complexity of order $ n \mathop{\rm log} n $. An ensemble of binary codes with low-density parity checks is defined by the ensemble of parity-check matrices $ \{ A \} $ consisting of binary matrices of a certain type, of dimension $ n \times r $, that contain $ l $ and $ h $ units respectively in each column and row, $ 4 < l < h $. For fixed $ l $ and $ h $ and $ n \rightarrow \infty $, a typical code of the ensemble of length $ n $ can correct a fixed fraction of $ n $ errors by means of a decoder with complexity of order $ n \mathop{\rm log} n $. The information rate both of cascade codes and of codes with low-density parity checks lies below the bound (*).
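
The following Python sketch shows the idea behind one simple decoder of this kind: a bit-flipping decoder driven by a parity-check matrix. The tiny matrix used is purely illustrative and does not satisfy the weight condition $ 4 < l < h $ of the ensemble just described.

<pre>
# Sketch of a bit-flipping decoder driven by a parity-check matrix H, the
# idea behind low-complexity decoding of codes with low-density parity
# checks.  The tiny matrix H is purely illustrative: it does not satisfy
# the column/row-weight condition 4 < l < h of the ensemble above.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(y, H, max_iter=20):
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(y) % 2
        if not syndrome.any():
            return y                    # all parity checks satisfied
        unsatisfied = syndrome.dot(H)   # per-bit count of failed checks
        y[np.argmax(unsatisfied)] ^= 1  # flip the most suspect bit
    return y

received = np.array([0, 0, 1, 0, 0, 0])   # zero codeword with one error
print(bit_flip_decode(received, H))       # recovers the zero codeword
</pre>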

Codes on sets $ B _ {q} ^ {n} $ with metrics differing from the Hamming metric have been fairly widely investigated. The approaches and results of these investigations are, by and large, similar to the corresponding results for the Hamming metric. In particular, codes have been considered in metrics connected with synchronization errors, with errors in the arithmetic apparatus of computers (arithmetic codes), with bursts of errors, with asymmetric errors, and with errors in a continuous communication channel. For example, in the latter case the set $ L ^ {n} $ is the unit sphere $ S ^ {n-1} $ in the Euclidean space $ \mathbf R ^ {n} $ with centre at the origin, while a "neighbourhood" $ U _ \phi ( x) $, $ x \in S ^ {n-1} $, is the surface cut out from $ S ^ {n-1} $ by the circular cone with semi-angle $ \phi $ and axis passing through the points $ 0 $ and $ x $. It must also be pointed out that codes in $ \mathbf R ^ {n} $, in a somewhat different interpretation, have been considered in geometry since the end of the 19th century.

The development of the theory of error-correcting codes was stimulated by the work of C.E. Shannon on information theory, in which he showed that it is possible, in principle, to transmit information over a communication channel with noise at any rate below the channel capacity and with arbitrarily small probability of error. Initially the theory of error-correcting codes satisfied the requirements of communications engineers in that the mathematical constructions guaranteed reliable transmission of information under certain restrictions on the number and form of the errors. Subsequently the results and methods of the theory of error-correcting codes found application in other fields. In particular, in mathematics, the best estimates (up to 1978) have been obtained for the density of sphere packings in Euclidean space; significant progress has been made in estimating the complexity of disjunctive normal forms for almost-all Boolean functions; new combinatorial objects have been constructed; self-correcting circuits of functional elements have been constructed; etc.

References

[1] W.W. Peterson, E.J. Weldon, "Error-correcting codes" , M.I.T. (1972) MR0347444 Zbl 0251.94007
[2] E. Berlekamp, "Algebraic coding theory" , McGraw-Hill (1968) MR0238597 Zbl 0988.94521
[3] F.J. MacWilliams, N.J.A. Sloane, "The theory of error correcting codes" , I-II , North-Holland (1977)
[4] E.L. Blokh, V.V. Zyablov, "Generalized cascade codes" , Moscow (1976) (In Russian) MR0607525
[5] V.D. Kolesnik, E.T. Mironchikov, "The decoding of cyclic codes" , Moscow (1968) (In Russian)
[6] V.M. Sidel'nikov, "Extremal polynomials used in bounds of code volume" Probl. Inform. Transmission , 16 : 3 (1980) pp. 174–185 Probl. Peredach. Inform. , 16 : 3 (1980) pp. 17–30 Zbl 0468.94010
[7] V.I. Levenshtein, "Minimum redundancy of error-correcting codes" Probl. Inform. Transmission , 10 : 2 (1974) pp. 110–123 Probl. Peredach. Inform. , 10 : 2 (1974) pp. 26–42
[8] V.V. Zyablov, M.S. Pinsker, "Estimation of the error correcting complexity for Gallager low-density codes" Probl. Inform. Transmission , 11 : 1 (1975) pp. 18–28 Probl. Peredach. Inform. , 11 : 1 (1975) pp. 23–36

Comments

The linear space dual to a linear code $ K $ is, of course, the space of all vectors $ y $ such that the scalar product $ ( x , y ) = 0 $ for all $ x \in K $. This dual space is denoted by $ K ^ \perp $.
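
For a small example, the dual of a binary code can be computed by brute force; the code $ K $ below is a hypothetical example of length 4 and dimension 2 (which happens to be self-dual, so that $ K ^ \perp = K $).

<pre>
# Brute-force computation of the dual space: K^perp consists of all y
# with (x, y) = 0 over GF(2) for every x in K.  The code K below is a
# hypothetical example of length 4 and dimension 2 (it happens to be
# self-dual, so K^perp = K).
from itertools import product

K = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 1)]

K_perp = [y for y in product((0, 1), repeat=4)
          if all(sum(a * b for a, b in zip(x, y)) % 2 == 0 for x in K)]

print(K_perp)    # 2^(4-2) = 4 vectors
</pre>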

The best upper bound known for $ R ( \delta ) $ to date (1987) is due to R.J. McEliece, E.R. Rodemich, H. Rumsey, and L.R. Welch (cf. [a3]).

See also Code; Code with correction of arithmetical errors; Code with correction of deletions and insertions; Coding, alphabetical; Coding and decoding.

As already indicated in the main article, coding theory is intimately related to other branches of mathematics, mainly the geometry of numbers (cf. also [a5]) and the theory of finite fields (cf. Finite field).

In 1982 M.A. Tsfasman, S.G. Vlăduts and T. Zink, using ideas of V.D. Goppa and algebraic geometry, constructed a sequence of codes that exceed the Gilbert–Varshamov bound [a4], thus also proving that the conjectured equality $ R ( \delta ) = 1 - H ( \delta ) $ (cf. (*)) does not hold. Cf. Goppa code for more detail.

References

[a1] J.H. van Lint, "Introduction to coding theory" , Springer (1982) MR1540511 Zbl 0485.94015
[a2] A. Tietäväinen, "On the existence of perfect codes over finite fields" SIAM J. Appl. Math. , 24 (1973) pp. 88–96
[a3] R.J. McEliece, E.R. Rodemich, H. Rumsey, L.R. Welch, "New upper bounds on the rate of a code via the Delsarte–MacWilliams inequalities" IEEE Trans. Inform. Theory , 23 (1977) pp. 157–166 MR439403
[a4] M.A. Tsfasman, S.G. Vlăduts, T. Zink, "Modular curves, Shimura curves and Goppa codes, better than Varshamov–Gilbert bound" Math. Nachr. , 109 (1982) pp. 21–28
[a5] P.M. Gruber, C.G. Lekkerkerker, "Geometry of numbers" , North-Holland (1987) (Updated reprint) MR0893813 Zbl 0611.10017
[a6] R. Hill, "A first course in coding theory" , Clarendon Press (1986) MR0853914 Zbl 0616.94006
[a7] J.H. van Lint, G. van der Geer, "Introduction to coding theory and algebraic geometry" , Birkhäuser (1988) Zbl 0648.94011 Zbl 0639.00048
[a8] V.D. Goppa, "Geometry and codes" , Kluwer (1988) MR1029027 Zbl 1097.14502
[a9] M.A. Tsfasman, S.G. Vlăduts, "Algebraic geometric codes" , Kluwer (1989) MR1003354 Zbl 0674.94012
How to Cite This Entry:
Error-correcting code. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Error-correcting_code&oldid=34183
This article was adapted from an original article by V.M. Sidel'nikov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article