Coding, alphabetical
(also known as non-uniform coding)
The representation of information in a standard form in which coded combinations of symbols from a given alphabet are sequentially associated with the elementary syntactic units of the language of the messages (the letters of the alphabet of that language). (Here, by information one means a linear string of letters.) An example of an alphabetical coding is the celebrated Morse code, in which each letter of the language is assigned a word over the three-symbol alphabet $ \{ \cdot , - , \wedge \} $ (where $ \wedge $ denotes a blank), and words are encoded letter by letter. An example of a naturally arising coding which does not fit the model of alphabetical coding is the decimal number system.

The sources of ideas for the development of alphabetical coding are, on the one hand, the cybernetic point of view on information processes and systems and, on the other, the requirements of communications engineering, where the need frequently arises to transform information into a form that is more convenient in some respect; the starting point was the fundamental 1948 paper of C. Shannon [1]. The theory of alphabetical coding is a branch of applied mathematics devoted to the study of various mathematical models of communication channels by a variety of mathematical methods. The best studied codes are the uniform, or block, codes, in which all coded combinations have the same fixed length; for this class there is a well-developed mathematical apparatus (see Error-correcting code). The most general abstract scheme is coding by automata, in which the correspondence is realized by a finite automaton. In practice, uniform coding gives satisfactory results as a rule; other coding methods are therefore worthwhile only when they possess considerable merit (for example, compression of the information) and their deficiencies (more complicated encoding and decoding) are not too serious.
Fundamental results of a general character can be stated only for the case of letter-by-letter alphabetical coding (when the coding automaton has only one internal state): the known generalizations are not of fundamental significance, and investigations of other models have not yet reached any substantial theoretical development. The following model of a communication channel is considered:
Source of information $ \rightarrow $ coding scheme $ \rightarrow \dots \rightarrow $ decoding scheme $ \rightarrow $ receiver
Here $ A = \{ a _ {1} \dots a _ {n} \} $ is the alphabet of the communication channel, that is, the list of signals that can be transmitted along the channel, $ \tau ( a _ {i} ) $ is the duration of the signal $ a _ {i} $, and $ B = \{ b _ {1} \dots b _ {m} \} $ is the alphabet of the language of the messages. In the simplest case, the source of information is a probabilistic scheme at whose output one of the letters of $ B $ appears at each moment of discrete time, the probabilities $ p _ {i} = p ( b _ {i} ) $ of appearance of the letters being independent of time. A coding scheme $ f $ maps the letters of $ B $ into words of $ A ^ {*} $ (the monoid generated by $ A $), $ f ( b _ {i} ) = v _ {i} $, and the words of $ B ^ {*} $ are encoded letter by letter: $ f ( b _ {i _ {1} } \dots b _ {i _ {k} } ) = f ( b _ {i _ {1} } ) \dots f ( b _ {i _ {k} } ) $. Thus, $ f $ is completely determined by the code $ V = \{ v _ {1} \dots v _ {m} \} $. If $ f $ is one-to-one, then the task of the decoding scheme is to recover the transmitted message by realizing the mapping $ f ^ {-1} $.
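As a minimal illustration, the following Python sketch carries out letter-by-letter encoding; the toy code $ V $ is a hypothetical assumption, not the actual Morse code.

```python
# A minimal sketch of letter-by-letter (alphabetical) encoding.
# The code V is a hypothetical toy example over the Morse-like
# alphabet {'.', '-', '^'}; it is not the actual Morse code.
V = {"e": ".", "t": "-", "a": ".-^"}  # f(b_i) = v_i

def encode(message: str) -> str:
    # f(b_{i_1} ... b_{i_k}) = f(b_{i_1}) ... f(b_{i_k})
    return "".join(V[b] for b in message)

print(encode("tea"))  # '-..-^'
```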
The rate of information transmission (cf. Information, transmission rate of) and the complexity of decoding, both determined by the choice of the code $ V $, are the factors to be estimated in this model. The transmission rate is characterized quantitatively by the mathematical expectation of the time required to transmit a single letter of the message: the duration of the transmission of a letter $ b _ {i} $ is
$$ \tau ( b _ {i} ) = \ \sum _ {j = 1 } ^ { n } \omega _ {ij} \tau ( a _ {j} ), $$
where $ \omega _ {ij} $ is the number of occurrences of the letter $ a _ {j} $ in the word $ v _ {i} $, and the required mathematical expectation is
$$ {\mathsf E} _ {f} = \ \sum _ {i = 1 } ^ { m } p _ {i} \tau ( b _ {i} ) $$
( $ {\mathsf E} _ {f} $ is also called the cost of the coding $ f $). In the general case $ {\mathsf E} _ {f} $ depends on the structure of the code, which is described by a structure polynomial
$$ S _ {f} ( x _ {1} \dots x _ {n} ) = \ \sum _ {i = 1 } ^ { m } \left ( \prod _ {j = 1 } ^ { n } x _ {j} ^ {\omega _ {ij} } \right ) $$
— the generating function that enumerates the words of the code by their composition. In the particular case $ \tau ( a _ {1} ) = \dots = \tau ( a _ {n} ) = \tau $, one has $ {\mathsf E} _ {f} = \tau \sum _ {i = 1 } ^ {m} p _ {i} l _ {i} $, i.e. $ {\mathsf E} _ {f} $ is determined only by the spectrum of lengths $ \{ l _ {1} \dots l _ {m} \} $ of the words of the code, where $ l _ {i} $ is the length of the word $ v _ {i} $. In this case, the problem of selecting an optimal (that is, cost-minimizing) code has been completely solved (see [3]); moreover, it has been proved [2] that a necessary and sufficient condition on the spectrum for the existence of a uniquely decodable code is
$$ \tag{1 } \sum _ {i = 1 } ^ { m } n ^ {- l _ {i} } \leq 1. $$
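To make these quantities concrete, the following Python sketch computes the cost $ {\mathsf E} _ {f} $ and the left-hand side of (1); the durations, probabilities and code words are illustrative assumptions, not data from the article.

```python
# A minimal sketch, assuming an illustrative three-signal channel alphabet
# with durations tau, a hypothetical code V and letter probabilities p.
tau = {".": 1.0, "-": 3.0, "^": 1.0}   # tau(a_j): duration of each signal
V = {"e": ".", "t": "-", "a": ".-^"}   # f(b_i) = v_i
p = {"e": 0.5, "t": 0.3, "a": 0.2}     # p_i = p(b_i)

def letter_duration(v):
    # tau(b_i) = sum_j omega_ij tau(a_j): duration of transmitting b_i
    return sum(tau[a] for a in v)

# E_f = sum_i p_i tau(b_i), the cost of the coding f
cost = sum(p[b] * letter_duration(v) for b, v in V.items())
print(cost)  # 0.5*1.0 + 0.3*3.0 + 0.2*5.0 = 2.4

# Left-hand side of (1) for the length spectrum {1, 1, 3}, with n = 3
kraft = sum(3 ** -len(v) for v in V.values())
print(kraft <= 1)  # True: 1/3 + 1/3 + 1/27 <= 1
```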
It has been shown in [4] that, together with (1), the condition
$$ \mathop{\rm gcd} \ ( l _ {1} \dots l _ {m} ) = 1 $$
is necessary and sufficient for the existence of a code with the given spectrum possessing the so-called self-synchronization property, meaning that an error in decoding is automatically localized with probability 1.
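A minimal Python sketch of this spectral test follows; the spectra and alphabet size are illustrative assumptions.

```python
# Test whether a self-synchronizing code with length spectrum
# {l_1, ..., l_m} over an n-letter channel alphabet exists:
# inequality (1) must hold and gcd(l_1, ..., l_m) must equal 1.
from functools import reduce
from math import gcd

def self_synchronizing_spectrum_exists(lengths, n):
    kraft_ok = sum(n ** -l for l in lengths) <= 1  # condition (1)
    gcd_ok = reduce(gcd, lengths) == 1             # gcd condition
    return kraft_ok and gcd_ok

print(self_synchronizing_spectrum_exists([2, 3, 3], n=2))  # True
print(self_synchronizing_spectrum_exists([2, 2, 4], n=2))  # False: gcd = 2
```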
From the abstract point of view, the qualitative measure of the complexity of decoding of greatest interest is finiteness of the decoding delay, meaning that the decoding can be carried out by a finite automaton. (A quantitative estimate of the complexity of a circuit realization by automata goes beyond the scope of coding theory.) All the so-called prefix codes (codes in which no word of $ V $ is an initial segment of another word of $ V $) have this property. It has been proved [2] that any code for which $ f $ is one-to-one is spectrally equivalent to some prefix code. Prefix codes are comparatively well understood, which explains the success of the results obtained for the case $ \tau ( a _ {1} ) = \dots = \tau ( a _ {n} ) $.
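The following Python sketch illustrates finite-delay decoding for a hypothetical binary prefix code: since no code word is an initial segment of another, the decoder can emit a letter the moment a code word is completed, whereas for non-prefix codes the decoder may need unbounded lookahead.

```python
# A minimal sketch of decoding with finite delay for a prefix code.
# The binary prefix code below is a hypothetical example.
V = {"e": "0", "t": "10", "a": "11"}
decode_map = {v: b for b, v in V.items()}

def decode(coded: str) -> str:
    out, buf = [], ""
    for symbol in coded:
        buf += symbol
        if buf in decode_map:            # a code word is complete:
            out.append(decode_map[buf])  # emit the letter immediately
            buf = ""                     # (bounded delay: a finite automaton)
    if buf:
        raise ValueError("message ends in the middle of a code word")
    return "".join(out)

print(decode("10011"))  # 'tea'
```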
In the general case, algorithmic solutions are known for the problem of recognizing whether a coding is one-to-one and for the problem of recognizing finiteness of the decoding delay. The one-to-one property of $ f $ can be regarded as a minimal level of noise stability of the code $ V $. There are also algorithmic solutions to the problem of calculating the actual correcting ability of an arbitrary code, both in the general case and under the supplementary requirement that the decoding delay be finite. The class of free (that is, uniquely decodable) codes has a highly complicated structure, but extremal requirements on the codes often lead to substantial restrictions. For example, it has been shown that for codes that are maximal with respect to inclusion (those for which inequality (1) becomes an equality; these are called complete, because only for them is the alphabet of the channel used fully, in the sense that every word of $ A ^ {*} $ is part of some encoded message), the finite-delay property holds if and only if the code is a prefix code.
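The article does not name a specific algorithm; one classical test for the one-to-one (unique decodability) property is the Sardinas–Patterson algorithm, sketched below with hypothetical example codes.

```python
# A sketch of the Sardinas-Patterson test for unique decodability
# (the one-to-one property of f). The example codes are hypothetical.
def uniquely_decodable(code):
    def remainders(u, w):
        # suffixes s with x = ys for x in u, y in w (or vice versa), x != y
        out = set()
        for x in u:
            for y in w:
                if x != y:
                    if x.startswith(y):
                        out.add(x[len(y):])
                    elif y.startswith(x):
                        out.add(y[len(x):])
        return out

    seen = set()
    s = remainders(code, code)          # dangling suffixes of order 1
    while s:
        if any(c in s for c in code):   # a code word reappears: ambiguity
            return False
        key = frozenset(s)
        if key in seen:                 # the suffix sets cycle: no conflict
            return True
        seen.add(key)
        s = remainders(s, code)         # dangling suffixes of the next order
    return True                         # no dangling suffixes remain

print(uniquely_decodable({"0", "01", "011"}))  # True: a suffix code
print(uniquely_decodable({"0", "01", "10"}))   # False: '010' parses two ways
```

Since every dangling suffix is a suffix of some code word, only finitely many suffix sets can occur, so the test always terminates.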
References
[1] C. Shannon, "A mathematical theory of communication", Bell System Techn. J., 27 (1948) pp. 379–423; 623–656
[2] B. McMillan, Kibernet. Sb., 3 (1961) pp. 88–92
[3] D.A. Huffman, Kibernet. Sb., 3 (1961) pp. 79–87
[4] M.P. Schützenberger, "On synchronizing prefix codes", Information and Control, 11 (1967) pp. 396–401
[5a] Al.A. Markov, "Non-recurrent coding", Problemy Kibernet., 8 (1962) pp. 169–186 (In Russian)
[5b] Al.A. Markov, "Non-uniform error-correcting codes", Problemy Kibernet., 12 (1964) pp. 137–153 (In Russian)
[5c] Al.A. Markov, Problemy Kibernet., 19 (1967) pp. 307–309
[5d] Al.A. Markov, "Foundation of a general theory of codes", Problemy Kibernet., 31 (1976) pp. 77–108 (In Russian)