Berlekamp-Massey algorithm


A procedure for solving an equation of the form

\begin{equation*} \sigma ( z ) S ( z ) \equiv \omega ( z ) ( \operatorname { mod } z ^ { 2 t } ) \end{equation*}

for the polynomials $\sigma ( z )$, $\omega ( z )$, $\operatorname { deg } \omega ( z ) < \operatorname { deg } \sigma ( z ) \leq t$, given the polynomial $S ( z )$, $\operatorname { deg } S ( z ) < 2 t$. Originally intended for the decoding of certain cyclic error-correcting codes (cf. also Error-correcting code), it has since found wide application in unrelated areas.

The decoding of cyclic error-correcting codes defined over a finite field, such as Bose–Chaudhuri–Hocquenghem and Reed–Solomon codes, can be achieved by solving the above equation, referred to as the key equation [a1] in this context, where the minimum distance of the code is at least $2 t + 1$. For cyclic codes of length $n$ over the finite field with $q$ elements, $\mathbf{F} _ { q }$, codewords are viewed as $n$-tuples over $\mathbf{F} _ { q }$ and are also naturally identified with polynomials over $\mathbf{F} _ { q }$ of degree at most $n - 1$. Let $\alpha$ be a primitive $n$-th root of unity in an extension field of $\mathbf{F} _ { q }$. The coordinate positions of the code are identified with the distinct powers of $\alpha$. Cyclic codes with designed distance $2 t + 1$ consist of all codeword polynomials having a common set of $2 t$ consecutive powers of $\alpha$ as zeros, the zero set of the code. The polynomial $S ( z )$ is the syndrome polynomial, whose $i$th coefficient $S_i \in \mathbf{F} _ { q }$, $i = 0 , \ldots , 2 t - 1$, is the evaluation of the received word at the $i$th element of the zero set of the code. The received word is the sum, over $\mathbf{F} _ { q }$, of the transmitted codeword and an error word whose non-zero coordinate positions correspond to error locations. It is assumed that at most $t$ coordinate positions are in error. The decoding process retrieves the transmitted codeword. The unique solution to the key equation gives $\sigma ( z )$, the error locator polynomial, and $\omega ( z )$, the error evaluator polynomial. The zeros of the error locator polynomial yield the coordinate positions at which errors occur, and the error value at such an error position $\beta$ is given by $\omega ( \beta ) / \sigma ^ { \prime } ( \beta )$, where $\sigma ^ { \prime }$ denotes the formal derivative.
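
As an illustration of how the syndrome polynomial arises, the following sketch (in Python, over a prime field $\mathbf{F} _ { p }$ so that field arithmetic is plain reduction modulo $p$; the particular values of $p$, $n$, $t$, $\alpha$ and the choice of zero set $\alpha , \ldots , \alpha ^ { 2 t }$ are assumptions made only for the example) evaluates a received word at the zero set of the code.

# Illustrative parameters only: a length-6 Reed-Solomon-type code over F_7
# with designed distance 2t + 1 = 5 and zero set alpha^1, ..., alpha^4.
p = 7                 # prime field size; arithmetic is reduction mod p
alpha = 3             # a primitive 6th root of unity in F_7 (3^6 = 1 mod 7)
n, t = 6, 2           # code length and number of correctable errors

def poly_eval(coeffs, x, p):
    # Evaluate a polynomial (low-degree-first coefficient list) at x over F_p.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def syndromes(received, p, alpha, t):
    # S_i = received(alpha^(i+1)), i = 0, ..., 2t - 1: the evaluations of the
    # received word at the elements of the zero set of the code.
    return [poly_eval(received, pow(alpha, i + 1, p), p) for i in range(2 * t)]

Since every codeword evaluates to zero on the zero set, the syndromes depend only on the error word.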

E.R. Berlekamp [a1] gave a computationally efficient iterative algorithm to solve the key equation for the two polynomials. J.L. Massey [a4] showed that this algorithm gives a general solution to the problem of synthesizing the shortest linear feedback shift register that generates the syndrome sequence. At each stage the algorithm incrementally adjusts the length of the shift register and the feedback multiplier taps so that the register generates the syndrome sequence up to that point, and it terminates when the whole sequence is generated. In this formulation, the set of shift register feedback tap values yields the coefficients of the error locator polynomial. Berlekamp's iterative algorithm, together with Massey's feedback shift register synthesis interpretation, is known as the Berlekamp–Massey algorithm.
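
The shift register synthesis formulation translates directly into a short program. The following Python sketch keeps to a prime field so that division is modular inversion; the function name, the return convention and the field are assumptions of the sketch, which follows the description above rather than any particular published listing.

def berlekamp_massey(S, p):
    # Shortest linear feedback shift register generating the sequence S over
    # the prime field F_p.  Returns (C, L) with C[0] = 1 and
    #   sum_{i=0}^{L} C[i] * S[j - i] == 0 (mod p)  for L <= j < len(S);
    # C corresponds to the error locator polynomial sigma(z).
    C, B = [1], [1]      # current and previous connection polynomials
    L, m, b = 0, 1, 1    # register length, steps since last length change, last discrepancy
    for j, Sj in enumerate(S):
        # discrepancy between S[j] and what the current register predicts
        d = Sj
        for i in range(1, L + 1):
            d = (d + C[i] * S[j - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p     # d / b in F_p
        old_C = list(C)
        if len(C) < len(B) + m:             # make room for z^m * B(z)
            C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):          # C(z) <- C(z) - (d/b) z^m B(z)
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= j:                      # register too short: lengthen it
            L, B, b, m = j + 1 - L, old_C, d, 1
        else:
            m += 1
    return C, L

The connection polynomial $C ( z )$ returned here plays the role of $\sigma ( z )$; the error evaluator can then be recovered by reducing $\sigma ( z ) S ( z )$ modulo $z ^ { 2 t }$.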

The solution to the key equation, and hence the Berlekamp–Massey algorithm, has connections to several other algorithms, most notably the extended Euclidean algorithm [a5], [a3] and continued fractions [a6] (cf. also Continued fraction). The extended Euclidean algorithm iteratively determines the greatest common divisor of two elements in a Euclidean domain, in this case $\mathbf F _ { q } [ z ]$. Specifically, for $a ( z ) , b ( z ) \in \mathbf{F} _ { q } [ z ]$, set $r_{ - 1} ( z ) = a ( z )$ and $r_0 ( z ) = b ( z )$, and define a sequence of quotient and remainder polynomials by

\begin{equation*} r _ { i - 2} ( z ) = q _ { i } ( z ) r _ { i - 1} ( z ) + r _ { i } ( z ) , \quad i = 1,2 ,\dots . \end{equation*}

The degrees of the remainder polynomials are strictly decreasing, and the last non-zero remainder polynomial is the greatest common divisor of $a ( z )$, $b ( z )$. The quotient polynomials $q_i( z )$ can be used to define a sequence of subsidiary polynomials $s _ { i } ( z )$ and $t _ { i } ( z )$ with strictly increasing degrees, namely $s _ { i } ( z ) = s _ { i - 2 } ( z ) - q _ { i } ( z ) s _ { i - 1 } ( z )$ and $t _ { i } ( z ) = t _ { i - 2 } ( z ) - q _ { i } ( z ) t _ { i - 1 } ( z )$ with $s _ { - 1 } = t _ { 0 } = 1$ and $s _ { 0 } = t _ { - 1 } = 0$, such that at each stage of the algorithm

\begin{equation*} s _ { i } ( z ) a ( z ) + t _ { i } ( z ) b ( z ) = r _ { i } ( z ), \end{equation*}

so that $s _ { i } ( z ) a ( z ) \equiv r _ { i } ( z ) ( \operatorname { mod } b ( z ) )$ for all $i$. Applied to the case $a ( z ) = S ( z )$ and $b ( z ) = z ^ { 2 t }$, since the $r_i$ have strictly decreasing degrees and the $s_i$ strictly increasing degrees, it can be shown that the algorithm reaches a unique stage $i$ at which $t \geq \operatorname { deg } s _ { i } > \operatorname { deg } r _ { i }$; taking $\sigma ( z ) = s _ { i } ( z )$ and $\omega ( z ) = r _ { i } ( z )$ then yields a solution to the key equation.
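
In the same illustrative setting, the Euclidean route to the key equation can be sketched as follows; the polynomial helpers, the low-degree-first coefficient convention and the stopping test are choices of the sketch, not prescriptions of [a5].

def poly_divmod(num, den, p):
    # Quotient and remainder of polynomials over F_p (low-degree-first lists).
    num, den = list(num), list(den)
    q = [0] * max(1, len(num) - len(den) + 1)
    inv = pow(den[-1], p - 2, p)            # inverse of the leading coefficient
    for k in range(len(num) - len(den), -1, -1):
        q[k] = num[k + len(den) - 1] * inv % p
        for i, dcoef in enumerate(den):
            num[k + i] = (num[k + i] - q[k] * dcoef) % p
    while len(num) > 1 and num[-1] == 0:    # strip leading zeros of the remainder
        num.pop()
    return q, num

def poly_sub_mul(f, q, g, p):
    # f(z) - q(z) * g(z) over F_p.
    out = list(f) + [0] * max(0, len(q) + len(g) - 1 - len(f))
    for i, qi in enumerate(q):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] - qi * gj) % p
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

def key_equation_euclid(S, t, p):
    # Run the extended Euclidean algorithm on a(z) = S(z), b(z) = z^{2t},
    # tracking only the multipliers s_i of S(z), and stop at the first stage
    # with deg r_i < t.  Then s_i(z) S(z) = r_i(z) (mod z^{2t}), so the pair
    # (s_i, r_i) gives (sigma, omega) up to a common scalar factor.
    r_prev, r_cur = list(S), [0] * (2 * t) + [1]   # r_{-1} = S(z), r_0 = z^{2t}
    s_prev, s_cur = [1], [0]                       # s_{-1} = 1,    s_0 = 0
    while len(r_cur) - 1 >= t:                     # while deg r_cur >= t
        quo, r_new = poly_divmod(r_prev, r_cur, p)
        s_new = poly_sub_mul(s_prev, quo, s_cur, p)
        r_prev, r_cur = r_cur, r_new
        s_prev, s_cur = s_cur, s_new
    return s_cur, r_cur                            # (sigma, omega), unnormalized

The iteration starts from $r _ { - 1 } ( z ) = S ( z )$ and $r _ { 0 } ( z ) = z ^ { 2 t }$ exactly as above; the first division step merely interchanges the two, after which the remainder degrees decrease strictly.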

Recasting the key equation as

\begin{equation*} S ( z ) \equiv \frac { \omega ( z ) } { \sigma ( z ) } ( \operatorname { mod } z ^ { 2 t } ), \end{equation*}

with $\operatorname { deg } \omega ( z ) < \operatorname { deg } \sigma ( z ) \leq t$, the problem is to determine the polynomial $\sigma ( z )$ of smallest degree, at most $t$, such that the first $2 t$ terms of the expansion of the fraction about the origin yield the sequence of syndromes. Such a rational approximation of the syndrome sequence leads naturally to an equivalent continued fraction interpretation of the Berlekamp–Massey algorithm [a6].
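
Whichever formulation is used, a candidate pair can be checked directly by expanding $\omega ( z ) / \sigma ( z )$ about the origin and comparing the first $2 t$ coefficients with $S _ { 0 } , \ldots , S _ { 2 t - 1 }$. A sketch of that check, consistent with the helpers above and requiring only $\sigma ( 0 ) \neq 0$ (again an illustration, not part of the original article):

def series_expand(omega, sigma, t, p):
    # First 2t coefficients of omega(z)/sigma(z) expanded about z = 0 over F_p.
    # For a correct solution of the key equation these are the syndromes.
    inv0 = pow(sigma[0], p - 2, p)          # 1 / sigma(0) in F_p
    out = []
    for j in range(2 * t):
        # coefficient of z^j in omega(z), minus the part of sigma(z) times the
        # expansion that is already determined
        c = omega[j] if j < len(omega) else 0
        for i in range(1, min(j, len(sigma) - 1) + 1):
            c -= sigma[i] * out[j - i]
        out.append(c * inv0 % p)
    return out

Since a common scalar factor of $\sigma ( z )$ and $\omega ( z )$ cancels in the ratio, the unnormalized pair produced by the Euclidean sketch yields the same expansion as the normalized one.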

The Berlekamp–Massey algorithm has found numerous applications outside of the coding context. These include fast algorithms in numerical analysis, such as the Lanczos recursion and the Levinson–Schur algorithms for Toeplitz matrices, as well as minimal realization problems in system theory [a2].

References

[a1] E.R. Berlekamp, "Algebraic coding theory" , McGraw-Hill (1968)
[a2] T. Kailath, "Encounters with the Berlekamp–Massey algorithm" R.E. Blahut (ed.) D.J. Costello Jr. (ed.) U. Maurer (ed.) T. Mittelholzer (ed.) , Communications and Cryptography, Two Sides of One Tapestry , Kluwer Acad. Publ. (1994)
[a3] R.J. McEliece, "The theory of information and coding" , Encycl. Math. Appl. , 3 , Addison-Wesley (1977)
[a4] J.L. Massey, "Shift register synthesis and BCH decoding" IEEE Trans. Inform. Th. , IT-15 (1969) pp. 122–127
[a5] Y. Sugiyama, M. Kasahara, S. Hirasawa, T. Namekawa, "A method for solving key equation for decoding Goppa codes" Inform. Control , 27 (1975) pp. 87–99
[a6] L.R. Welch, R.A. Scholtz, "Continued fractions and Berlekamp's algorithm" IEEE Trans. Inform. Th. , IT-25 (1979) pp. 19–27
This article was adapted from an original article by Ian F. Blake (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article