# Hankel matrix

A Hankel matrix is a matrix whose entries are constant along each anti-diagonal, i.e. along every line parallel to the main anti-diagonal. Equivalently, $H = ( h _ { i , j} )$ is a Hankel matrix if and only if there exists a sequence $s _ { 1 } , s_{ 2} , \ldots$ such that $h_{ i , j } = s _ { i + j - 1 }$, $i , j = 1,2 , \ldots$. If the $s _ { k }$ are square matrices, then $H$ is referred to as a block Hankel matrix. Infinite Hankel matrices are associated with the representation of Hankel operators acting on the Hilbert space of square-summable complex sequences.
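In code, an $n \times n$ Hankel matrix is obtained by reading $2n-1$ leading terms of the sequence along the anti-diagonals; a minimal NumPy sketch (0-based indexing, so $h_{i,j} = s[i+j]$):

```python
import numpy as np

def hankel_from_sequence(s, n):
    """n-by-n Hankel matrix with H[i, j] = s[i + j] (the 0-based version
    of the text's h_{i,j} = s_{i+j-1})."""
    s = np.asarray(s, dtype=float)
    if len(s) < 2 * n - 1:
        raise ValueError("need at least 2n - 1 sequence terms")
    return np.array([[s[i + j] for j in range(n)] for i in range(n)])

H = hankel_from_sequence([1, 2, 3, 4, 5, 6, 7], 4)
# each anti-diagonal is constant: H[0, 3] == H[1, 2] == H[2, 1] == H[3, 0] == 4
```

SciPy users can obtain the same matrix with `scipy.linalg.hankel(s[:n], s[n-1:])`. Note that every scalar Hankel matrix is symmetric, since $h_{i,j}$ depends only on $i+j$.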

Hankel matrices are frequently encountered in applications where the close interplay between polynomial and matrix computations is exploited in order to devise very effective numerical solution algorithms. Famous special cases are the Hilbert matrix and Hilbert–Hankel operator, defined by $s _ { i + j - 1} = ( i + j - 1 ) ^ { - 1 }$, which play an important role in the study of spectral properties of integral operators of Carleman type (cf. also Carleman operator). For theoretical properties of Hankel matrices, see [a13]. For an overview of Hankel operators, see [a20]. For references to the varied and numerous relations among Hankel matrices and polynomial computations, see [a24], [a9], [a4].
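For instance, the finite sections of the Hilbert matrix are notoriously ill-conditioned Hankel matrices, which is easy to observe numerically (a NumPy sketch; the order-of-magnitude comment is an empirical observation, not a claim from the text):

```python
import numpy as np

n = 8
# Hilbert matrix: h_{i,j} = 1/(i + j - 1) in 1-based indices
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
print(np.linalg.cond(H))   # already of order 1e10 for n = 8
```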

To a Hankel operator $H = ( s _ { i + j - 1} )$ one naturally associates the function $r ( z ) = \sum _ { k = 1 } ^ { \infty } s _ { k } z ^ { - k }$, which is called its symbol. Kronecker's theorem [a9] says that $H$ has finite rank $n$ if and only if its symbol is a rational function, that is, $r ( z ) = p ( z ) / q ( z )$, where $p ( z )$ and $q ( z )$ are mutually prime polynomials. In this case $n$ is the number of poles of $r ( z )$ and, therefore, the degree of $q ( z )$. In addition, assuming that $H _ { k }$ is the $k \times k$ leading principal submatrix of $H$, i.e., the submatrix made up by the entries in the first $k$ rows and columns of $H$, then $H _ { n }$ is non-singular, whereas $H _ { j }$ is singular for any $j > n$.
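Kronecker's theorem can be checked numerically on a concrete rational symbol. Here $r(z) = 1/(2z-1) + 1/(3z-1)$ (a hypothetical example, not from the text) has two poles, so every sufficiently large leading principal submatrix of $H$ has rank $2$:

```python
import numpy as np

# s_k = 2^{-k} + 3^{-k} are the coefficients of r(z) = 1/(2z-1) + 1/(3z-1)
s = np.array([2.0**-k + 3.0**-k for k in range(1, 20)])

def Hk(k):
    # k-by-k leading principal submatrix of the Hankel operator
    return np.array([[s[i + j] for j in range(k)] for i in range(k)])

for k in (2, 3, 6):
    print(k, np.linalg.matrix_rank(Hk(k), tol=1e-10))
# the rank stalls at 2 = number of poles: H_2 is non-singular,
# while H_3, H_4, ... are singular
```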

Given a Hankel operator $H$ with symbol $r ( z )$, the problem of finding two polynomials $p ( z )$ and $q ( z )$ of degree at most $n - 1$ and $n$, respectively, such that

\begin{equation} \tag{a1} \frac { r ( z ^ { - 1 } ) } { z } - \frac { p ( z ) } { q ( z ) } = w _ { 0 } z ^ { 2 n } + w _ { 1 } z ^ { 2 n + 1 } +\dots , \end{equation}

is a particular instance of the Padé approximation problem. If $H _ { n }$ is non-singular, these polynomials are uniquely determined up to a suitable normalization, say $q ( 0 ) = 1$, and their computation essentially amounts to solving a linear system with coefficient matrix $H _ { n }$. See [a1], [a3] and [a10] for a survey of both the theory and applications of general Padé approximation problems. In [a19] this theory is first generalized and then applied to the inversion of (block) Hankel matrices.
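When $H_n$ is non-singular, the monic denominator $q(z)$ can indeed be computed from a single Hankel system: its coefficients encode the linear recurrence satisfied by the $s_k$. A sketch for the hypothetical symbol $r(z) = 1/(2z-1) + 1/(3z-1)$, whose poles $1/2$ and $1/3$ give $q(z) = (z-1/2)(z-1/3)$:

```python
import numpy as np

s = np.array([2.0**-k + 3.0**-k for k in range(1, 6)])   # s_1, ..., s_5
n = 2

# Monic q(z) = z^2 + a_1 z + a_0 makes s satisfy the recurrence
# s_{k+2} + a_1 s_{k+1} + a_0 s_k = 0, i.e. the Hankel system H_n a = -rhs.
H = np.array([[s[i + j] for j in range(n)] for i in range(n)])
a = np.linalg.solve(H, -s[n:2 * n])
print(a)   # ≈ [1/6, -5/6], i.e. q(z) = z^2 - (5/6) z + 1/6 = (z - 1/2)(z - 1/3)
```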

Other variants of (a1) can also be considered, generally leading to different computational problems. From a system-theoretic point of view, the possibility of recovering a rational function $p ( z ) / q ( z )$, where $q ( z )$ is monic, from its Maclaurin expansion at infinity has been extensively studied as the partial realization problem of system theory (see, for instance, [a11]). It is intimately connected with such topics as the Berlekamp–Massey algorithm of coding theory and Kalman filtering. For applications of the theory of Hankel matrices to engineering problems of system and control theory, see [a14] and [a7].
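The partial realization flavour of the same computation: given finitely many terms of a sequence, a linear recurrence (equivalently, a monic $q(z)$) is recovered by solving a Hankel system, much in the spirit of the Berlekamp–Massey algorithm. A sketch on the Fibonacci numbers:

```python
import numpy as np

f = [1.0, 1, 2, 3, 5, 8, 13, 21]    # Fibonacci numbers
n = 2                               # trial degree of the realization

H = np.array([[f[i + j] for j in range(n)] for i in range(n)])
a = np.linalg.solve(H, -np.array(f[n:2 * n]))
print(a)   # ≈ [-1, -1]: q(z) = z^2 - z - 1, i.e. f_{k+2} = f_{k+1} + f_k
```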

The connection between Hankel matrices and orthogonal polynomials arises in the solution of moment problems (cf. also Moment problem). Given a positive Borel measure $ \eta $ on $( - 1,1 )$, the Hankel operator $H = ( s _ { i + j - 1} )$ defined by $s _ { i + j - 1 } = \int _ { - 1 } ^ { 1 } z ^ { i + j - 2 } d \eta ( z )$, $i , j = 1,2 , \ldots$, is positive definite and, moreover, the last columns of $H _ { k } ^{- 1}$, $k = 1,2 , \dots$, generate a sequence of orthogonal polynomials linked by a three-term recurrence relation. The converse is known as the Hamburger moment problem. The underlying theory is very rich and can be suitably extended both to finite Hankel matrices, by considering discrete measures, and to indefinite Hankel matrices, by means of formal orthogonal polynomials. A survey of results on Hankel matrices generated by positive measures can be found in [a22]. See [a8] and [a12] for an introduction to the theory of formal orthogonal polynomials in the context of the algorithms of numerical analysis, including Lanczos' tridiagonalization process, rational interpolation schemes, the Euclidean algorithm, and inverse spectral methods for Jacobi matrices.
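A small check of the orthogonal-polynomial statement, taking $d\eta(z) = dz$ on $(-1,1)$ so that the entries of $H$ are the moments of Lebesgue measure; the last column of $H_3^{-1}$ should then give a multiple of the Legendre polynomial $P_2$ (NumPy assumed):

```python
import numpy as np

# moments m_k = \int_{-1}^{1} z^k dz: 2/(k+1) for even k, 0 for odd k
m = np.array([2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(6)])

k = 3
H = np.array([[m[i + j] for j in range(k)] for i in range(k)])
c = np.linalg.solve(H, np.eye(k)[:, -1])   # last column of H_k^{-1}
print(c)   # ≈ [-15/8, 0, 45/8]: the polynomial (15/8)(3 z^2 - 1), a multiple of P_2
```

Orthogonality of the resulting polynomial to $1, z, \ldots, z^{k-2}$ holds by construction: $\int z^m p(z)\, d\eta = (H c)_{m+1}$, which vanishes for $m + 1 < k$ since $Hc$ is the last unit vector.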

Since orthogonal polynomials on the real axis generate Sturm sequences, it follows that the use of quadratic forms associated with Hankel matrices provides a means for solving real root counting problems and real root localization problems; see [a18] and [a2]. Moreover, certain properties of sequences of Hankel determinants give the theoretical bases on which both Koenig's method and the Rutishauser $qd$ algorithm, for the approximation of zeros and poles of meromorphic functions, rely; see [a26].
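A classical instance of such a quadratic form is Hermite's real-root-counting rule: for a polynomial $f$ with roots $r_i$, the signature of the Hankel matrix of power sums $s_k = \sum_i r_i^k$ equals the number of distinct real roots. A numerical sketch (the example polynomial is my own):

```python
import numpy as np

coeffs = [1, -3, 3, -3, 2]                  # f(x) = (x - 1)(x - 2)(x^2 + 1)
roots = np.roots(coeffs)
n = len(roots)

# Hankel matrix of Newton power sums s_k = sum_i r_i^k, k = 0, ..., 2n - 2
s = [np.sum(roots**k).real for k in range(2 * n - 1)]
H = np.array([[s[i + j] for j in range(n)] for i in range(n)])

eig = np.linalg.eigvalsh(H)
signature = int(np.sum(eig > 1e-9) - np.sum(eig < -1e-9))
print(signature)   # 2: exactly two distinct real roots (1 and 2)
```

The rank of $H$ counts the distinct roots altogether (here $4$), so rank and signature together separate real from complex roots.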

The problem of inverting a finite non-singular Hankel matrix $H _ { n }$ has been extensively studied in the numerical-methods literature, and the connections described above between the theory of Hankel matrices and many other fields have been exploited to derive many different Hankel system solvers.

As mentioned above (Kronecker's theorem), if the Hankel operator $H$ has a rational symbol $r ( z ) = p ( z ) / q ( z )$ with $p ( z )$ and $q ( z )$ mutually prime and $q ( z )$ of degree $n$, then $H _ { n }$ is invertible. On the other hand, if $H _ { n }$ is an invertible finite Hankel matrix of order $n$ determined by its entries $s _ { i +j-1 } $, $1 \leq i , j \leq n$, then the rational function $r ( z ) = \sum _ { i = 1 } ^ { 2 n - 1 } s _ { i } z ^ { - i }$ can be uniquely extended to a power series $g ( z ) = r ( z ) + \sum _ { i = 2 n } ^ { \infty } s _ { i } z ^ { - i }$ in such a way that $g ( z )$ is the expansion at infinity of $p ( z ) / q ( z )$, where $p ( z )$ and $q ( z )$ are mutually prime with $q ( z )$ monic of degree $n$. In this case the inverse of $H _ { n }$ is given by $H _ { n } ^ { - 1 } = B ( q , t )$, where $t ( z )$ is a polynomial of degree less than $n$ that satisfies the Bezout equation $t ( z ) p ( z ) + q ( z ) v ( z ) = 1$, and $B ( q , t ) = ( b _ { i , j} )$ denotes the Bezout matrix, whose entries are defined by

\begin{equation*} \frac { q ( z ) t ( w ) - q ( w ) t ( z ) } { z - w } = \sum _ { i , j = 1 } ^ { n } b _ { i , j } z ^ { i - 1 } w ^ { j - 1 }. \end{equation*}
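The inversion formula can be verified on a small case. For the symbol $1/(z^2 - 3z + 2)$ (a hypothetical example) one has $p = 1$, so the Bezout equation forces $t = 1$, and $H_2^{-1}$ should equal $B(q, t)$; the helper below builds $B(q, t)$ directly from the generating identity above:

```python
import numpy as np

def bezout_matrix(q, t, n):
    """B(q, t) from (q(z)t(w) - q(w)t(z)) / (z - w) = sum b_{ij} z^{i-1} w^{j-1};
    q, t are coefficient lists with q[a] the coefficient of z^a."""
    q = np.pad(np.asarray(q, float), (0, n + 1 - len(q)))
    t = np.pad(np.asarray(t, float), (0, n + 1 - len(t)))
    M = np.outer(q, t) - np.outer(t, q)      # numerator coefficients M[a, b] of z^a w^b
    B = np.zeros((n, n))
    for d in range(n):                       # divide by (z - w) column by column:
        for c in range(n):                   # b_{c,d} = M[c+1, d] + b_{c+1, d-1}
            prev = B[c + 1, d - 1] if (c + 1 < n and d >= 1) else 0.0
            B[c, d] = M[c + 1, d] + prev
    return B

# H_2 for the expansion of 1/(z^2 - 3z + 2) at infinity: s = 0, 1, 3, 7, ...
s = [0.0, 1.0, 3.0]
H2 = np.array([[s[0], s[1]], [s[1], s[2]]])
B = bezout_matrix([2, -3, 1], [1], 2)        # q(z) = z^2 - 3z + 2, t(z) = 1
print(np.allclose(np.linalg.inv(H2), B))     # True
```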

Since the Bezout equation can be solved by means of the Euclidean algorithm, this brings up the possibility of a recursive approach to the inversion of Hankel matrices. Analogously, in a matrix setting this recursive structure is fully exploited by the observation that Hankel matrices have low displacement rank in a sense first defined in [a15]. Based on these properties, many fast algorithms for solving Hankel linear systems with cost $O ( n ^ { 2 } )$ have been developed. Superfast $O (n \operatorname{log} ^ { 2 } n )$ algorithms have also appeared; see [a25], [a16], [a4], [a24], [a5] and [a17] for extensive bibliographies on these methods. Generalizations to the case where $H _ { n }$ has entries over an integral domain are discussed in [a23], where the subresultant theory [a6] is described in terms of factorization properties of Hankel matrices. This becomes significant for the efficient solution of many problems in real algebraic geometry involving polynomials and rational functions [a21].
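The low displacement rank mentioned above is easy to see concretely: with the lower shift matrix $Z$, the displacement $ZH - HZ^T$ of any Hankel matrix is supported on its first row and column, hence has rank at most $2$ (a NumPy sketch; this particular displacement operator is one common choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
s = rng.standard_normal(2 * n - 1)
H = np.array([[s[i + j] for j in range(n)] for i in range(n)])

Z = np.diag(np.ones(n - 1), -1)      # lower shift: (Zx)_i = x_{i-1}
D = Z @ H - H @ Z.T                  # nonzero only in the first row and column
print(np.linalg.matrix_rank(D))      # 2
```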

#### References

[a1] | G.A. Baker, P.R. Graves-Morris, "Padé approximants" , Addison-Wesley (1981) MR0661068 MR0635620 MR0635619 Zbl 0603.30045 Zbl 0603.30044 Zbl 0468.30033 Zbl 0468.30032 |

[a2] | S. Barnett, "Polynomials and linear control systems" , M. Dekker (1983) MR0704016 Zbl 0528.93003 |

[a3] | C. Brezinski, "Padé-type approximation and general orthogonal polynomials" , Birkhäuser (1980) MR0561106 Zbl 0418.41012 |

[a4] | A. Bultheel, M. van Barel, "Linear algebra: Rational approximation and orthogonal polynomials" , Studies in Computational Math. , North-Holland (1997) MR1602017 Zbl 0890.65024 |

[a5] | S. Cabay, R. Meleshko, "A weakly stable algorithm for Padé approximation and the inversion of Hankel matrices" SIAM J. Matrix Anal. Appl. , 14 (1993) pp. 735–765 |

[a6] | G.E. Collins, "Sub-resultants and reduced polynomial remainder sequences" J. Assoc. Comput. Mach. , 14 (1967) pp. 128–142 |

[a7] | "Linear algebra in signals, systems and control" B.N. Datta (ed.) C.R. Johnson (ed.) M.A. Kaashoek (ed.) R. Plemmons (ed.) E.D. Sontag (ed.) , SIAM (1988) MR0969778 Zbl 0661.00019 |

[a8] | A. Draux, "Polynômes orthogonaux: Formels-applications" , Lecture Notes in Mathematics , 974 , Springer (1983) MR690769 Zbl 0504.42001 |

[a9] | P.A. Fuhrmann, "A polynomial approach to linear algebra" , Springer (1996) MR1393938 Zbl 0852.15001 |

[a10] | W.B. Gragg, "The Padé table and its relation to certain algorithms of numerical analysis" SIAM Review , 14 (1972) pp. 1–61 MR0305563 Zbl 0238.30008 |

[a11] | W.B. Gragg, A. Lindquist, "On the partial realization problem" Linear Alg. & Its Appl. , 50 (1983) pp. 277–319 MR0699565 Zbl 0519.93024 |

[a12] | M.H. Gutknecht, "A completed theory of unsymmetric Lanczos process and related algorithms" SIAM J. Matrix Anal. Appl. , 13 (1992) pp. 594–639 MR1152770 Zbl 0760.65039 |

[a13] | I.S. Iohvidov, "Hankel and Toeplitz matrices and forms" , Birkhäuser (1982) MR0677503 Zbl 0493.15018 |

[a14] | T. Kailath, "Linear systems" , Prentice-Hall (1980) MR0569473 MR0584089 Zbl 0454.93001 |

[a15] | T. Kailath, S.Y. Kung, M. Morf, "Displacement ranks of matrices and linear equations" J. Math. Anal. Appl. , 68 (1979) pp. 395–407 MR0533501 Zbl 0433.15001 |

[a16] | "Fast reliable algorithms for matrices with structure" T. Kailath (ed.) A.H. Sayed (ed.) , SIAM (1999) MR1715813 Zbl 0931.65018 |

[a17] | M. van Barel, P. Kravanja, "A stabilized superfast solver for indefinite Hankel systems" Linear Alg. & Its Appl. , 284 (1998) pp. 335–355 MR1655143 Zbl 0938.65059 |

[a18] | M.A. Naimark, M.G. Krein, "The method of symmetric and Hermitian forms in the theory of the separation of the roots of algebraic equations" Linear and Multilinear Algebra , 10 (1981) pp. 265–308 MR0638124 Zbl 0584.12018 |

[a19] | G. Labahn, D.K. Choi, S. Cabay, "The inverses of block Hankel and block Toeplitz matrices" SIAM J. Comput. , 19 (1990) pp. 98–123 MR1035200 Zbl 0716.15003 |

[a20] | S.C. Power, "Hankel operators on Hilbert space" , Research Notes in Math. , Pitman (1982) MR666699 |

[a21] | T. Lickteig, M.-F. Roy, "Cauchy index computation" Calcolo , 33 (1996) pp. 337–352 MR1627623 Zbl 0904.65024 |

[a22] | H. Widom, "Hankel matrices" Trans. Amer. Math. Soc. , 127 (1966) pp. 179–203 MR0189237 MR0187099 Zbl 0148.12303 |

[a23] | D.A. Bini, L. Gemignani, "Fast fraction free triangularization of Bezoutians with applications to sub-resultant chain computation" Linear Alg. & Its Appl. , 284 (1998) pp. 19–39 MR1655129 Zbl 0937.65031 |

[a24] | D.A. Bini, V. Pan, "Matrix and polynomial computations 1: Fundamental algorithms" , Birkhäuser (1994) |

[a25] | G. Heinig, K. Rost, "Algebraic methods for Toeplitz-like matrices and operators" , Akad. Berlin & Birkhäuser (1984) MR0782105 MR0756628 Zbl 0549.15013 |

[a26] | A.S. Householder, "The numerical treatment of a single nonlinear equation" , McGraw-Hill (1970) MR0388759 Zbl 0242.65047 |

**How to Cite This Entry:**

Hankel matrix. *Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Hankel_matrix&oldid=49954