# Hankel matrix

A matrix whose entries are constant along each anti-diagonal, i.e. along every parallel to the main anti-diagonal. Equivalently, $H = ( h _ { i , j} )$ is a Hankel matrix if and only if there exists a sequence $s _ { 1 } , s_{ 2} , \ldots$ such that $h_{ i , j } = s _ { i + j - 1 }$, $i , j = 1,2 , \ldots$. If the $s _ { k }$ are square matrices, then $H$ is referred to as a block Hankel matrix. Infinite Hankel matrices are associated with the representation of Hankel operators acting on the Hilbert space of square-summable complex sequences.
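As a concrete sketch of the defining property (assuming NumPy and SciPy are available), `scipy.linalg.hankel` builds such a matrix from a given sequence:

```python
import numpy as np
from scipy.linalg import hankel

s = np.array([1, 2, 3, 4, 5])   # the sequence s_1, s_2, ...
H = hankel(s[:3], s[2:])        # 3x3 Hankel matrix: first column s_1..s_3, last row s_3..s_5
# Every anti-diagonal is constant: h_{i,j} = s_{i+j-1} (1-based indices)
assert all(H[i, j] == s[i + j] for i in range(3) for j in range(3))
```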

Hankel matrices are frequently encountered in applications where the close interplay between polynomial and matrix computations is exploited in order to devise very effective numerical solution algorithms. Famous special cases are the Hilbert matrix and Hilbert–Hankel operator, defined by $s _ { i + j - 1} = ( i + j - 1 ) ^ { - 1 }$, which play an important role in the study of spectral properties of integral operators of Carleman type (cf. also Carleman operator). For theoretical properties of Hankel matrices, see [a13]. For an overview of Hankel operators, see [a20]. For references to the varied and numerous relations among Hankel matrices and polynomial computations, see [a24], [a9], [a4].

To a Hankel operator $H = ( s _ { i + j - 1} )$ one naturally associates the function $r ( z ) = \sum _ { k = 1 } ^ { \infty } s _ { k } z ^ { - k }$, which is called its symbol. Kronecker's theorem [a9] says that $H$ has finite rank $n$ if and only if its symbol is a rational function, that is, $r ( z ) = p ( z ) / q ( z )$, where $p ( z )$ and $q ( z )$ are mutually prime polynomials. In this case $n$ is the number of poles of $r ( z )$ and, therefore, the degree of $q ( z )$. In addition, assuming that $H _ { k }$ is the $k \times k$ leading principal submatrix of $H$, i.e., the submatrix formed by the entries in the first $k$ rows and columns of $H$, then $H _ { n }$ is non-singular, whereas $H _ { j }$ is singular for any $j > n$.
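Kronecker's theorem can be illustrated numerically; the sketch below (assuming SciPy) uses the rational symbol $r(z) = 1/(z-2)$, whose expansion at infinity gives $s_k = 2^{k-1}$, so every finite section of $H$ has rank $n = 1$:

```python
import numpy as np
from scipy.linalg import hankel

# Symbol r(z) = 1/(z - 2) has a single pole, so Kronecker's theorem
# predicts rank 1; its expansion at infinity gives s_k = 2^{k-1}.
s = 2.0 ** np.arange(9)                        # s_1, ..., s_9
H5 = hankel(s[:5], s[4:])                      # 5x5 leading section of H
assert np.linalg.matrix_rank(H5) == 1          # rank n = 1 = number of poles
assert abs(np.linalg.det(H5[:2, :2])) < 1e-9   # H_2 is already singular
```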

Given a Hankel operator $H$ with symbol $r ( z )$, the problem of finding two polynomials $p ( z )$ and $q ( z )$ of degree at most $n - 1$ and $n$, respectively, such that

\begin{equation} \tag{a1} \frac { r ( z ^ { - 1 } ) } { z } - \frac { p ( z ) } { q ( z ) } = w _ { 0 } z ^ { 2 n } + w _ { 1 } z ^ { 2 n + 1 } +\dots , \end{equation}

is a particular instance of the Padé approximation problem. If $H _ { n }$ is non-singular, these polynomials are uniquely determined up to a suitable normalization, say $q ( 0 ) = 1$, and their computation essentially amounts to solving a linear system with coefficient matrix $H _ { n }$. See [a1], [a3] and [a10] for a survey of both the theory and applications of general Padé approximation problems. In [a19] this theory is first generalized and then applied to the inversion of (block) Hankel matrices.
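A hedged sketch of this computation (using the monic normalization for $q(z)$ rather than $q(0)=1$, which leads to the same coefficient matrix $H_n$): the denominator coefficients solve a linear system with matrix $H_n$, and the numerator follows from the first $n$ terms of the expansion. The example data, $s_k = 2^{k-1}-1$, are the moments of $1/((z-1)(z-2))$:

```python
import numpy as np
from scipy.linalg import hankel

# Moments of the rational symbol 1/((z-1)(z-2)): s_k = 2^{k-1} - 1.
n = 2
s = np.array([2.0 ** k - 1 for k in range(2 * n)])   # s_1, ..., s_{2n}
Hn = hankel(s[:n], s[n - 1:2 * n - 1])               # H_n, entries s_{i+j-1}
# Monic normalization q(z) = z^n + q_{n-1} z^{n-1} + ... + q_0 gives
#   H_n [q_0, ..., q_{n-1}]^T = -[s_{n+1}, ..., s_{2n}]^T
q_low = np.linalg.solve(Hn, -s[n:2 * n])
q = np.append(q_low, 1.0)                            # ascending coefficients of q
# Numerator p of degree < n: p_m = sum_{j > m} q_j s_{j - m}
p = np.array([sum(q[j] * s[j - m - 1] for j in range(m + 1, n + 1))
              for m in range(n)])
# Here q(z) = z^2 - 3z + 2 and p(z) = 1 are recovered.
```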

Other variants of (a1) can also be considered, generally leading to different computational problems. From a system-theoretic point of view, the possibility of recovering a rational function $p ( z ) / q ( z )$, where $q ( z )$ is monic, from its Maclaurin expansion at infinity has been extensively studied as the partial realization problem of system theory (see, for instance, [a11]). It is intimately connected to such topics as the Berlekamp–Massey algorithm in the context of coding theory and Kalman filtering. For applications of the theory of Hankel matrices to engineering problems of system and control theory, see [a14] and [a7].

The connection between Hankel matrices and orthogonal polynomials arises in the solution of moment problems (cf. also Moment problem). Given a positive Borel measure $\eta$ on $( - 1,1 )$, then the Hankel operator $H = ( s _ { i + j - 1} )$ defined by $s _ { i + j - 1 } = \int _ { - 1 } ^ { 1 } z ^ { i + j - 2 } d \eta ( z )$, $i , j = 1,2 , \ldots$, is positive definite and, moreover, the last columns of $H _ { k } ^{- 1}$, $k = 1,2 , \dots$, generate a sequence of orthogonal polynomials linked by a three-term recurrence relation. The converse is known as the Hamburger moment problem (cf. also Moment problem). The underlying theory is very rich and can be suitably extended to both finite Hankel matrices, by considering discrete measures, and to indefinite Hankel matrices, by means of formal orthogonal polynomials. A survey of results on Hankel matrices generated by positive measures can be found in [a22]. See [a8] and [a12] for an introduction to the theory of formal orthogonal polynomials in the context of the algorithms of numerical analysis, including Lanczos' tridiagonalization process, rational interpolation schemes, the Euclidean algorithm, and inverse spectral methods for Jacobi matrices.
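This connection can be checked on a concrete measure; the sketch below (assuming SciPy) takes $\eta$ to be the Lebesgue measure on $(-1,1)$, for which the orthogonal polynomials are the Legendre polynomials:

```python
import numpy as np
from scipy.linalg import hankel

# Moments of Lebesgue measure on (-1, 1): m_j = \int_{-1}^{1} z^j dz,
# so m_j = 2/(j+1) for even j and 0 for odd j.
k = 3
m = np.array([(1 - (-1) ** (j + 1)) / (j + 1) for j in range(2 * k - 1)])
Hk = hankel(m[:k], m[k - 1:])                # H_k, entries m_{i+j-2}
x = np.linalg.solve(Hk, np.eye(k)[:, -1])    # last column of H_k^{-1}
# x is proportional to the degree-(k-1) orthogonal polynomial for this
# measure, here the Legendre polynomial P_2(z) = (3 z^2 - 1)/2.
assert np.allclose(x / x[-1] * 1.5, [-0.5, 0.0, 1.5])
```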

Since orthogonal polynomials on the real axis generate Sturm sequences, it follows that the use of quadratic forms associated with Hankel matrices provides a means for solving real root counting problems and real root localization problems; see [a18] and [a2]. Moreover, certain properties of sequences of Hankel determinants give the theoretical bases on which both Koenig's method and the Rutishauser $qd$ algorithm, for the approximation of zeros and poles of meromorphic functions, rely; see [a26].
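A classical instance of this quadratic-form approach is Hermite's root-counting rule: the signature of the Hankel matrix built from the Newton power sums of the roots of a real polynomial equals its number of distinct real roots. A small numerical sketch (the power sums are computed directly from the roots purely for illustration; in practice they follow from the coefficients via Newton's identities):

```python
import numpy as np
from scipy.linalg import hankel

roots = np.array([1.0, 2.0, 1j, -1j])            # roots of (z-1)(z-2)(z^2+1)
p = np.array([np.sum(roots ** k).real for k in range(7)])   # p_0, ..., p_6
H = hankel(p[:4], p[3:])                         # 4x4, entries p_{i+j-2}
eig = np.linalg.eigvalsh(H)                      # H is real symmetric
signature = int(np.sum(eig > 1e-9) - np.sum(eig < -1e-9))
assert signature == 2                            # two distinct real roots: 1 and 2
```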

The problem of inverting a finite non-singular Hankel matrix $H _ { n }$ has been extensively studied in the literature on numerical methods, and the connections shown earlier between the theory of Hankel matrices and many other fields have been exploited to derive a variety of Hankel system solvers.

As mentioned above (Kronecker's theorem), if the Hankel operator $H$ has a rational symbol $r ( z ) = p ( z ) / q ( z )$ with $p ( z )$ and $q ( z )$ mutually prime and $q ( z )$ of degree $n$, then $H _ { n }$ is invertible. Conversely, if $H _ { n }$ is an invertible finite Hankel matrix of order $n$ determined by its entries $s _ { i +j-1 }$, $1 \leq i , j \leq n$, then the rational function $r ( z ) = \sum _ { i = 1 } ^ { 2 n - 1 } s _ { i } z ^ { - i }$ can be uniquely extended to a power series $g ( z ) = r ( z ) + \sum _ { i = 1 } ^ { \infty } s _ { 2 n - 1 + i } z ^ { - ( 2 n - 1 + i ) }$ in such a way that $g ( z )$ is the expansion at infinity of $p ( z ) / q ( z )$, where $p ( z )$ and $q ( z )$ are mutually prime with $q ( z )$ monic of degree $n$. In this case the inverse of $H _ { n }$ is given by $H _ { n } ^ { - 1 } = B ( q , t )$, where $t ( z )$ is a polynomial of degree less than $n$ that satisfies the Bezout equation $t ( z ) p ( z ) + q ( z ) v ( z ) = 1$, and $B ( q , t ) = ( b _ { i , j} )$ denotes the Bezout matrix, whose entries are defined by

\begin{equation*} \frac { q ( z ) t ( w ) - q ( w ) t ( z ) } { z - w } = \sum _ { i , j = 1 } ^ { n } b _ { i , j } z ^ { i - 1 } w ^ { j - 1 }. \end{equation*}
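A small numerical sketch of this inversion formula (0-based indexing below; the example data are chosen for illustration): with $p(z) = 1$ and $q(z) = z^2 - 3z + 2$, the expansion at infinity gives $s_k = 2^{k-1} - 1$, and $t(z) = 1$ solves the Bezout equation with $v = 0$:

```python
import numpy as np

def bezout_matrix(q, t):
    """Bezout matrix B(q, t) of polynomials given by ascending coefficient
    lists with deg t < deg q = n, defined via
    (q(z) t(w) - q(w) t(z)) / (z - w) = sum_{i,j} b_{ij} z^i w^j  (0-based)."""
    n = len(q) - 1
    q = np.asarray(q, float)
    t = np.pad(np.asarray(t, float), (0, n + 1 - len(t)))
    C = np.outer(q, t) - np.outer(t, q)   # coefficients of the numerator
    B = np.zeros((n, n))
    for b in range(n):
        for a in range(1, n + 1):
            # matching coefficients in (z - w) * B(z, w) = numerator gives
            # C[a, b] = B[a-1, b] - B[a, b-1]
            prev = B[a, b - 1] if (b > 0 and a < n) else 0.0
            B[a - 1, b] = C[a, b] + prev
    return B

s = [2 ** k - 1 for k in range(4)]              # s_1, ..., s_4 = 0, 1, 3, 7
H2 = np.array([[s[0], s[1]], [s[1], s[2]]])     # H_2 = [[0, 1], [1, 3]]
B = bezout_matrix([2.0, -3.0, 1.0], [1.0])      # q = z^2 - 3z + 2, t = 1
assert np.allclose(B @ H2, np.eye(2))           # H_2^{-1} = B(q, t)
```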

Since the Bezout equation can be solved by means of the Euclidean algorithm, this brings up the possibility of a recursive approach to the inversion of Hankel matrices. Analogously, in a matrix setting this recursive structure is fully exploited by the observation that Hankel matrices have low displacement rank in a sense first defined in [a15]. Based on these properties, many fast algorithms for solving Hankel linear systems with cost $O ( n ^ { 2 } )$ have been developed. Superfast $O (n \operatorname{log} ^ { 2 } n )$ algorithms have also appeared; see [a25], [a16], [a4], [a24], [a5] and [a17] for extensive bibliographies on these methods. Generalizations to the case where $H _ { n }$ has entries over an integral domain are discussed in [a23], where the subresultant theory [a6] is described in terms of factorization properties of Hankel matrices. This becomes significant for the efficient solution of many problems in real algebraic geometry involving polynomials and rational functions [a21].
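One simple route to the $O(n^2)$ cost in practice exploits the fact that reversing the rows of a Hankel matrix yields a Toeplitz matrix, so Levinson-type Toeplitz solvers apply. A sketch using `scipy.linalg.solve_toeplitz` (random data purely for illustration):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)
n = 6
s = rng.standard_normal(2 * n - 1)   # entries s_1, ..., s_{2n-1}
b = rng.standard_normal(n)
# T = J H (rows of H reversed) is Toeplitz with first column s[n-1::-1]
# and first row s[n-1:]; H x = b becomes T x = J b.
x = solve_toeplitz((s[n - 1::-1], s[n - 1:]), b[::-1])
# Check against a dense construction of H
H = np.array([[s[i + j] for j in range(n)] for i in range(n)])
assert np.allclose(H @ x, b)
```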

How to Cite This Entry:
Hankel matrix. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Hankel_matrix&oldid=49954
This article was adapted from an original article by Luca Gemignani (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article