Hankel matrix

From Encyclopedia of Mathematics
A [[Matrix|matrix]] whose entries are constant along each anti-diagonal, that is, along each parallel to the main anti-diagonal. Equivalently, $H = ( h _ { i  , j} )$ is a Hankel matrix if and only if there exists a sequence $s _ { 1 } , s_{ 2} , \ldots$, such that $h_{ i , j } = s _ { i  + j - 1 }$, $i , j = 1,2 , \ldots$. If $s _ { k }$ are square matrices, then $H$ is referred to as a block Hankel matrix. Infinite Hankel matrices are associated with the representation of Hankel operators acting on the [[Hilbert space|Hilbert space]] of square summable complex sequences.
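Concretely, an $n \times n$ Hankel matrix is determined by $2 n - 1$ numbers. A minimal sketch in Python (the helper `hankel` is ours, not a library routine; with 0-based indices, $h _ { i , j } = s _ { i + j - 1 }$ becomes `H[i][j] = s[i + j]`):

```python
def hankel(s, n):
    """Build the n-by-n Hankel matrix H with H[i][j] = s[i + j]
    from a sequence s of at least 2*n - 1 entries (0-based here,
    matching h_{i,j} = s_{i+j-1} in 1-based notation)."""
    assert len(s) >= 2 * n - 1
    return [[s[i + j] for j in range(n)] for i in range(n)]

H = hankel([1, 2, 3, 4, 5], 3)
# Every anti-diagonal i + j = const is constant:
assert all(H[i][j] == H[i + 1][j - 1]
           for i in range(2) for j in range(1, 3))
```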
  
Hankel matrices are frequently encountered in applications where the close interplay between polynomial and matrix computations is exploited in order to devise very effective numerical solution algorithms. Famous special cases are the Hilbert matrix and Hilbert–Hankel operator, defined by $s _ { i  + j - 1} = ( i + j - 1 ) ^ { - 1 }$, which play an important role in the study of spectral properties of integral operators of Carleman type (cf. also [[Carleman operator|Carleman operator]]). For theoretical properties of Hankel matrices, see [[#References|[a13]]]. For an overview of Hankel operators, see [[#References|[a20]]]. For references to the varied and numerous relations among Hankel matrices and polynomial computations, see [[#References|[a24]]], [[#References|[a9]]], [[#References|[a4]]].
  
To a Hankel operator $H = ( s _ { i  + j - 1} )$ one naturally associates the function $r ( z ) = \sum _ { k = 1 } ^ { \infty } s _ { k } z ^ { - k }$, which is called its symbol. Kronecker's theorem [[#References|[a9]]] says that $H$ has finite rank $n$ if and only if its symbol is a rational function, that is, $r ( z ) = p ( z ) / q ( z )$, where $p ( z )$ and $q ( z )$ are mutually prime polynomials. In this case $n$ is the number of poles of $r ( z )$ and, therefore, the degree of $q ( z )$. In addition, assuming that $H _ { k }$ is the $k \times k$ leading principal submatrix of $H$, i.e., the submatrix made up by the entries in the first $k$ rows and columns of $H$, then $H _ { n }$ is non-singular, whereas $H _ { j }$ is singular for any $j > n$.
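Kronecker's theorem can be checked on a small example. The following sketch (the `rank` helper is ours, using exact rational arithmetic) takes the symbol $r ( z ) = 1 / ( z - 2 ) = \sum _ { k \geq 1 } 2 ^ { k - 1 } z ^ { - k }$, which has a single pole, so every finite section of the Hankel operator has rank $1$:

```python
from fractions import Fraction

def rank(M):
    """Row-reduce over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Symbol r(z) = 1/(z - 2): one pole, so by Kronecker's theorem
# every finite section H_n has rank 1.
s = [2 ** k for k in range(10)]          # s_k = 2^{k-1}, 0-based here
for n in range(1, 5):
    H = [[s[i + j] for j in range(n)] for i in range(n)]
    assert rank(H) == 1
```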
  
Given a Hankel operator $H$ with symbol $r ( z )$, the problem of finding two polynomials $p ( z )$ and $q ( z )$ of degree at most $n - 1$ and $n$, respectively, such that
  
\begin{equation} \tag{a1} \frac { r ( z ^ { - 1 } ) } { z } - \frac { p ( z ) } { q ( z ) } = w _ { 0 } z ^ { 2 n } + w _ { 1 } z ^ { 2 n + 1 } +\dots , \end{equation}
  
is a particular instance of the [[Padé approximation|Padé approximation]] problem. If $H _ { n }$ is non-singular, these polynomials are uniquely determined up to a suitable normalization, say $q ( 0 ) = 1$, and their computation essentially amounts to solving a linear system with coefficient matrix $H _ { n }$. See [[#References|[a1]]], [[#References|[a3]]] and [[#References|[a10]]] for a survey of both the theory and applications of general Padé approximation problems. In [[#References|[a19]]] this theory is first generalized and then applied to the inversion of (block) Hankel matrices.
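For illustration (a sketch, not a production solver): with $s _ { k } = 2 ^ { k - 1 } + 3 ^ { k - 1 }$ the symbol has the two poles $2$ and $3$, so $n = 2$ and $q ( z ) = ( z - 2 ) ( z - 3 ) = z ^ { 2 } - 5 z + 6$. The coefficients $a _ { 0 } , a _ { 1 }$ of monic $q ( z ) = z ^ { 2 } + a _ { 1 } z + a _ { 0 }$ satisfy the Hankel system $s _ { k + 2 } + a _ { 1 } s _ { k + 1 } + a _ { 0 } s _ { k } = 0$, $k = 1 , 2$, which can be solved exactly:

```python
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination over the rationals (square, non-singular A)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(b[i])]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c])
        M[c], M[piv] = M[piv], M[c]
        for i in range(n):
            if i != c and M[i][c]:
                f = M[i][c] / M[c][c]
                M[i] = [a - f * bb for a, bb in zip(M[i], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# s_k = 2^{k-1} + 3^{k-1}: poles 2 and 3, so the denominator is
# q(z) = (z-2)(z-3) = z^2 - 5z + 6.
s = [None] + [2 ** (k - 1) + 3 ** (k - 1) for k in range(1, 6)]
A = [[s[1], s[2]], [s[2], s[3]]]         # the Hankel matrix H_2
b = [-s[3], -s[4]]
a0, a1 = solve(A, b)
assert (a0, a1) == (6, -5)               # q(z) = z^2 - 5z + 6
```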
  
Other variants of (a1) can also be considered, generally leading to different computational problems. From a system-theoretic point of view, the possibility of recovering a rational function $p ( z ) / q ( z )$, where $q ( z )$ is monic, from its Maclaurin expansion at infinity has been extensively studied as the partial realization problem of system theory (see, for instance, [[#References|[a11]]]). It is intimately connected to such topics as the [[Berlekamp–Massey algorithm|Berlekamp–Massey algorithm]] in the context of coding theory and Kalman filtering. For applications of the theory of Hankel matrices to engineering problems of system and control theory, see [[#References|[a14]]] and [[#References|[a7]]].
  
The connection between Hankel matrices and [[Orthogonal polynomials|orthogonal polynomials]] arises in the solution of moment problems (cf. also [[Moment problem|Moment problem]]). Given a positive [[Borel measure|Borel measure]] $ \eta $ on $( - 1,1 )$, the Hankel operator $H = ( s _ { i  + j - 1} )$ defined by $s _ { i + j - 1 } = \int _ { - 1 } ^ { 1 } z ^ { i + j - 2 } d \eta ( z )$, $i , j = 1,2 , \ldots$, is positive definite and, moreover, the last columns of $H _ { k } ^{- 1}$, $k = 1,2 , \dots$, generate a sequence of orthogonal polynomials linked by a three-term recurrence relation. The converse is known as the Hamburger moment problem (cf. also [[Moment problem|Moment problem]]). The underlying theory is very rich and can be suitably extended to both finite Hankel matrices, by considering discrete measures, and to indefinite Hankel matrices, by means of formal orthogonal polynomials. A survey of results on Hankel matrices generated by positive measures can be found in [[#References|[a22]]]. See [[#References|[a8]]] and [[#References|[a12]]] for an introduction to the theory of formal orthogonal polynomials in the context of the algorithms of numerical analysis, including Lanczos' tridiagonalization process, rational interpolation schemes, the [[Euclidean algorithm|Euclidean algorithm]], and inverse spectral methods for Jacobi matrices.
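As a concrete check (a sketch in exact arithmetic): for the Lebesgue measure on $( - 1,1 )$ the moments are $\int _ { - 1 } ^ { 1 } z ^ { t } d z = 2 / ( t + 1 )$ for even $t$ and $0$ for odd $t$, and the leading principal minors of the resulting Hankel matrix are indeed all positive:

```python
from fractions import Fraction

def det(M):
    """Determinant by fraction-exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return d

# Moments of Lebesgue measure on (-1, 1): m_t = 2/(t+1) for even t,
# 0 for odd t, and s_{i+j-1} = m_{i+j-2} (0-based: H[i][j] = m[i+j]).
m = [Fraction(2, t + 1) if t % 2 == 0 else Fraction(0) for t in range(10)]
for k in range(1, 5):
    Hk = [[m[i + j] for j in range(k)] for i in range(k)]
    assert det(Hk) > 0        # every H_k is positive definite
```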
  
Since orthogonal polynomials on the real axis generate [[Sturm sequence]]s, it follows that the use of quadratic forms associated with Hankel matrices provides a means for solving real root counting problems and real root localization problems; see [[#References|[a18]]] and [[#References|[a2]]]. Moreover, certain properties of sequences of Hankel determinants give the theoretical bases on which both Koenig's method and the Rutishauser $qd$ algorithm, for the approximation of zeros and poles of meromorphic functions, rely; see [[#References|[a26]]].
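One classical instance is Hermite's criterion: the Hankel matrix built from the power sums of the roots of a real polynomial has signature equal to the number of distinct real roots. A sketch for $p ( z ) = z ^ { 3 } - z$, with roots $- 1 , 0 , 1$:

```python
from fractions import Fraction

def det(M):
    """Determinant by fraction-exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return d

# Power sums of the roots -1, 0, 1 of p(z) = z^3 - z:
# p_0 = 3, p_t = 2 for even t > 0, p_t = 0 for odd t.
p = [3 if t == 0 else (2 if t % 2 == 0 else 0) for t in range(5)]
H = [[p[i + j] for j in range(3)] for i in range(3)]
# All leading principal minors are positive, so H is positive
# definite: signature 3, i.e. three distinct real roots.
minors = [det([row[:k] for row in H[:k]]) for k in range(1, 4)]
assert minors == [3, 6, 4]
```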
  
The problem of inverting a finite non-singular Hankel matrix $H _ { n }$ has been extensively studied in the literature on numerical methods, and the connections shown earlier between the theory of Hankel matrices and many other fields have been exploited in order to derive many different Hankel system solvers.
  
As mentioned above (Kronecker's theorem), if the Hankel operator $H$ has a rational symbol $r ( z ) = p ( z ) / q ( z )$ with $p ( z )$ and $q ( z )$ mutually prime and $q ( z )$ of degree $n$, then $H _ { n }$ is invertible. On the other hand, if $H _ { n }$ is an invertible finite Hankel matrix of order $n$ determined by its entries $s _ { i +j-1 } $, $1 \leq i , j \leq n$, then the rational function $r ( z ) = \sum _ { i = 1 } ^ { 2 n - 1 } s _ { i } z ^ { - i }$ can be uniquely extended to a power series $g ( z ) = r ( z ) + \sum _ { i = 1 } ^ { \infty } s _ { 2 n - 1 + i } z ^ { - ( 2 n - 1 + i ) }$ in such a way that $g ( z )$ is the expansion at infinity of $p ( z ) / q ( z )$, where $p ( z )$ and $q ( z )$ are mutually prime with $q ( z )$ monic of degree $n$. In this case the inverse of $H _ { n }$ is given by $H _ { n } ^ { - 1 } = B ( q , t )$, where $t ( z )$ is a polynomial of degree less than $n$ that satisfies the Bezout equation $t ( z ) p ( z ) + q ( z ) v ( z ) = 1$, and $B ( q , t ) = ( b _ { i , j} )$ denotes the Bezout matrix, whose entries are defined by
\begin{equation*} \frac { q ( z ) t ( w ) - q ( w ) t ( z ) } { z - w } = \sum _ { i , j = 1 } ^ { n } b _ { i , j } z ^ { i - 1 } w ^ { j - 1 }. \end{equation*}
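A small sketch of this construction (the helper `bezout_matrix` is ours; exact integer arithmetic): take $s = ( 0 , 1 , 0 )$, so $H _ { 2 }$ has rows $( 0 , 1 )$ and $( 1 , 0 )$, and the symbol is $r ( z ) = z ^ { - 2 } = p ( z ) / q ( z )$ with $p = 1$ and $q ( z ) = z ^ { 2 }$. The Bezout equation is solved by $t = 1$, $v = 0$, and indeed $B ( q , t ) = H _ { 2 } ^ { - 1 }$ (here $H _ { 2 }$ happens to be its own inverse).

```python
def bezout_matrix(q, t):
    """Entries b[i][j] (0-based) of the Bezout matrix B(q, t):
       (q(z)t(w) - q(w)t(z)) / (z - w) = sum_{i,j} b[i][j] z^i w^j.
    q, t are coefficient lists in ascending degree order."""
    n = max(len(q), len(t))
    q = q + [0] * (n - len(q))
    t = t + [0] * (n - len(t))
    # N_a(w) = coefficient of z^a in q(z)t(w) - q(w)t(z)
    N = [[q[a] * t[b] - t[a] * q[b] for b in range(n)] for a in range(n)]
    # Synthetic division by (z - w): Q_{a-1}(w) = N_a(w) + w * Q_a(w),
    # working down from the top z-degree; row a-1 of B is Q_{a-1}.
    B, cur = [[0] * (n - 1) for _ in range(n - 1)], [0] * n
    for a in range(n - 1, 0, -1):
        cur = [N[a][b] + ([0] + cur)[b] for b in range(n)]
        B[a - 1] = cur[: n - 1]
    return B

# q(z) = z^2 (coefficients [0, 0, 1]), t(z) = 1 (coefficients [1]):
B = bezout_matrix([0, 0, 1], [1])
assert B == [[0, 1], [1, 0]]       # equals H_2^{-1} for s = (0, 1, 0)
```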
Since the Bezout equation can be solved by means of the Euclidean algorithm, this brings up the possibility of a recursive approach to the inversion of Hankel matrices. Analogously, in a matrix setting this recursive structure is fully exploited by the observation that Hankel matrices have low displacement rank in a sense first defined in [[#References|[a15]]]. Based on these properties, many fast algorithms for solving Hankel linear systems with cost $O ( n ^ { 2 } )$ have been developed. Superfast $O (n \operatorname{log} ^ { 2 } n )$ algorithms have also appeared; see [[#References|[a25]]], [[#References|[a16]]], [[#References|[a4]]], [[#References|[a24]]], [[#References|[a5]]] and [[#References|[a17]]] for extensive bibliographies on these methods. Generalizations to the case where $H _ { n }$ has entries over an integral domain are discussed in [[#References|[a23]]], where the subresultant theory [[#References|[a6]]] is described in terms of factorization properties of Hankel matrices. This becomes significant for the efficient solution of many problems in real [[Algebraic geometry|algebraic geometry]] involving polynomials and rational functions [[#References|[a21]]].
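The displacement structure can be made concrete. With $Z$ the down-shift matrix (ones on the subdiagonal), the displacement $Z ^ { T } H - H Z$ of a Hankel matrix vanishes outside its last row and column, hence has rank at most $2$ (displacement operators vary by author; this is one common choice). A sketch:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 4
s = [k * k + 1 for k in range(2 * n - 1)]        # any sequence will do
H = [[s[i + j] for j in range(n)] for i in range(n)]
Z = [[1 if i == j + 1 else 0 for j in range(n)] for i in range(n)]
ZT = [[Z[j][i] for j in range(n)] for i in range(n)]
D = [[a - b for a, b in zip(r1, r2)]
     for r1, r2 in zip(matmul(ZT, H), matmul(H, Z))]
# The displacement is supported on the last row and column only,
# so its rank is at most 2:
assert all(D[i][j] == 0 for i in range(n - 1) for j in range(n - 1))
```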
  
 
====References====
 
<table><tr><td valign="top">[a1]</td> <td valign="top"> G.A. Baker, P.R. Graves-Morris, "Padé approximants" , Addison-Wesley (1981) {{MR|0661068}} {{MR|0635620}} {{MR|0635619}} {{ZBL|0603.30045}} {{ZBL|0603.30044}} {{ZBL|0468.30033}} {{ZBL|0468.30032}} </td></tr><tr><td valign="top">[a2]</td> <td valign="top"> S. Barnett, "Polynomials and linear control systems" , M. Dekker (1983) {{MR|0704016}} {{ZBL|0528.93003}} </td></tr><tr><td valign="top">[a3]</td> <td valign="top"> C. Brezinski, "Padé-type approximation and general orthogonal polynomials" , Birkhäuser (1980) {{MR|0561106}} {{ZBL|0418.41012}} </td></tr><tr><td valign="top">[a4]</td> <td valign="top"> A. Bultheel, M. van Barel, "Linear algebra: Rational approximation and orthogonal polynomials" , ''Studies in Computational Math.'' , North-Holland (1997) {{MR|1602017}} {{ZBL|0890.65024}} </td></tr><tr><td valign="top">[a5]</td> <td valign="top"> S. Cabay, R. Meleshko, "A weakly stable algorithm for Padé approximation and the inversion of Hankel matrices" ''SIAM J. Matrix Anal. Appl.'' , '''14''' (1993) pp. 735–765</td></tr><tr><td valign="top">[a6]</td> <td valign="top"> G.E. Collins, "Sub-resultants and reduced polynomial remainder sequences" ''J. Assoc. Comput. Mach.'' , '''14''' (1967) pp. 128–142</td></tr><tr><td valign="top">[a7]</td> <td valign="top"> "Linear algebra in signals, systems and control" B.N. Datta (ed.) C.R. Johnson (ed.) M.A. Kaashoek (ed.) R. Plemmons (ed.) E.D. Sontag (ed.) , SIAM (1988) {{MR|0969778}} {{ZBL|0661.00019}} </td></tr><tr><td valign="top">[a8]</td> <td valign="top"> A. Draux, "Polynômes orthogonaux: Formels-applications" , ''Lecture Notes in Mathematics'' , '''974''' , Springer (1983) {{MR|690769}} {{ZBL|0504.42001}} </td></tr><tr><td valign="top">[a9]</td> <td valign="top"> P.A. Fuhrmann, "A polynomial approach to linear algebra" , Springer (1996) {{MR|1393938}} {{ZBL|0852.15001}} </td></tr><tr><td valign="top">[a10]</td> <td valign="top"> W.B. 
Gragg., "The Padé table and its relation to certain algorithms of numerical analysis" ''SIAM Review'' , '''14''' (1972) pp. 1–61 {{MR|0305563}} {{ZBL|0238.30008}} </td></tr><tr><td valign="top">[a11]</td> <td valign="top"> W.B. Gragg, A. Lindquist, "On the partial realization problem." ''Linear Alg. &amp; Its Appl.'' , '''50''' (1983) pp. 277–319 {{MR|0699565}} {{ZBL|0519.93024}} </td></tr><tr><td valign="top">[a12]</td> <td valign="top"> M.H. Gutknecht, "A completed theory of unsymmetric Lanczos process and related algorithms" ''SIAM J. Matrix Anal. Appl.'' , '''13''' (1992) pp. 594–639 {{MR|1152770}} {{ZBL|0760.65039}} </td></tr><tr><td valign="top">[a13]</td> <td valign="top"> I.S. Iohvidov., "Hankel and Toeplitz matrices and forms" , Birkhäuser (1982) {{MR|0677503}} {{ZBL|0493.15018}} </td></tr><tr><td valign="top">[a14]</td> <td valign="top"> T. Kailath, "Linear systems" , Prentice-Hall (1980) {{MR|0569473}} {{MR|0584089}} {{ZBL|0454.93001}} </td></tr><tr><td valign="top">[a15]</td> <td valign="top"> T. Kailath, S.Y. Kung, M. Morf, "Displacement ranks of matrices and linear equations" ''J. Math. Anal. Appl.'' , '''68''' (1979) pp. 395–407 {{MR|0533501}} {{ZBL|0433.15001}} </td></tr><tr><td valign="top">[a16]</td> <td valign="top"> "Fast reliable algorithms for matrices with structure" T. Kailath (ed.) A.H. Sayed (ed.) , SIAM (1999) {{MR|1715813}} {{ZBL|0931.65018}} </td></tr><tr><td valign="top">[a17]</td> <td valign="top"> M. van Barel, P. Kravanja, "A stabilized superfast solver for indefinite Hankel systems" ''Linear Alg. &amp; Its Appl.'' , '''284''' (1998) pp. 335–355 {{MR|1655143}} {{ZBL|0938.65059}} </td></tr><tr><td valign="top">[a18]</td> <td valign="top"> M.A. Naimark, M.G. Krein., "The method of symmetric and Hermitian forms in the theory of the separation of the roots of algebraic equations" ''Linear and Multilinear Algebra'' , '''10''' (1981) pp. 
265–308 {{MR|0638124}} {{ZBL|0584.12018}} </td></tr><tr><td valign="top">[a19]</td> <td valign="top"> G. Labahn, D.K. Choi, S. Cabay, "The inverses of block Hankel and block Toeplitz matrices" ''SIAM J. Comput.'' , '''19''' (1990) pp. 98–123 {{MR|1035200}} {{ZBL|0716.15003}} </td></tr><tr><td valign="top">[a20]</td> <td valign="top"> S.C. Power, "Hankel operators in Hilbert spaces" , ''Research Notes in Math.'' , Pitman (1982) {{MR|666699}} {{ZBL|}} </td></tr><tr><td valign="top">[a21]</td> <td valign="top"> T. Lickteig, M.F. Royal, "Cauchy index computation" ''Calcolo'' , '''33''' (1996) pp. 337–352 {{MR|1627623}} {{ZBL|0904.65024}} </td></tr><tr><td valign="top">[a22]</td> <td valign="top"> H. Widom, "Hankel matrices" ''Trans. Amer. Math. Soc.'' , '''127''' (1966) pp. 179–203 {{MR|0189237}} {{MR|0187099}} {{ZBL|0148.12303}} </td></tr><tr><td valign="top">[a23]</td> <td valign="top"> D.A. Bini, L. Gemignani, "Fast fraction free triangularization of Bezoutians with applications to sub-resultant chain computation" ''Linear Alg. &amp; Its Appl.'' , '''284''' (1998) pp. 19–39 {{MR|1655129}} {{ZBL|0937.65031}} </td></tr><tr><td valign="top">[a24]</td> <td valign="top"> D.A. Bini, V. Pan, "Matrix and polynomial computations 1: Fundamental algorithms" , Birkhäuser (1994)</td></tr><tr><td valign="top">[a25]</td> <td valign="top"> G. Heinig, K. Rost, "Algebraic methods for Toeplitz-like matrices and operators" , Akad. Berlin &amp; Birkhäuser (1984) {{MR|0782105}} {{MR|0756628}} {{ZBL|0549.15013}} </td></tr><tr><td valign="top">[a26]</td> <td valign="top"> A.S. Householder, "The numerical treatment of a single nonlinear equation" , McGraw-Hill (1970) {{MR|0388759}} {{ZBL|0242.65047}} </td></tr></table>

Latest revision as of 16:45, 1 July 2020

A matrix whose entries are constant along each anti-diagonal, that is, along every line parallel to the main anti-diagonal. Equivalently, $H = ( h _ { i , j} )$ is a Hankel matrix if and only if there exists a sequence $s _ { 1 } , s_{ 2} , \ldots$ such that $h_{ i , j } = s _ { i + j - 1 }$, $i , j = 1,2 , \ldots$. If the $s _ { k }$ are square matrices, then $H$ is referred to as a block Hankel matrix. Infinite Hankel matrices are associated with the representation of Hankel operators acting on the Hilbert space of square-summable complex sequences.
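The defining property $h_{i,j} = s_{i+j-1}$ can be sketched as follows (a minimal illustration; the helper name `hankel` is not part of the article):

```python
# Build the n x n Hankel matrix with h_{ij} = s_{i+j-1} (1-based indices,
# as in the definition above) from the first 2n - 1 terms of a sequence.

def hankel(s, n):
    """n x n Hankel matrix generated by s (s[k] holds s_{k+1})."""
    if len(s) < 2 * n - 1:
        raise ValueError("need at least 2n - 1 terms of the sequence")
    return [[s[i + j] for j in range(n)] for i in range(n)]

H = hankel([1, 2, 3, 4, 5], 3)
# Every anti-diagonal is constant:
# [[1, 2, 3],
#  [2, 3, 4],
#  [3, 4, 5]]
```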

Hankel matrices are frequently encountered in applications where the close interplay between polynomial and matrix computations is exploited in order to devise very effective numerical solution algorithms. Famous special cases are the Hilbert matrix and Hilbert–Hankel operator, defined by $s _ { i + j - 1} = ( i + j - 1 ) ^ { - 1 }$, which play an important role in the study of spectral properties of integral operators of Carleman type (cf. also Carleman operator). For theoretical properties of Hankel matrices, see [a13]. For an overview of Hankel operators, see [a20]. For references to the varied and numerous relations among Hankel matrices and polynomial computations, see [a24], [a9], [a4].

To a Hankel operator $H = ( s _ { i + j - 1} )$ one naturally associates the function $r ( z ) = \sum _ { k = 1 } ^ { \infty } s _ { k } z ^ { - k }$, which is called its symbol. Kronecker's theorem [a9] says that $H$ has finite rank $n$ if and only if its symbol is a rational function, that is, $r ( z ) = p ( z ) / q ( z )$, where $p ( z )$ and $q ( z )$ are mutually prime polynomials. In this case $n$ is the number of poles of $r ( z )$ and, therefore, the degree of $q ( z )$. In addition, assuming that $H _ { k }$ is the $k \times k$ leading principal submatrix of $H$, i.e., the submatrix made up by the entries in the first $k$ rows and columns of $H$, then $H _ { n }$ is non-singular, whereas $H _ { j }$ is singular for any $j > n$.
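Kronecker's theorem can be illustrated with a hypothetical example: the sequence $s_k = 2^{-k}$ has the rational symbol $r(z) = 1/(2z-1)$ with a single pole, so the associated Hankel operator has rank $n = 1$; accordingly $H_1$ is non-singular while $H_2$ is already singular. A sketch in exact rational arithmetic:

```python
from fractions import Fraction

# s_k = 2^{-k}: geometric sequence, rational symbol r(z) = 1/(2z - 1)
s = [Fraction(1, 2 ** k) for k in range(1, 6)]   # s_1, ..., s_5

H2 = [[s[0], s[1]], [s[1], s[2]]]                # 2 x 2 leading submatrix
det_H2 = H2[0][0] * H2[1][1] - H2[0][1] * H2[1][0]

assert s[0] != 0          # H_1 = (s_1) is non-singular
assert det_H2 == 0        # H_2 is singular: the rank is 1
```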

Given a Hankel operator $H$ with symbol $r ( z )$, the problem of finding two polynomials $p ( z )$ and $q ( z )$ of degree at most $n - 1$ and $n$, respectively, such that

\begin{equation} \tag{a1} \frac { r ( z ^ { - 1 } ) } { z } - \frac { p ( z ) } { q ( z ) } = w _ { 0 } z ^ { 2 n } + w _ { 1 } z ^ { 2 n + 1 } +\dots , \end{equation}

is a particular instance of the Padé approximation problem. If $H _ { n }$ is non-singular, these polynomials are uniquely determined up to a suitable normalization, say $q ( 0 ) = 1$, and their computation essentially amounts to solving a linear system with coefficient matrix $H _ { n }$. See [a1], [a3] and [a10] for a survey of both the theory and applications of general Padé approximation problems. In [a19] this theory is first generalized and then applied to the inversion of (block) Hankel matrices.
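The reduction to a linear system with coefficient matrix $H_n$ can be sketched on a hypothetical example: the Fibonacci sequence $s = 1, 1, 2, 3, 5, \ldots$ satisfies $s_{k+2} = s_{k+1} + s_k$, so its symbol is rational with monic denominator $q(z) = z^2 - z - 1$; writing $q(z) = z^2 + q_1 z + q_0$, the recurrence yields the Hankel system $\left( \begin{smallmatrix} s_1 & s_2 \\ s_2 & s_3 \end{smallmatrix} \right) \left( \begin{smallmatrix} q_0 \\ q_1 \end{smallmatrix} \right) = - \left( \begin{smallmatrix} s_3 \\ s_4 \end{smallmatrix} \right)$, solved here by Cramer's rule:

```python
from fractions import Fraction

s = [Fraction(v) for v in (1, 1, 2, 3, 5)]       # s_1, ..., s_5

a, b, c, d = s[0], s[1], s[1], s[2]              # entries of H_2
r1, r2 = -s[2], -s[3]                            # right-hand side
det = a * d - b * c                              # det H_2 = 1: H_2 is invertible
q0 = (r1 * d - b * r2) / det                     # Cramer's rule
q1 = (a * r2 - r1 * c) / det

assert (q0, q1) == (-1, -1)                      # q(z) = z^2 - z - 1
```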

Other variants of (a1) can also be considered, generally leading to different computational problems. From a system-theoretic point of view, the possibility of recovering a rational function $p ( z ) / q ( z )$, where $q ( z )$ is monic, by its Maclaurin expansion at infinity has been extensively studied as the partial realization problem of system theory (see, for instance, [a11]). It is intimately connected to such topics as the Berlekamp–Massey algorithm in the context of coding theory and Kalman filtering. For applications of the theory of Hankel matrices to engineering problems of system and control theory, see [a14] and [a7].

The connection between Hankel matrices and orthogonal polynomials arises in the solution of moment problems (cf. also Moment problem). Given a positive Borel measure $ \eta $ on $( - 1,1 )$, the Hankel operator $H = ( s _ { i + j - 1} )$ defined by $s _ { i + j - 1 } = \int _ { - 1 } ^ { 1 } z ^ { i + j - 2 } d \eta ( z )$, $i , j = 1,2 , \ldots$, is positive definite and, moreover, the last columns of $H _ { k } ^{- 1}$, $k = 1,2 , \dots$, generate a sequence of orthogonal polynomials linked by a three-term recurrence relation. The converse is known as the Hamburger moment problem (cf. also Moment problem). The underlying theory is very rich and can be suitably extended both to finite Hankel matrices, by considering discrete measures, and to indefinite Hankel matrices, by means of formal orthogonal polynomials. A survey of results on Hankel matrices generated by positive measures can be found in [a22]. See [a8] and [a12] for an introduction to the theory of formal orthogonal polynomials in the context of the algorithms of numerical analysis, including Lanczos' tridiagonalization process, rational interpolation schemes, the Euclidean algorithm, and inverse spectral methods for Jacobi matrices.
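For a concrete instance of the moment construction (a sketch with hypothetical helper names), take $\eta$ to be the Lebesgue measure on $(-1,1)$: then $s_k = \int_{-1}^{1} z^{k-1} \, dz$ equals $2/k$ for odd $k$ and $0$ for even $k$, and positive definiteness of the Hankel matrix can be checked via its leading principal minors in exact arithmetic:

```python
from fractions import Fraction

def moment(k):
    # s_k = integral of z^{k-1} over (-1, 1)
    return Fraction(2, k) if k % 2 == 1 else Fraction(0)

n = 3
H = [[moment(i + j + 1) for j in range(n)] for i in range(n)]  # h_{ij} = s_{i+j-1}

def leading_minor(M, k):
    """Determinant of the k x k leading principal submatrix
    (Gaussian elimination over the rationals, with row pivoting)."""
    A = [row[:k] for row in M[:k]]
    det = Fraction(1)
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col] != 0)
        if piv != col:
            A[col], A[piv] = A[piv], A[col]
            det = -det
        det *= A[col][col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return det

# all leading principal minors are positive: H is positive definite
assert all(leading_minor(H, k) > 0 for k in range(1, n + 1))
```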

Since orthogonal polynomials on the real axis generate Sturm sequences, it follows that the use of quadratic forms associated with Hankel matrices provides a means for solving real root counting problems and real root localization problems; see [a18] and [a2]. Moreover, certain properties of sequences of Hankel determinants give the theoretical bases on which both Koenig's method and the Rutishauser $qd$ algorithm, for the approximation of zeros and poles of meromorphic functions, rely; see [a26].
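The root-counting idea can be sketched as follows (the classical Hermite method; helper names are illustrative): the Hankel matrix of the power sums $p_0, \ldots, p_{2d-2}$ of the roots of a real degree-$d$ polynomial is a quadratic form whose signature equals the number of distinct real roots.

```python
from fractions import Fraction

def power_sums(a, m):
    """Power sums p_0..p_m of the roots of x^d + a[d-1] x^{d-1} + ... + a[0],
    computed from the coefficients via Newton's identities."""
    d = len(a)
    p = [Fraction(d)]
    for k in range(1, m + 1):
        t = Fraction(0)
        for i in range(1, min(k - 1, d) + 1):
            t += Fraction(a[d - i]) * p[k - i]
        if k <= d:
            t += k * Fraction(a[d - k])
        p.append(-t)
    return p

def signature(M):
    """Signature via the pivots of symmetric Gaussian elimination
    (assumes all pivots are non-zero, which holds below)."""
    A = [row[:] for row in M]
    sig = 0
    for c in range(len(A)):
        piv = A[c][c]
        sig += 1 if piv > 0 else -1
        for r in range(c + 1, len(A)):
            f = A[r][c] / piv
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return sig

d = 2
for coeffs, real_roots in (([-1, 0], 2), ([1, 0], 0)):  # x^2 - 1 and x^2 + 1
    p = power_sums(coeffs, 2 * d - 2)
    H = [[p[i + j] for j in range(d)] for i in range(d)]
    assert signature(H) == real_roots
```

Here $x^2 - 1$ has two distinct real roots (signature 2) while $x^2 + 1$ has none (signature 0).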

The problem of inverting a finite non-singular Hankel matrix $H _ { n }$ has been extensively studied in the literature on numerical methods, and the connections shown above between the theory of Hankel matrices and many other fields have been exploited to derive a variety of Hankel system solvers.

As mentioned above (Kronecker's theorem), if the Hankel operator $H$ has a rational symbol $r ( z ) = p ( z ) / q ( z )$ with $p ( z )$ and $q ( z )$ mutually prime and $q ( z )$ of degree $n$, then $H _ { n }$ is invertible. On the other hand, if $H _ { n }$ is an invertible finite Hankel matrix of order $n$ determined by its entries $s _ { i +j-1 } $, $1 \leq i , j \leq n$, then the rational function $r ( z ) = \sum _ { i = 1 } ^ { 2 n - 1 } s _ { i } z ^ { - i }$ can be uniquely extended to a power series $g ( z ) = r ( z ) + \sum _ { i = 2 n } ^ { \infty } s _ { i } z ^ { - i }$ in such a way that $g ( z )$ is the expansion at infinity of $p ( z ) / q ( z )$, where $p ( z )$ and $q ( z )$ are mutually prime with $q ( z )$ monic of degree $n$. In this case the inverse of $H _ { n }$ is given by $H _ { n } ^ { - 1 } = B ( q , t )$, where $t ( z )$ is a polynomial of degree less than $n$ that satisfies the Bezout equation $t ( z ) p ( z ) + q ( z ) v ( z ) = 1$, and $B ( q , t ) = ( b _ { i , j} )$ denotes the Bezout matrix, whose entries are defined by

\begin{equation*} \frac { q ( z ) t ( w ) - q ( w ) t ( z ) } { z - w } = \sum _ { i , j = 1 } ^ { n } b _ { i , j } z ^ { i - 1 } w ^ { j - 1 }. \end{equation*}

Since the Bezout equation can be solved by means of the Euclidean algorithm, this brings up the possibility of a recursive approach to the inversion of Hankel matrices. Analogously, in a matrix setting this recursive structure is fully exploited by the observation that Hankel matrices have low displacement rank in a sense first defined in [a15]. Based on these properties, many fast algorithms for solving Hankel linear systems with cost $O ( n ^ { 2 } )$ have been developed. Superfast $O (n \operatorname{log} ^ { 2 } n )$ algorithms have also appeared; see [a25], [a16], [a4], [a24], [a5] and [a17] for extensive bibliographies on these methods. Generalizations to the case where $H _ { n }$ has entries over an integral domain are discussed in [a23], where the subresultant theory [a6] is described in terms of factorization properties of Hankel matrices. This becomes significant for the efficient solution of many problems in real algebraic geometry involving polynomials and rational functions [a21].
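The inversion formula $H_n^{-1} = B(q, t)$ can be verified on a small hypothetical example (helper names are illustrative): for the Fibonacci sequence $s = 1, 1, 2, 3, \ldots$ the symbol is $r(z) = z/(z^2 - z - 1)$, so $p(z) = z$, $q(z) = z^2 - z - 1$, and $t(z) = z - 1$ solves the Bezout equation with $v(z) = -1$.

```python
from fractions import Fraction

# Polynomials are coefficient lists, lowest degree first.

def poly_mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def bezout_matrix(q, t, n):
    """B(q, t): expand N(z, w) = q(z)t(w) - q(w)t(z), then divide by z - w
    using the coefficient recurrence N[k+1][l] = M[k][l] - M[k+1][l-1]."""
    m = n + 1
    P = [[Fraction(0)] * m for _ in range(m)]
    for k, x in enumerate(q):
        for l, y in enumerate(t):
            P[k][l] += Fraction(x) * y
    N = [[P[k][l] - P[l][k] for l in range(m)] for k in range(m)]
    M = [[Fraction(0)] * m for _ in range(m)]
    for k in range(n - 1, -1, -1):
        for l in range(n):
            M[k][l] = N[k + 1][l] + (M[k + 1][l - 1] if l > 0 else 0)
    return [row[:n] for row in M[:n]]

q = [Fraction(-1), Fraction(-1), Fraction(1)]    # z^2 - z - 1
p = [Fraction(0), Fraction(1)]                   # z
t = [Fraction(-1), Fraction(1)]                  # z - 1
v = [Fraction(-1)]                               # -1

# check the Bezout equation t(z) p(z) + q(z) v(z) = 1
lhs = [x + y for x, y in zip(poly_mul(t, p), poly_mul(q, v))]
assert lhs == [1, 0, 0]

B = bezout_matrix(q, t, 2)
H2 = [[Fraction(1), Fraction(1)], [Fraction(1), Fraction(2)]]
prod = [[sum(H2[i][k] * B[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert B == [[2, -1], [-1, 1]] and prod == [[1, 0], [0, 1]]
```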

References

[a1] G.A. Baker, P.R. Graves-Morris, "Padé approximants" , Addison-Wesley (1981) MR0661068 MR0635620 MR0635619 Zbl 0603.30045 Zbl 0603.30044 Zbl 0468.30033 Zbl 0468.30032
[a2] S. Barnett, "Polynomials and linear control systems" , M. Dekker (1983) MR0704016 Zbl 0528.93003
[a3] C. Brezinski, "Padé-type approximation and general orthogonal polynomials" , Birkhäuser (1980) MR0561106 Zbl 0418.41012
[a4] A. Bultheel, M. van Barel, "Linear algebra: Rational approximation and orthogonal polynomials" , Studies in Computational Math. , North-Holland (1997) MR1602017 Zbl 0890.65024
[a5] S. Cabay, R. Meleshko, "A weakly stable algorithm for Padé approximation and the inversion of Hankel matrices" SIAM J. Matrix Anal. Appl. , 14 (1993) pp. 735–765
[a6] G.E. Collins, "Sub-resultants and reduced polynomial remainder sequences" J. Assoc. Comput. Mach. , 14 (1967) pp. 128–142
[a7] "Linear algebra in signals, systems and control" B.N. Datta (ed.) C.R. Johnson (ed.) M.A. Kaashoek (ed.) R. Plemmons (ed.) E.D. Sontag (ed.) , SIAM (1988) MR0969778 Zbl 0661.00019
[a8] A. Draux, "Polynômes orthogonaux: Formels-applications" , Lecture Notes in Mathematics , 974 , Springer (1983) MR690769 Zbl 0504.42001
[a9] P.A. Fuhrmann, "A polynomial approach to linear algebra" , Springer (1996) MR1393938 Zbl 0852.15001
[a10] W.B. Gragg, "The Padé table and its relation to certain algorithms of numerical analysis" SIAM Review , 14 (1972) pp. 1–61 MR0305563 Zbl 0238.30008
[a11] W.B. Gragg, A. Lindquist, "On the partial realization problem" Linear Alg. & Its Appl. , 50 (1983) pp. 277–319 MR0699565 Zbl 0519.93024
[a12] M.H. Gutknecht, "A completed theory of the unsymmetric Lanczos process and related algorithms" SIAM J. Matrix Anal. Appl. , 13 (1992) pp. 594–639 MR1152770 Zbl 0760.65039
[a13] I.S. Iohvidov, "Hankel and Toeplitz matrices and forms" , Birkhäuser (1982) MR0677503 Zbl 0493.15018
[a14] T. Kailath, "Linear systems" , Prentice-Hall (1980) MR0569473 MR0584089 Zbl 0454.93001
[a15] T. Kailath, S.Y. Kung, M. Morf, "Displacement ranks of matrices and linear equations" J. Math. Anal. Appl. , 68 (1979) pp. 395–407 MR0533501 Zbl 0433.15001
[a16] "Fast reliable algorithms for matrices with structure" T. Kailath (ed.) A.H. Sayed (ed.) , SIAM (1999) MR1715813 Zbl 0931.65018
[a17] M. van Barel, P. Kravanja, "A stabilized superfast solver for indefinite Hankel systems" Linear Alg. & Its Appl. , 284 (1998) pp. 335–355 MR1655143 Zbl 0938.65059
[a18] M.A. Naimark, M.G. Krein, "The method of symmetric and Hermitian forms in the theory of the separation of the roots of algebraic equations" Linear and Multilinear Algebra , 10 (1981) pp. 265–308 MR0638124 Zbl 0584.12018
[a19] G. Labahn, D.K. Choi, S. Cabay, "The inverses of block Hankel and block Toeplitz matrices" SIAM J. Comput. , 19 (1990) pp. 98–123 MR1035200 Zbl 0716.15003
[a20] S.C. Power, "Hankel operators in Hilbert spaces" , Research Notes in Math. , Pitman (1982) MR666699
[a21] T. Lickteig, M.-F. Roy, "Cauchy index computation" Calcolo , 33 (1996) pp. 337–352 MR1627623 Zbl 0904.65024
[a22] H. Widom, "Hankel matrices" Trans. Amer. Math. Soc. , 127 (1966) pp. 179–203 MR0189237 MR0187099 Zbl 0148.12303
[a23] D.A. Bini, L. Gemignani, "Fast fraction free triangularization of Bezoutians with applications to sub-resultant chain computation" Linear Alg. & Its Appl. , 284 (1998) pp. 19–39 MR1655129 Zbl 0937.65031
[a24] D.A. Bini, V. Pan, "Matrix and polynomial computations 1: Fundamental algorithms" , Birkhäuser (1994)
[a25] G. Heinig, K. Rost, "Algebraic methods for Toeplitz-like matrices and operators" , Akad. Berlin & Birkhäuser (1984) MR0782105 MR0756628 Zbl 0549.15013
[a26] A.S. Householder, "The numerical treatment of a single nonlinear equation" , McGraw-Hill (1970) MR0388759 Zbl 0242.65047
How to Cite This Entry:
Hankel matrix. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Hankel_matrix&oldid=18576
This article was adapted from an original article by Luca Gemignani (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article