Hankel matrix
A matrix whose entries are constant along each parallel to the main anti-diagonal. Equivalently, $H = (h_{i,j})$ is a Hankel matrix if and only if there exists a sequence $s_1, s_2, \ldots$ such that $h_{i,j} = s_{i+j-1}$, $i, j = 1, 2, \ldots$. If the $s_k$ are square matrices, then $H$ is referred to as a block Hankel matrix. Infinite Hankel matrices are associated with the representation of Hankel operators acting on the Hilbert space $\ell^2$ of square-summable complex sequences.
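To make the definition concrete, here is a minimal sketch (using NumPy and SciPy, with an arbitrarily chosen sequence) that builds a finite Hankel matrix from a sequence $s_1, s_2, \ldots$ and checks that its anti-diagonals are constant.

```python
import numpy as np
from scipy.linalg import hankel

s = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])   # s_1, ..., s_7 (arbitrary example)
n = 4
# first column s_1..s_n, last row s_n..s_{2n-1}
H = hankel(s[:n], s[n - 1:2*n - 1])
# constant anti-diagonals: H[i, j] = s[i + j] in 0-based indexing, i.e. h_{i,j} = s_{i+j-1}
assert all(H[i, j] == s[i + j] for i in range(n) for j in range(n))
print(H)
```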
Hankel matrices are frequently encountered in applications where the close interplay between polynomial and matrix computations is exploited in order to devise very effective numerical solution algorithms. Famous special cases are the Hilbert matrix and the Hilbert–Hankel operator, generated by $s_k = 1/k$, i.e. $h_{i,j} = 1/(i+j-1)$, which play an important role in the study of spectral properties of integral operators of Carleman type (cf. also Carleman operator). For theoretical properties of Hankel matrices, see [a13]. For an overview of Hankel operators, see [a20]. For references to the varied and numerous relations among Hankel matrices and polynomial computations, see [a24], [a9], [a4].
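As a quick numerical illustration (not part of the original text), the Hilbert matrix is recovered as the Hankel matrix generated by $s_k = 1/k$:

```python
import numpy as np
from scipy.linalg import hankel, hilbert

n = 5
s = 1.0 / np.arange(1, 2*n)          # s_k = 1/k, k = 1, ..., 2n-1
H = hankel(s[:n], s[n - 1:])         # Hankel matrix with h_{i,j} = 1/(i+j-1)
print(np.allclose(H, hilbert(n)))    # True: this is exactly the n x n Hilbert matrix
```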
To a Hankel operator $H = (s_{i+j-1})_{i,j=1}^{\infty}$ one naturally associates the function $s(z) = \sum_{i=1}^{\infty} s_i z^{-i}$, which is called its symbol. Kronecker's theorem [a9] says that $H$ has finite rank $n$ if and only if its symbol is a rational function, that is, $s(z) = p(z)/q(z)$, where $p(z)$ and $q(z)$ are mutually prime polynomials. In this case $n$ is the number of poles of $s(z)$ and, therefore, the degree of $q(z)$. In addition, if $H_k$ denotes the $k \times k$ leading principal submatrix of $H$, i.e. the submatrix made up of the entries in the first $k$ rows and columns of $H$, then $H_n$ is non-singular, whereas $H_k$ is singular for every $k > n$.
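The finite-rank statement can be observed numerically. The sketch below uses the illustrative symbol $s(z) = 1/(z^2 - z - 1)$ (an arbitrary choice, not from the article): its coefficients obey a Fibonacci-type recurrence, and the ranks of the leading sections $H_k$ stagnate at $n = \deg q = 2$.

```python
import numpy as np
from scipy.linalg import hankel

# Coefficients at infinity of s(z) = 1/(z^2 - z - 1): they satisfy the
# recurrence s_i = s_{i-1} + s_{i-2} induced by the denominator q(z).
N = 12
s = np.zeros(N)
s[1] = 1.0                                    # s_1 = 0, s_2 = 1, s_3 = 1, s_4 = 2, ...
for i in range(2, N):
    s[i] = s[i - 1] + s[i - 2]

for k in range(1, 7):
    Hk = hankel(s[:k], s[k - 1:2*k - 1])      # k x k section (s_{i+j-1})
    print(k, np.linalg.matrix_rank(Hk))       # the rank stagnates at n = deg q = 2
```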
Given a Hankel operator $H$ with symbol $s(z)$, the problem of finding two polynomials $\hat{p}(z)$ and $\hat{q}(z)$ of degree at most $n-1$ and $n$, respectively, such that

\begin{equation}\tag{a1} s(z) - \frac{\hat{p}(z)}{\hat{q}(z)} = O(z^{-2n-1}), \qquad z \to \infty, \end{equation}

is a particular instance of the Padé approximation problem. If $H_n$ is non-singular, these polynomials are uniquely determined up to a suitable normalization, say $\hat{q}(z)$ monic, and their computation essentially amounts to solving a linear system with coefficient matrix $H_n$. See [a1], [a3] and [a10] for a survey of both the theory and applications of general Padé approximation problems. In [a19] this theory is first generalized and then applied to the inversion of (block) Hankel matrices.
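A minimal SymPy sketch of this computation, for the illustrative non-rational symbol $s(z) = e^{1/z} - 1$ (chosen here only as an example): the coefficients of the monic denominator are obtained by solving a linear system with coefficient matrix $H_n$, the numerator is the polynomial part of $s(z)\hat{q}(z)$, and the order of approximation in (a1) is then verified by a series expansion in $w = 1/z$.

```python
import sympy as sp

z, w = sp.symbols('z w')
n = 3

# Illustrative symbol: s(z) = exp(1/z) - 1 = sum_{i>=1} z^{-i}/i!.
s = [sp.Rational(1, sp.factorial(i)) for i in range(1, 2*n + 1)]        # s_1, ..., s_{2n}

# Hankel system H_n c = -(s_{n+1}, ..., s_{2n}) for the coefficients of the
# monic denominator q(z) = z^n + c_{n-1} z^{n-1} + ... + c_0.
H = sp.Matrix(n, n, lambda i, j: s[i + j])
c = H.LUsolve(-sp.Matrix(s[n:2*n]))
qfull = list(c) + [sp.Integer(1)]                                       # c_0, ..., c_{n-1}, 1
qhat = sum(qfull[j] * z**j for j in range(n + 1))

# Numerator = polynomial part of s(z) q(z), of degree at most n - 1.
phat = sum(sum(qfull[m + i] * s[i - 1] for i in range(1, n - m + 1)) * z**m
           for m in range(n))

# Check (a1): the error is O(z^{-2n-1}), i.e. O(w^{2n+1}) in the variable w = 1/z.
err = sp.series(sp.exp(w) - 1 - (phat/qhat).subs(z, 1/w), w, 0, 2*n + 1)
print(err)                                                              # O(w**7)
```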
Other variants of (a1) can also be considered, generally leading to different computational problems. From a system-theoretic point of view, the possibility of recovering a rational function $s(z) = p(z)/q(z)$, with $q(z)$ monic, from its Maclaurin expansion at infinity has been extensively studied as the partial realization problem of system theory (see, for instance, [a11]). It is intimately connected to such topics as the Berlekamp–Massey algorithm in the context of coding theory and Kalman filtering. For applications of the theory of Hankel matrices to engineering problems of system and control theory, see [a14] and [a7].
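For illustration, here is a hedged sketch of the classical Berlekamp–Massey algorithm over the rationals (a standard textbook formulation, not code from the cited references); it returns the shortest linear recurrence satisfied by a finite sequence, which is the kernel computation behind minimal partial realizations.

```python
from fractions import Fraction

def berlekamp_massey(seq):
    """Shortest linear recurrence for seq over the rationals.

    Returns (L, C) with C = [1, c_1, ..., c_L] such that
    seq[n] + c_1*seq[n-1] + ... + c_L*seq[n-L] = 0 for L <= n < len(seq).
    """
    s = [Fraction(v) for v in seq]
    C, B = [Fraction(1)], [Fraction(1)]        # current / previous connection polynomials
    L, m, b = 0, 1, Fraction(1)
    for n in range(len(s)):
        # discrepancy between the prediction of the current recurrence and s[n]
        d = s[n] + sum(C[i] * s[n - i] for i in range(1, L + 1))
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d / b
        C = C + [Fraction(0)] * max(0, len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] -= coef * Bi                # C(x) <- C(x) - (d/b) x^m B(x)
        if 2 * L <= n:                           # the recurrence length must grow
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L, C

# Fibonacci-type data: minimal recurrence s_n = s_{n-1} + s_{n-2}, i.e. C = [1, -1, -1].
print(berlekamp_massey([1, 1, 2, 3, 5, 8, 13, 21]))
```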
The connection between Hankel matrices and orthogonal polynomials arises in the solution of moment problems (cf. also Moment problem). Given a positive Borel measure $\mu$ on $\mathbf{R}$, the Hankel operator $H = (s_{i+j-1})$ defined by $s_i = \int_{\mathbf{R}} x^{i-1}\, d\mu(x)$, $i = 1, 2, \ldots$, is positive definite and, moreover, the last columns of $H_k^{-1}$, $k = 1, 2, \ldots$, generate a sequence of orthogonal polynomials linked by a three-term recurrence relation. The converse is known as the Hamburger moment problem (cf. also Moment problem). The underlying theory is very rich and can be suitably extended both to finite Hankel matrices, by considering discrete measures, and to indefinite Hankel matrices, by means of formal orthogonal polynomials. A survey of results on Hankel matrices generated by positive measures can be found in [a22]. See [a8] and [a12] for an introduction to the theory of formal orthogonal polynomials in the context of the algorithms of numerical analysis, including Lanczos' tridiagonalization process, rational interpolation schemes, the Euclidean algorithm, and inverse spectral methods for Jacobi matrices.
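The following sketch illustrates the moment connection for a discrete measure (unit masses at arbitrarily chosen nodes): the monic orthogonal polynomials are obtained by solving Hankel systems in the moments (equivalently, up to normalization, from the last columns of $H_k^{-1}$), and their orthogonality is then checked numerically.

```python
import numpy as np

# Moments of an illustrative discrete measure (unit masses at the nodes below).
nodes = np.array([-1.5, -0.5, 0.5, 1.5, 2.5])
K = 3                                                   # build p_1, ..., p_K (monic)
s = np.array([np.sum(nodes**m) for m in range(2*K)])    # s_{m+1} = int x^m d(mu)

def solve_monic(k):
    """Coefficients (increasing degree) of the monic orthogonal polynomial p_k."""
    Hk = np.array([[s[i + j] for j in range(k)] for i in range(k)])   # H_k = (s_{i+j-1})
    c = np.linalg.solve(Hk, -s[k:2*k])          # p_k(x) = x^k + c_{k-1} x^{k-1} + ... + c_0
    return np.append(c, 1.0)

polys = [np.array([1.0])] + [solve_monic(k) for k in range(1, K + 1)]

def inner(a, b):                                # <f, g> = int f g d(mu), here a sum over nodes
    return np.sum(np.polyval(a[::-1], nodes) * np.polyval(b[::-1], nodes))

print([round(inner(polys[j], polys[k]), 10)
       for j in range(K + 1) for k in range(j + 1, K + 1)])            # all (numerically) zero
```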
Since orthogonal polynomials on the real axis generate Sturm sequences, the use of quadratic forms associated with Hankel matrices provides a means for solving real root counting and real root localization problems; see [a18] and [a2]. Moreover, certain properties of sequences of Hankel determinants provide the theoretical basis on which both Koenig's method and the Rutishauser $qd$ algorithm, for the approximation of zeros and poles of meromorphic functions, rely; see [a26].
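As an illustration of root counting via Hankel quadratic forms, the sketch below applies Hermite's classical criterion: the signature of the Hankel matrix of Newton power sums of the roots of a real polynomial equals its number of distinct real roots (the polynomial here is an arbitrary example).

```python
import numpy as np

# Illustrative polynomial: f(x) = (x - 1)(x + 2)(x^2 + 1), with 2 distinct real roots.
coeffs = np.polynomial.polynomial.polyfromroots([1, -2, 1j, -1j]).real  # low-to-high, monic
d = len(coeffs) - 1

# Companion matrix of f; P_k = trace(C^k) is the k-th Newton power sum of the roots.
C = np.zeros((d, d))
C[1:, :-1] = np.eye(d - 1)
C[:, -1] = -coeffs[:-1]
P = [float(np.trace(np.linalg.matrix_power(C, k))) for k in range(2*d - 1)]

# Hankel matrix of power sums and its signature (counted from the eigenvalues).
S = np.array([[P[i + j] for j in range(d)] for i in range(d)])
eig = np.linalg.eigvalsh(S)
signature = int(np.sum(eig > 1e-9) - np.sum(eig < -1e-9))
print("signature =", signature)       # 2 = number of distinct real roots
```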
The problem of inverting a finite non-singular Hankel matrix $H_n$ has been extensively studied in the literature on numerical methods, and the connections outlined above between the theory of Hankel matrices and many other fields have been exploited to derive a variety of Hankel system solvers.
As mentioned above (Kronecker's theorem), if the Hankel operator $H$ has a rational symbol $s(z) = p(z)/q(z)$ with $p(z)$ and $q(z)$ mutually prime and $q(z)$ of degree $n$, then $H_n$ is invertible. On the other hand, if $H_n$ is an invertible finite Hankel matrix of order $n$ determined by its entries $s_1, \ldots, s_{2n-1}$, then this finite sequence can uniquely be extended to a power series $s(z) = \sum_{i=1}^{\infty} s_i z^{-i}$ in such a way that $s(z)$ is the expansion at infinity of a rational function $p(z)/q(z)$, where $p(z)$ and $q(z)$ are mutually prime with $q(z)$ monic of degree $n$. In this case the inverse of $H_n$ is given by $H_n^{-1} = B(q, v)$, where $v(z)$ is a polynomial of degree less than $n$ that satisfies the Bezout equation $v(z) p(z) + w(z) q(z) = 1$ for some polynomial $w(z)$, and $B(q, v)$ denotes the Bezout matrix of $q$ and $v$, whose entries $b_{i,j}$ are defined by

\begin{equation*} \frac{q(x) v(y) - q(y) v(x)}{x - y} = \sum_{i,j=1}^{n} b_{i,j} x^{i-1} y^{j-1}. \end{equation*}
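The inversion formula can be checked symbolically for a small example. In the sketch below the polynomials $p$ and $q$ are arbitrary illustrative choices; SymPy's extended Euclidean algorithm produces $v$, and the Bezout matrix $B(q, v)$ is compared with $H_n^{-1}$.

```python
import sympy as sp

z, x, y, w = sp.symbols('z x y w')

# Illustrative choice: q monic of degree n, p coprime to q with deg p < n.
q = z**2 - 3*z + 2
p = z
n = int(sp.degree(q, z))

# Entries s_1, ..., s_{2n-1} of the expansion of s(z) = p(z)/q(z) at infinity.
expansion = sp.series((p/q).subs(z, 1/w), w, 0, 2*n).removeO()
s = [expansion.coeff(w, i) for i in range(1, 2*n)]

# The n x n Hankel matrix H_n = (s_{i+j-1}).
H = sp.Matrix(n, n, lambda i, j: s[i + j])

# Extended Euclidean algorithm: v*p + t*q = 1, with deg v < n.
v, t, g = sp.gcdex(p, q, z)
assert g == 1

# Bezout matrix B(q, v): (q(x)v(y) - q(y)v(x))/(x - y) = sum_{i,j} b_ij x^(i-1) y^(j-1).
bez = sp.expand(sp.cancel((q.subs(z, x)*v.subs(z, y) - q.subs(z, y)*v.subs(z, x)) / (x - y)))
B = sp.Matrix(n, n, lambda i, j: bez.coeff(x, i).coeff(y, j))

print(sp.simplify(H.inv() - B))   # zero matrix, i.e. H_n^{-1} = B(q, v)
```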
Since the Bezout equation can be solved by means of the Euclidean algorithm, this brings up the possibility of a recursive approach to the inversion of Hankel matrices. Analogously, in a matrix setting this recursive structure is fully exploited by the observation that Hankel matrices have low displacement rank, in a sense first defined in [a15]. Based on these properties, many fast algorithms for solving Hankel linear systems with cost $O(n^2)$ have been developed. Superfast algorithms (typically of cost $O(n \log^2 n)$) have also appeared; see [a25], [a16], [a4], [a24], [a5] and [a17] for extensive bibliographies on these methods. Generalizations to the case where $H_n$ has entries over an integral domain are discussed in [a23], where the subresultant theory [a6] is described in terms of factorization properties of Hankel matrices. This becomes significant for the efficient solution of many problems in real algebraic geometry involving polynomials and rational functions [a21].
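As a small numerical illustration of low displacement rank (with a randomly generated Hankel matrix and one common choice of displacement operator, not necessarily the one used in [a15]): the Sylvester-type displacement $ZH - HZ^{T}$, with $Z$ the down-shift matrix, is supported on the first row and column of the array, so its rank is at most two.

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(0)
n = 8
s = rng.standard_normal(2*n - 1)
H = hankel(s[:n], s[n - 1:])                  # random n x n Hankel matrix
Z = np.diag(np.ones(n - 1), -1)               # down-shift: Z[i, i-1] = 1

# (Z H)[i, j] = s_{i+j-1} for i >= 2 and (H Z^T)[i, j] = s_{i+j-1} for j >= 2,
# so the displacement is non-zero only in the first row and first column.
D = Z @ H - H @ Z.T
print(np.linalg.matrix_rank(D))               # at most 2
```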
References
[a1] | G.A. Baker, P.R. Graves-Morris, "Padé approximants" , Addison-Wesley (1981) MR0661068 MR0635620 MR0635619 Zbl 0603.30045 Zbl 0603.30044 Zbl 0468.30033 Zbl 0468.30032 |
[a2] | S. Barnett, "Polynomials and linear control systems" , M. Dekker (1983) MR0704016 Zbl 0528.93003 |
[a3] | C. Brezinski, "Padé-type approximation and general orthogonal polynomials" , Birkhäuser (1980) MR0561106 Zbl 0418.41012 |
[a4] | A. Bultheel, M. van Barel, "Linear algebra: Rational approximation and orthogonal polynomials" , Studies in Computational Math. , North-Holland (1997) MR1602017 Zbl 0890.65024 |
[a5] | S. Cabay, R. Meleshko, "A weakly stable algorithm for Padé approximation and the inversion of Hankel matrices" SIAM J. Matrix Anal. Appl. , 14 (1993) pp. 735–765 |
[a6] | G.E. Collins, "Sub-resultants and reduced polynomial remainder sequences" J. Assoc. Comput. Mach. , 14 (1967) pp. 128–142 |
[a7] | "Linear algebra in signals, systems and control" B.N. Datta (ed.) C.R. Johnson (ed.) M.A. Kaashoek (ed.) R. Plemmons (ed.) E.D. Sontag (ed.) , SIAM (1988) MR0969778 Zbl 0661.00019 |
[a8] | A. Draux, "Polynômes orthogonaux formels: applications" , Lecture Notes in Mathematics , 974 , Springer (1983) MR690769 Zbl 0504.42001 |
[a9] | P.A. Fuhrmann, "A polynomial approach to linear algebra" , Springer (1996) MR1393938 Zbl 0852.15001 |
[a10] | W.B. Gragg, "The Padé table and its relation to certain algorithms of numerical analysis" SIAM Review , 14 (1972) pp. 1–61 MR0305563 Zbl 0238.30008 |
[a11] | W.B. Gragg, A. Lindquist, "On the partial realization problem" Linear Alg. & Its Appl. , 50 (1983) pp. 277–319 MR0699565 Zbl 0519.93024 |
[a12] | M.H. Gutknecht, "A completed theory of the unsymmetric Lanczos process and related algorithms" SIAM J. Matrix Anal. Appl. , 13 (1992) pp. 594–639 MR1152770 Zbl 0760.65039 |
[a13] | I.S. Iohvidov, "Hankel and Toeplitz matrices and forms" , Birkhäuser (1982) MR0677503 Zbl 0493.15018 |
[a14] | T. Kailath, "Linear systems" , Prentice-Hall (1980) MR0569473 MR0584089 Zbl 0454.93001 |
[a15] | T. Kailath, S.Y. Kung, M. Morf, "Displacement ranks of matrices and linear equations" J. Math. Anal. Appl. , 68 (1979) pp. 395–407 MR0533501 Zbl 0433.15001 |
[a16] | "Fast reliable algorithms for matrices with structure" T. Kailath (ed.) A.H. Sayed (ed.) , SIAM (1999) MR1715813 Zbl 0931.65018 |
[a17] | M. van Barel, P. Kravanja, "A stabilized superfast solver for indefinite Hankel systems" Linear Alg. & Its Appl. , 284 (1998) pp. 335–355 MR1655143 Zbl 0938.65059 |
[a18] | M.A. Naimark, M.G. Krein, "The method of symmetric and Hermitian forms in the theory of the separation of the roots of algebraic equations" Linear and Multilinear Algebra , 10 (1981) pp. 265–308 MR0638124 Zbl 0584.12018 |
[a19] | G. Labahn, D.K. Choi, S. Cabay, "The inverses of block Hankel and block Toeplitz matrices" SIAM J. Comput. , 19 (1990) pp. 98–123 MR1035200 Zbl 0716.15003 |
[a20] | S.C. Power, "Hankel operators on Hilbert space" , Research Notes in Math. , Pitman (1982) MR666699 |
[a21] | T. Lickteig, M.-F. Roy, "Cauchy index computation" Calcolo , 33 (1996) pp. 337–352 MR1627623 Zbl 0904.65024 |
[a22] | H. Widom, "Hankel matrices" Trans. Amer. Math. Soc. , 127 (1966) pp. 179–203 MR0189237 MR0187099 Zbl 0148.12303 |
[a23] | D.A. Bini, L. Gemignani, "Fast fraction free triangularization of Bezoutians with applications to sub-resultant chain computation" Linear Alg. & Its Appl. , 284 (1998) pp. 19–39 MR1655129 Zbl 0937.65031 |
[a24] | D.A. Bini, V. Pan, "Matrix and polynomial computations 1: Fundamental algorithms" , Birkhäuser (1994) |
[a25] | G. Heinig, K. Rost, "Algebraic methods for Toeplitz-like matrices and operators" , Akad. Berlin & Birkhäuser (1984) MR0782105 MR0756628 Zbl 0549.15013 |
[a26] | A.S. Householder, "The numerical treatment of a single nonlinear equation" , McGraw-Hill (1970) MR0388759 Zbl 0242.65047 |