Many problems in engineering and applied mathematics ultimately require the solution of linear systems of equations. For small-size problems, there is often not much else to do except to use one of the standard methods of solution, such as Gaussian elimination (cf. also Gauss method). However, in many applications $n$ can be very large and, moreover, the linear equations may have to be solved over and over again, with different problem/model parameters, until a satisfactory solution to the original physical problem is obtained. In such cases, the computational burden, i.e. the $O(n^3)$ flops required to solve an $n \times n$ linear system of equations, can become prohibitively large. This is one reason why one seeks, in various classes of applications, to identify special/characteristic matrix structures that can be exploited in order to reduce the computational burden.
The most obvious structures are those that involve explicit patterns among the matrix entries, such as Toeplitz, Hankel, Vandermonde, Cauchy, and Pick matrices. Several fast algorithms have been devised over the years to exploit these special structures. However, even more common than these explicit matrix structures are matrices in which the structure is implicit. For example, in certain least-squares problems one often encounters products of Toeplitz matrices; these products are generally not Toeplitz, but, on the other hand, they are not "unstructured". Similarly, in probabilistic calculations the matrix of interest is often not a Toeplitz matrix $T$, but rather its inverse $T^{-1}$, which is rarely Toeplitz itself, but of course is not unstructured: its inverse is Toeplitz. It is well known that $O(n^2)$ flops suffice to solve a linear system of equations with an $n \times n$ Toeplitz coefficient matrix; a natural question is whether $O(n^3)$ flops are needed to invert a non-Toeplitz coefficient matrix whose inverse is known to be Toeplitz. It is conceivable that $O(n^2)$ flops should suffice, and this is in fact true.
Such problems suggest the need for a quantitative way of defining and identifying structure in matrices. Over the years, starting with [a1], it was found that an elegant and useful way to do so is the concept of displacement structure. This concept has also been useful for a host of problems apparently far removed from the solution of linear equations, such as the study of constrained and unconstrained rational interpolation, maximum entropy extension, signal detection, digital filter design, non-linear Riccati differential equations, inverse scattering, certain Fredholm integral equations, etc. (see [a3], [a2] and the many references therein).
For motivation, consider an $n \times n$ Hermitian Toeplitz matrix $T = [c_{i-j}]_{i,j=0}^{n-1}$, $c_{-k} = c_k^*$. Since such matrices are completely specified by $n$ entries, rather than $n^2$, one would of course expect a reduction in computational effort for handling problems involving such matrices. However, exploiting the Toeplitz structure is apparently more difficult than it may at first seem. To see this, consider the simple case of a real symmetric Toeplitz matrix, partitioned as $T = \begin{bmatrix} c_0 & c^T \\ c & T_1 \end{bmatrix}$ with $c = [c_1 \ \cdots \ c_{n-1}]^T$, and apply the first step of the Gaussian elimination procedure to it, namely

$$T = \begin{bmatrix} 1 & 0 \\ c/c_0 & I \end{bmatrix} \begin{bmatrix} c_0 & 0 \\ 0 & \Delta \end{bmatrix} \begin{bmatrix} 1 & c^T/c_0 \\ 0 & I \end{bmatrix},$$

where the so-called Schur complement matrix $\Delta$ is seen to be

$$\Delta = T_1 - \frac{c\, c^T}{c_0}.$$
However, $\Delta$ is no longer Toeplitz, so the special structure is lost in the very first step of the procedure. The fact is that what is preserved is not the Toeplitz structure, but a deeper notion called "displacement structure".
There are several forms of displacement structure, the earliest of which is the following [a1]. Consider an $n \times n$ Hermitian matrix $R$ and the $n \times n$ lower-triangular shift matrix $Z$ with ones on the first subdiagonal and zeros elsewhere (i.e., a lower-triangular Jordan block with eigenvalue $0$). The displacement of $R$ with respect to $Z$, denoted by $\nabla R$, is defined as the difference

$$\nabla R = R - Z R Z^*.$$
The matrix $R$ is said to have displacement structure (or to be of low displacement rank) with respect to $Z$ if the rank of $\nabla R$ is considerably lower than (and independent of) $n$. For example, a Hermitian Toeplitz matrix has displacement rank $2$ with respect to $Z$.
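As a quick numerical check, the following Python/NumPy sketch (the matrix entries are arbitrary illustrative values) verifies that a symmetric Toeplitz matrix has displacement rank $2$, and that its Schur complement, while no longer Toeplitz, also retains a low displacement rank:

```python
import numpy as np

# Illustrative symmetric positive-definite Toeplitz matrix (arbitrary entries)
c = np.array([4.0, 1.0, 0.5, 0.25, 0.1, 0.05])
n = len(c)
T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

Z = np.eye(n, k=-1)                       # lower-triangular shift matrix
print(np.linalg.matrix_rank(T - Z @ T @ Z.T))   # displacement rank: 2

# First Gaussian-elimination step: Schur complement of the (0,0) entry
Delta = T[1:, 1:] - np.outer(T[1:, 0], T[0, 1:]) / T[0, 0]
Z1 = np.eye(n - 1, k=-1)
# Delta is not Toeplitz, but its displacement rank stays at most 2
print(np.linalg.matrix_rank(Delta - Z1 @ Delta @ Z1.T))
```

The same experiment with larger $n$ shows the displacement rank staying at $2$ while $T$ itself has full rank.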
More generally, let $r$ denote the rank of $\nabla R$. Then one can write $\nabla R$ as

$$R - Z R Z^* = G J G^*,$$

where $G$ is an $n \times r$ matrix and $J$ is a "signature" matrix of the form $J = I_p \oplus -I_q$ with $p + q = r$. This representation is highly non-unique, since $G$ can be replaced by $G \Theta$ for any $J$-unitary matrix $\Theta$, i.e. for any $\Theta$ such that $\Theta J \Theta^* = J$; this flexibility is actually very useful. The matrix $G$ is said to be a generator matrix for $R$ since, along with $J$, it completely identifies $R$. If one labels the columns of $G$ as

$$G = \begin{bmatrix} x_1 & \cdots & x_p & y_1 & \cdots & y_q \end{bmatrix}$$

and lets $L(x)$ denote the lower-triangular Toeplitz matrix whose first column is $x$, then it can be seen that the unique $R$ that solves the displacement equation $R - Z R Z^* = G J G^*$ is given by

$$R = \sum_{i=1}^{p} L(x_i) L^*(x_i) - \sum_{j=1}^{q} L(y_j) L^*(y_j).$$
Such displacement representations of $R$ as a combination of products of lower- and upper-triangular Toeplitz matrices allow, for example, bilinear forms such as $a^* R b$ to be rapidly evaluated via convolutions (and hence fast Fourier transforms).
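This representation is easy to verify numerically. The following Python/NumPy sketch (with randomly generated generator columns and $p = q = 1$) builds $R$ from lower-triangular Toeplitz factors and confirms that it satisfies the displacement equation:

```python
import numpy as np

def Ltoep(x):
    """Lower-triangular Toeplitz matrix L(x) with first column x."""
    n = len(x)
    return np.array([[x[i - j] if i >= j else 0.0 for j in range(n)]
                     for i in range(n)])

n = 5
rng = np.random.default_rng(0)
x = rng.standard_normal(n)   # generator column with signature +1
y = rng.standard_normal(n)   # generator column with signature -1

# R built from lower-/upper-triangular Toeplitz products
R = Ltoep(x) @ Ltoep(x).T - Ltoep(y) @ Ltoep(y).T

Z = np.eye(n, k=-1)
G = np.column_stack([x, y])
J = np.diag([1.0, -1.0])
# R is the unique solution of the displacement equation R - Z R Z^* = G J G^*
print(np.allclose(R - Z @ R @ Z.T, G @ J @ G.T))   # True
```

The identity behind the check is $L(x)L^*(x) - Z L(x)L^*(x) Z^* = x x^*$, which follows because lower-triangular Toeplitz matrices are polynomials in $Z$ and $Z Z^* = I - e_0 e_0^*$.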
As mentioned above, a general Hermitian Toeplitz matrix has displacement rank $r = 2$, with in fact $p = 1$ and $q = 1$. But there are interesting non-Toeplitz matrices with $r = 2$, for example the inverse of a Toeplitz matrix. In fact, this is a special case of the following fundamental result: if $R$ is invertible and satisfies $R - Z R Z^* = G J G^*$, then there exists a generator $\tilde G$ such that

$$R^{-1} - Z^* R^{-1} Z = \tilde G J \tilde G^*.$$

The fact that $Z$ and $Z^*$ are interchanged in the latter formula suggests that one can define a so-called "natural" inverse, $\hat R = \tilde I R^{-1} \tilde I$, where $\tilde I$ is the "reverse" identity matrix (with ones on the anti-diagonal). For then one sees (with $\hat G = \tilde I \tilde G$, and using $\tilde I Z^* \tilde I = Z$) that

$$\hat R - Z \hat R Z^* = \hat G J \hat G^*.$$
Therefore, $R$ and its natural inverse $\hat R$ have the same displacement rank and (when $R$ is Hermitian) the same displacement inertia (since $J$ is the same). (A real symmetric Toeplitz matrix $T$ satisfies $\tilde I T \tilde I = T$, so that $T^{-1}$ has a representation of the form $T^{-1} = L(x) L^*(x) - L(y) L^*(y)$, which after suitably identifying $\{x, y\}$ is a special so-called Gohberg–Semencul formula.) The proof of the above fundamental result is very simple (see, e.g., [a3]) and in fact holds with $Z$ replaced by any matrix $F$.
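A small numerical illustration of the fundamental result (Python/NumPy; the Toeplitz entries are arbitrary illustrative values): the inverse of a symmetric positive-definite Toeplitz matrix has displacement rank $2$ with respect to the interchanged operator, and persymmetry makes the natural inverse coincide with $T^{-1}$ in this case:

```python
import numpy as np

c = np.array([4.0, 1.0, 0.5, 0.25, 0.1, 0.05])   # arbitrary illustrative entries
n = len(c)
T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
Tinv = np.linalg.inv(T)

Z = np.eye(n, k=-1)
# Z and Z^* are interchanged in the displacement equation for the inverse:
print(np.linalg.matrix_rank(Tinv - Z.T @ Tinv @ Z))     # 2

# A real symmetric Toeplitz matrix is persymmetric, so here the natural
# inverse equals T^{-1} itself:
Irev = np.fliplr(np.eye(n))
print(np.allclose(Irev @ Tinv @ Irev, Tinv))            # True
```

Note that $T^{-1}$ itself is not Toeplitz, yet only $2n$ numbers (a generator) suffice to describe it.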
An interesting example is obtained by choosing $F = \operatorname{diag}(f_1, \dots, f_n)$, $|f_i| < 1$, when the solution of the displacement equation $R - F R F^* = G J G^*$ becomes a "Pick" matrix of the form

$$R = \left[ \frac{u_i u_j^* - v_i v_j^*}{1 - f_i f_j^*} \right]_{i,j=1}^{n},$$

where $u_i$ and $v_i$ are $1 \times p$ and $1 \times q$ row vectors (the rows of the generator $G$). Pick matrices arise in solving analytic interpolation problems and the displacement theory gives new and efficient computational algorithms for solving a variety of such problems (see, e.g., [a2] and Nevanlinna–Pick interpolation).
One can handle non-Hermitian structured matrices by using displacement operators of the form $\nabla R = R - F R A^*$. When $F$ is diagonal with distinct entries and $A = Z$, $R$ becomes a Vandermonde matrix. A closely related displacement operator, first introduced in [a4], has the form $\nabla R = F R - R A^*$. Choosing $F = \operatorname{diag}(f_1, \dots, f_n)$, $A = \operatorname{diag}(a_1, \dots, a_n)$ leads to Cauchy-like solutions of the form

$$R = \left[ \frac{u_i v_j^*}{f_i - a_j^*} \right]_{i,j=1}^{n}.$$

The name comes from the fact that when $u_i = 1$ and $v_j = 1$, $R$ is a Cauchy matrix. When $r = 2$ and the generator rows are chosen as $u_i = [\alpha_i \ \ 1]$ and $v_j = [1 \ \ -\beta_j^*]$, one gets the so-called Loewner matrix, with entries $(\alpha_i - \beta_j)/(f_i - a_j^*)$.
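The Cauchy-like solution can be checked directly. In this Python/NumPy sketch (with arbitrary real nodes $f_i$, $a_j$ and random generator rows), each entry satisfies $(f_i - a_j) R_{ij} = u_i v_j^*$, i.e. $F R - R A = G B^T$ in matrix form:

```python
import numpy as np

n, r = 5, 2
rng = np.random.default_rng(1)
f = np.linspace(0.1, 0.9, n)          # distinct nodes, disjoint from a
a = -np.linspace(0.2, 0.8, n)
G = rng.standard_normal((n, r))       # rows u_i
B = rng.standard_normal((n, r))       # rows v_j

# Cauchy-like solution of the Sylvester displacement equation F R - R A = G B^T
R = np.array([[G[i] @ B[j] / (f[i] - a[j]) for j in range(n)]
              for i in range(n)])

F, A = np.diag(f), np.diag(a)
print(np.allclose(F @ R - R @ A, G @ B.T))   # True
```

The check is entrywise: $(F R - R A)_{ij} = (f_i - a_j) R_{ij} = u_i v_j^*$.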
It is often convenient to use generating-function language, and to define

$$R(z, w) = \sum_{i,j=0}^{\infty} R_{ij}\, z^i w^{*j}.$$

(Finite structured matrices can be extended to be semi-infinite in many natural ways, e.g. by extending the generator matrix by adding additional rows.) In this language, one can introduce general displacement equations of the form

$$d(z, w)\, R(z, w) = G(z)\, J\, G^*(w).$$

Choosing $d(z, w) = 1 - z w^*$ corresponds to matrix displacement equations of the form (in an obvious notation) $R - Z R Z^* = G J G^*$, while $d(z, w) = z + w^*$ corresponds to $Z R + R Z^* = G J G^*$. There are many other useful choices, but to enable recursive matrix factorizations it is necessary to restrict $d(z, w)$ to the form

$$d(z, w) = a(z) a^*(w) - b(z) b^*(w).$$

Note, in particular, that for $d(z, w) = 1 - z w^*$ one gets $R(z, w) = G(z) J G^*(w)/(1 - z w^*)$, involving $1/(1 - z w^*)$, the Szegő kernel. Other choices lead to the various so-called de Branges and Bergman kernels. See [a5] and the references therein.
A central feature in many of the applications mentioned earlier is the ability to efficiently obtain the so-called triangular LDU-factorization of a matrix and of its inverse (cf. also Matrix factorization). Among purely matrix computations, one may mention that this enables fast determination of the so-called QR-decomposition and the factorization of composite matrices such as $T_1 T_2$ and $T_1 - T_2 T_3^{-1} T_4$, the $T_i$ being Toeplitz. The LDU-factorization of structured matrices is facilitated by the fact that Schur complements inherit displacement structure. For example, writing

$$\begin{bmatrix} R & S \\ S^* & W \end{bmatrix} - \begin{bmatrix} F & 0 \\ 0 & A \end{bmatrix} \begin{bmatrix} R & S \\ S^* & W \end{bmatrix} \begin{bmatrix} F & 0 \\ 0 & A \end{bmatrix}^* = G J G^*,$$

it turns out that

$$\Delta - A \Delta A^* = G_\Delta J G_\Delta^*,$$

where $\Delta = W - S^* R^{-1} S$ is the Schur complement and $G_\Delta$ is a generator with at most as many columns as $G$. By properly defining the blocks and the operators $F$ and $A$, this allows one to find the displacement structure of various composite matrices. For example, choosing $S = I$, $W = 0$, $F = Z$, and $A = Z^*$, so that the Schur complement becomes $-R^{-1}$, gives the previously mentioned result on inverses of structured matrices.
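The inheritance of displacement structure by Schur complements can also be observed numerically. In this Python/NumPy sketch (the Toeplitz blocks are arbitrary illustrative values), the Schur complement of a block matrix with Toeplitz blocks is not itself Toeplitz, yet its displacement rank does not exceed that of the block matrix:

```python
import numpy as np

def toeplitz_sym(col):
    """Symmetric Toeplitz matrix with given first column."""
    n = len(col)
    return np.array([[col[abs(i - j)] for j in range(n)] for i in range(n)])

n = 5
Rb = toeplitz_sym([5.0, 1.0, 0.5, 0.2, 0.1])    # invertible leading block
Sb = toeplitz_sym([1.0, 0.3, 0.2, 0.1, 0.05])
Wb = toeplitz_sym([4.0, 0.8, 0.4, 0.2, 0.1])
M = np.block([[Rb, Sb], [Sb.T, Wb]])

Z = np.eye(n, k=-1)
Fblk = np.block([[Z, np.zeros((n, n))], [np.zeros((n, n)), Z]])
r_M = np.linalg.matrix_rank(M - Fblk @ M @ Fblk.T)

Delta = Wb - Sb.T @ np.linalg.inv(Rb) @ Sb      # Schur complement of Rb in M
r_D = np.linalg.matrix_rank(Delta - Z @ Delta @ Z.T)
print(r_D <= r_M)   # True: the Schur complement inherits low displacement rank
```

Here the block displacement of $M$ is supported only on the first row/column of each block, so $r_M$ stays small independently of $n$, and the theorem guarantees $r_D \le r_M$.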
Computations on structured matrices are made efficient by working not with the $n^2$ entries of $R$, but with the $nr$ entries of the generator matrix $G$. The basic triangular factorization algorithm is Gaussian elimination, which, as noted in the first calculation above, amounts to finding a sequence of Schur complements. Incorporating structure into the Gaussian elimination procedure was, in retrospect, first done (in the special case of displacement rank $2$) by I. Schur himself in a remarkable 1917 paper dealing with the apparently very different problem of checking when a power series is bounded in the unit disc.
The algorithm below is but one generalization of Schur's original recursion. It provides an efficient procedure for the computation of the triangular factors of a Hermitian positive-definite matrix $R$ satisfying

$$R - F R F^* = G J G^*,$$

with $F = Z$, the lower-triangular shift matrix. So, let $R = L D^{-1} L^*$ denote the triangular decomposition of $R$, where $D = \operatorname{diag}(d_0, \dots, d_{n-1})$, and the lower-triangular factor $L$ is normalized in such a way that the $d_i$ appear on its main diagonal. The non-zero parts of the consecutive columns of $L$ will be further denoted by $l_i$. Then it holds that the successive Schur complements of $R$ with respect to its leading $i \times i$ submatrices, denoted by $R_i$, are also structured and satisfy

$$R_i - F_i R_i F_i^* = G_i J G_i^*,$$

with lower-triangular shift matrices $F_i$ (the $(n-i) \times (n-i)$ trailing submatrices of $F$), and where the generator matrices $G_i$ are obtained by the following recursive construction:
start with $G_0 = G$;
repeat for $i = 0, 1, \dots, n-1$:

$$\begin{bmatrix} 0 \\ G_{i+1} \end{bmatrix} = G_i \Theta_i - (I - F_i)\, \frac{l_i}{d_i}\, g_i \Theta_i, \qquad l_i = G_i J g_i^*, \quad d_i = g_i J g_i^*,$$

where $\Theta_i$ is an arbitrary $J$-unitary matrix, and $g_i$ denotes the top row of $G_i$. Then the vectors $l_i$ and scalars $d_i$ generated by the recursion are precisely the (non-zero parts of the) columns of $L$ and the diagonal entries of $D$ in the triangular decomposition $R = L D^{-1} L^*$.
The degree of freedom in choosing $\Theta_i$ is often very useful. One particular choice leads to the so-called proper form of the generator recursion. Let $\Theta_i$ reduce $g_i$ to the form

$$g_i \Theta_i = \begin{bmatrix} \delta_i & 0 & \cdots & 0 \end{bmatrix},$$

with a non-zero scalar entry $\delta_i$ in the leading position. Then, writing $\bar G_i = G_i \Theta_i$, the recursion becomes

$$\begin{bmatrix} 0 \\ G_{i+1} \end{bmatrix} = F_i\, \bar G_i \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \bar G_i \begin{bmatrix} 0 & 0 \\ 0 & I_{r-1} \end{bmatrix},$$

with $l_i$ equal to $\delta_i^*$ times the first column of $\bar G_i$, and $d_i = |\delta_i|^2$.
In words, this shows that $G_{i+1}$ can be obtained from $G_i$ as follows:

1) reduce $g_i$ to proper form;

2) multiply $G_i$ by $\Theta_i$ and keep the last $r - 1$ columns of $G_i \Theta_i$ unaltered;

3) shift down the first column of $G_i \Theta_i$ by one position.
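The proper-form recursion is sketched below for the special case $F = Z$ and a real symmetric positive-definite Toeplitz matrix, where the generator has $r = 2$ columns and the $J$-unitary rotations $\Theta_i$ are hyperbolic rotations. The function name and test data are illustrative assumptions; the sketch returns $L$ and $d$ with $R = L\,\operatorname{diag}(d)^{-1} L^T$ in $O(n^2)$ operations:

```python
import numpy as np

def schur_toeplitz(c):
    """Generalized Schur algorithm (proper form, F = Z) for a symmetric
    positive-definite Toeplitz matrix T = [c_{|i-j|}].  Returns L (lower
    triangular, with d_i on its diagonal) and d such that
    T = L diag(d)^{-1} L^T, using O(n^2) operations."""
    n = len(c)
    c = np.asarray(c, dtype=float)
    # Generator of T - Z T Z^T = x x^T - y y^T  (rank 2, J = diag(1, -1))
    x = c / np.sqrt(c[0])
    y = x.copy(); y[0] = 0.0
    G = np.column_stack([x, y])          # current generator; shrinks each step
    L = np.zeros((n, n)); d = np.zeros(n)
    for i in range(n):
        a, b = G[0]                      # top row g_i
        rho = b / a                      # |rho| < 1 for positive-definite T
        # Hyperbolic (J-unitary) rotation bringing g_i to proper form
        Gp = (G @ np.array([[1.0, -rho], [-rho, 1.0]])) / np.sqrt(1 - rho**2)
        delta = Gp[0, 0]                 # top row of Gp is [delta, 0]
        d[i] = delta**2
        L[i:, i] = delta * Gp[:, 0]      # l_i = delta * (first column of Gp)
        # shift the first column down, keep the second, drop the (zero) top row
        G = np.column_stack([np.concatenate(([0.0], Gp[:-1, 0])),
                             Gp[:, 1]])[1:]
    return L, d

c = np.array([4.0, 1.0, 0.5, 0.25, 0.1])
T = np.array([[c[abs(i - j)] for j in range(len(c))] for i in range(len(c))])
L, d = schur_toeplitz(c)
print(np.allclose(L @ np.diag(1.0 / d) @ L.T, T))   # True
```

Each iteration touches only the $(n-i) \times 2$ generator rather than the full Schur complement, which is where the $O(n^2)$ total cost (instead of $O(n^3)$) comes from.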
Extensions of the algorithm to more general structured matrices are possible (see, e.g., [a2], [a3]). In addition, the algorithm can be extended to provide simultaneous factorizations of both a matrix $R$ and its inverse $R^{-1}$; see, e.g., [a3].
An issue that arises in the study of such fast factorization algorithms is their numerical stability in finite-precision implementations. It was mentioned earlier that the generalized Schur algorithm amounts to combining Gaussian elimination with structure, and it is well known that Gaussian elimination in its purest form is numerically unstable (meaning that the error $\| R - \hat L \hat D^{-1} \hat L^* \|$ in the factorization can be quite large, where $\hat L$ and $\hat D$ denote the computed $L$ and $D$, respectively). The instability can often be controlled by resorting to pivoting techniques, i.e., by permuting the order of the rows, and perhaps columns, of the matrices before the Gaussian elimination steps. However, pivoting can destroy matrix structure and can thus lead to a loss in computational efficiency. It was observed in [a6], though, that for diagonal displacement operators $F$, and more specifically for Cauchy-like matrices, partial pivoting does not destroy matrix structure. This fact was exploited in [a7] in the context of the generalized Schur algorithm and applied to other structured matrices. This was achieved by showing how to transform different kinds of matrix structure into Cauchy-like structure.
Partial pivoting by itself is not sufficient to guarantee numerical stability even for slow algorithms. Moreover, the above transformation-and-pivoting technique is only efficient for fixed-order problems, since the transformations have to be repeated afresh whenever the order changes. Another approach to the numerical stability of the generalized Schur algorithm is to examine the steps of the algorithm directly and to stabilize them without resorting to transformations among matrix structures. This was done in [a8], where it was shown, in particular, how to implement the hyperbolic rotations in a reliable manner. For all practical purposes, the main conclusion of [a8] is that the generalized Schur algorithm, with certain modifications, is backward stable for a large class of structured matrices.
[a1] T. Kailath, S.Y. Kung, M. Morf, "Displacement ranks of a matrix", Bull. Amer. Math. Soc., 1:5 (1979), pp. 769–773
[a2] T. Kailath, A.H. Sayed, "Displacement structure: theory and applications", SIAM Review, 37 (1995), pp. 297–386
[a3] T. Kailath, "Displacement structure and array algorithms", in T. Kailath, A.H. Sayed (eds.), Fast Reliable Algorithms for Matrices with Structure, SIAM (Soc. Industrial Applied Math.) (1999)
[a4] G. Heinig, K. Rost, "Algebraic Methods for Toeplitz-like Matrices and Operators", Akademie-Verlag (1984)
[a5] H. Lev-Ari, T. Kailath, "Triangular factorization of structured Hermitian matrices", in I. Gohberg et al. (eds.), Schur Methods in Operator Theory and Signal Processing, Birkhäuser (1986), pp. 301–324
[a6] G. Heinig, "Inversion of generalized Cauchy matrices and other classes of structured matrices", in A. Bojanczyk, G. Cybenko (eds.), Linear Algebra for Signal Processing, IMA Volumes in Mathematics and its Applications, 69 (1994), pp. 95–114
[a7] I. Gohberg, T. Kailath, V. Olshevsky, "Fast Gaussian elimination with partial pivoting for matrices with displacement structure", Math. Comput., 64 (1995), pp. 1557–1576
[a8] S. Chandrasekaran, A.H. Sayed, "Stabilizing the generalized Schur algorithm", SIAM J. Matrix Anal. Appl., 17 (1996), pp. 950–983
Displacement structure. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Displacement_structure&oldid=16958