Normal form (for matrices)


The normal form of a matrix $ A $ is a matrix $ N $ of a pre-assigned special form obtained from $ A $ by means of transformations of a prescribed type. One distinguishes various normal forms, depending on the type of transformations in question, on the domain $ K $ to which the coefficients of $ A $ belong, on the form of $ A $, and, finally, on the specific nature of the problem to be solved (for example, on the desirability of extending or not extending $ K $ on transition from $ A $ to $ N $, on the necessity of determining $ N $ from $ A $ uniquely or with a certain amount of arbitrariness). Frequently, instead of "normal form" one uses the term "canonical form of a matrix". Among the classical normal forms are the following. (Henceforth $ M _ {m \times n } ( K) $ denotes the set of all matrices of $ m $ rows and $ n $ columns with coefficients in $ K $.)

The Smith normal form.

Let $ K $ be either the ring of integers $ \mathbf Z $ or the ring $ F[ \lambda ] $ of polynomials in $ \lambda $ with coefficients in a field $ F $. A matrix $ B \in M _ {m \times n } ( K) $ is called equivalent to a matrix $ A \in M _ {m \times n } ( K) $ if there are invertible matrices $ C \in M _ {m \times m } ( K) $ and $ D \in M _ {n \times n } ( K) $ such that $ B = C A D $. Here $ B $ is equivalent to $ A $ if and only if $ B $ can be obtained from $ A $ by a sequence of elementary row-and-column transformations, that is, transformations of the following three types: a) permutation of the rows (or columns); b) addition to one row (or column) of another row (or column) multiplied by an element of $ K $; or c) multiplication of a row (or column) by an invertible element of $ K $. For transformations of this kind the following propositions hold: Every matrix $ A \in M _ {m \times n } ( K) $ is equivalent to a matrix $ N \in M _ {m \times n } ( K) $ of the form

$$ N = \left \| \begin{array}{cccccccc} d _ {1} &{} &{} &{} &{} &{} &{} & 0 \\ {} &\cdot &{} &{} &{} &{} &{} &{} \\ {} &{} &\cdot &{} &{} &{} &{} &{} \\ {} &{} &{} &d _ {r} &{} &{} &{} &{} \\ {} &{} &{} &{} & 0 &{} &{} &{} \\ {} &{} &{} &{} &{} &\cdot &{} &{} \\ {} &{} &{} &{} &{} &{} &\cdot &{} \\ 0 &{} &{} &{} &{} &{} &{} & 0 \\ \end{array} \right \| , $$

where $ d _ {i} \neq 0 $ for all $ i $; $ d _ {i} $ divides $ d _ {i+1} $ for $ i = 1 \dots r - 1 $; and if $ K = \mathbf Z $, then all $ d _ {i} $ are positive; if $ K = F [ \lambda ] $, then the leading coefficients of all polynomials $ d _ {i} $ are 1. This matrix is called the Smith normal form of $ A $. The $ d _ {i} $ are called the invariant factors of $ A $ and the number $ r $ is called its rank. The Smith normal form of $ A $ is uniquely determined and can be found as follows. The rank $ r $ of $ A $ is the order of the largest non-zero minor of $ A $. Suppose that $ 1 \leq j \leq r $; then among all minors of $ A $ of order $ j $ there is at least one non-zero. Let $ \Delta _ {j} $, $ j = 1 \dots r $, be the greatest common divisor of all non-zero minors of $ A $ of order $ j $ (normalized by the condition $ \Delta _ {j} > 0 $ for $ K = \mathbf Z $ and such that the leading coefficient of $ \Delta _ {j} $ is 1 for $ K = F [ \lambda ] $), and let $ \Delta _ {0} = 1 $. Then $ d _ {j} = \Delta _ {j} / \Delta _ {j-1} $, $ j = 1 \dots r $. The invariant factors form a full set of invariants of the classes of equivalent matrices: Two matrices in $ M _ {m \times n } ( K) $ are equivalent if and only if their ranks and their invariant factors with equal indices are equal.
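
For small matrices the minor-gcd recipe just described can be carried out directly. The following minimal sketch (Python; an illustration, not part of the original article) computes the $ \Delta _ {j} $ of an integer matrix as greatest common divisors of $ j \times j $ minors and then the invariant factors $ d _ {j} = \Delta _ {j} / \Delta _ {j-1} $.

from itertools import combinations
from math import gcd

def det_int(M):
    # integer determinant by cofactor expansion (adequate for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det_int([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(A):
    # Delta_j = gcd of all j x j minors; d_j = Delta_j / Delta_{j-1}
    m, n = len(A), len(A[0])
    deltas = [1]                                      # Delta_0 = 1
    for j in range(1, min(m, n) + 1):
        g = 0
        for rows in combinations(range(m), j):
            for cols in combinations(range(n), j):
                g = gcd(g, abs(det_int([[A[r][c] for c in cols] for r in rows])))
        if g == 0:                                    # every minor of order j vanishes,
            break                                     # so the rank is j - 1
        deltas.append(g)
    return [deltas[j] // deltas[j - 1] for j in range(1, len(deltas))]

A = [[2, 4, 4], [-6, 6, 12], [10, -4, -16]]
print(invariant_factors(A))                           # [2, 6, 12], so N = diag(2, 6, 12)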

The invariant factors $ d _ {1} \dots d _ {r} $ split (in a unique manner, up to the order of the factors) into the product of powers of irreducible elements $ e _ {1} \dots e _ {s} $ of $ K $( which are positive integers $ > 1 $ when $ K = \mathbf Z $, and polynomials of positive degree with leading coefficient 1 when $ K = F [ \lambda ] $):

$$ d _ {i} = \ e _ {1} ^ {n _ {i1} } \dots e _ {s} ^ {n _ {is} } ,\ \ i = 1 \dots r , $$

where the $ n _ {ij} $ are non-negative integers. Every factor $ e _ {j} ^ {n _ {ij} } $ for which $ n _ {ij} > 0 $ is called an elementary divisor of $ A $( over $ K $). Every elementary divisor of $ A $ occurs in the set $ {\mathcal E} _ {A , K } $ of all elementary divisors of $ A $ with multiplicity equal to the number of invariant factors having this divisor in their decompositions. In contrast to the invariant factors, the elementary divisors depend on the ring $ K $ over which $ A $ is considered: If $ K = F [ \lambda ] $, $ \widetilde{F} $ is an extension of $ F $ and $ \widetilde{K} = \widetilde{F} [ \lambda ] $, then, in general, a matrix $ A \in M _ {m \times n } ( K) \subset M _ {m \times n } ( \widetilde{K} ) $ has distinct elementary divisors (but the same invariant factors), depending on whether $ A $ is regarded as an element of $ M _ {m \times n } ( K) $ or of $ M _ {m \times n } ( \widetilde{K} ) $. The invariant factors can be recovered from the complete collection of elementary divisors, and vice versa.

For a practical method of finding the Smith normal form see, for example, [1].

The main result on the Smith normal form was obtained for $ K = \mathbf Z $ (see [7]) and $ K = F [ \lambda ] $ (see [8]). With practically no changes, the theory of Smith normal forms goes over to the case when $ K $ is any principal ideal ring (see [3], [6]). The Smith normal form has important applications; for example, the structure theory of finitely-generated modules over principal ideal rings is based on it (see [3], [6]); in particular, this holds for the theory of finitely-generated Abelian groups and the theory of the Jordan normal form (see below).

The natural normal form

Let $ K $ be a field. Two square matrices $ A , B \in M _ {n \times n } ( K) $ are called similar over $ K $ if there is a non-singular matrix $ C \in M _ {n \times n } ( K) $ such that $ B = C ^ {-1} A C $. There is a close link between similarity and equivalence: Two matrices $ A , B \in M _ {n \times n } ( K) $ are similar if and only if the matrices $ \lambda E - A $ and $ \lambda E - B $, where $ E $ is the identity matrix, are equivalent. Thus, for the similarity of $ A $ and $ B $ it is necessary and sufficient that all invariant factors, or, what is the same, the collection of elementary divisors over $ K [ \lambda ] $, of $ \lambda E - A $ and $ \lambda E - B $ are the same. For a practical method of finding a $ C $ for similar matrices $ A $ and $ B $, see [1], [4].

The matrix $ \lambda E - A $ is called the characteristic matrix of $ A \in M _ {n \times n } ( K) $, and the invariant factors of $ \lambda E - A $ are called the similarity invariants of $ A $; there are $ n $ of them, say $ d _ {1} \dots d _ {n} $. The product $ d _ {1} \dots d _ {n} $ equals the determinant of $ \lambda E - A $ and is called the characteristic polynomial of $ A $. Suppose that $ d _ {1} = \dots = d _ {q} = 1 $ and that for $ j \geq q + 1 $ the degree of $ d _ {j} $ is at least 1. Then $ A $ is similar over $ K $ to a block-diagonal matrix $ N _ {1} \in M _ {n \times n } ( K) $ of the form

$$ N _ {1} = \left \| \begin{array}{ccccc} L ( d _ {q+1} ) &{} &{} &{} & 0 \\ {} &\cdot &{} &{} &{} \\ {} &{} &\cdot &{} &{} \\ {} &{} &{} &\cdot &{} \\ 0 &{} &{} &{} &L ( d _ {n} ) \\ \end{array} \right \| , $$

where $ L ( f ) $ for a polynomial

$$ f = \lambda ^ {p} + \alpha _ {1} \lambda ^ {p-1} + \dots + \alpha _ {p} $$

denotes the so-called companion matrix

$$ L ( f ) = \ \left \| \begin{array}{cccccc} 0 & 1 & 0 &\dots & 0 & 0 \\ 0 & 0 & 1 &\dots & 0 & 0 \\ \cdot &\cdot &\cdot &\dots &\cdot &\cdot \\ 0 & 0 & 0 &\dots & 0 & 1 \\ {- \alpha _ {p} } &{- \alpha _ {p-1} } &{- \alpha _ {p-2} } &\dots &{- \alpha _ {2} } &{- \alpha _ {1} } \\ \end{array} \right \| . $$

The matrix $ N _ {1} $ is uniquely determined from $ A $ and is called the first natural normal form of $ A $ (see [1], [2]).
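
As an illustration, the following short sketch (Python with NumPy; illustrative only, not part of the original article) builds the companion matrix $ L ( f ) $ displayed above from the coefficients $ \alpha _ {1} \dots \alpha _ {p} $ and checks numerically that its characteristic polynomial is $ f $ again.

import numpy as np

def companion(alphas):
    # companion matrix of f = x^p + alphas[0] x^(p-1) + ... + alphas[p-1]
    p = len(alphas)
    L = np.zeros((p, p))
    L[:-1, 1:] = np.eye(p - 1)                    # super-diagonal of 1's
    L[-1, :] = [-a for a in reversed(alphas)]     # last row: -alpha_p, ..., -alpha_1
    return L

L = companion([0.0, -7.0, -6.0])                  # f = x^3 - 7x - 6 = (x + 1)(x + 2)(x - 3)
print(np.poly(L))                                 # characteristic polynomial: approx. [1, 0, -7, -6]
print(np.linalg.eigvals(L))                       # roots of f: approximately -2, -1, 3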

Now let $ {\mathcal E} _ {A , K [ \lambda ] } $ be the collection of all elementary divisors of $ \lambda E - A $. Then $ A $ is similar over $ K $ to a block-diagonal matrix $ N _ {2} $( cf. Block-diagonal operator) whose blocks are the companion matrices of all elementary divisors $ e _ {j} ^ {n _ {ij} } \in {\mathcal E} _ {A , K [ \lambda ] } $ of $ \lambda E - A $:

$$ N _ {2} = \ \left \| \begin{array}{ccccc} \cdot &{} &{} &{} & 0 \\ {} &\cdot &{} &{} &{} \\ {} &{} &L ( e _ {j} ^ {n _ {ij} } ) &{} &{} \\ {} &{} &{} &\cdot &{} \\ 0 &{} &{} &{} &\cdot \\ \end{array} \right \| . $$

The matrix $ N _ {2} $ is determined from $ A $ only up to the order of the blocks along the main diagonal; it is called the second natural normal form of $ A $ (see [1], [2]), or its Frobenius, rational or quasi-natural normal form (see [4]). In contrast to the first, the second natural form changes, generally speaking, on transition from $ K $ to an extension.

The Jordan normal form.

Let $ K $ be a field, let $ A \in M _ {n \times n } ( K) $, and let $ {\mathcal E} _ {A , K [ \lambda ] } = \{ e _ {i} ^ {n _ {ij} } \} $ be the collection of all elementary divisors of $ \lambda E - A $ over $ K [ \lambda ] $. Suppose that $ K $ has the property that the characteristic polynomial of $ A $ splits in $ K [ \lambda ] $ into linear factors. (This is so, for example, if $ K $ is the field of complex numbers or, more generally, any algebraically closed field.) Then every one of the polynomials $ e _ {i} $ has the form $ \lambda - a _ {i} $ for some $ a _ {i} \in K $, and, accordingly, $ e _ {i} ^ {n _ {ij} } $ has the form $ ( \lambda - a _ {i} ) ^ {n _ {ij} } $. The matrix $ J ( f ) $ in $ M _ {s \times s } ( K) $ of the form

$$ J ( f ) = \ \left \| \begin{array}{cccccc} a & 1 &{} &{} &{} & 0 \\ {} &\cdot &{} &{} &{} &{} \\ {} &{} &\cdot &{} &{} &{} \\ {} &{} &{} &\cdot &{} &{} \\ {} &{} &{} &{} &\cdot & 1 \\ 0 &{} &{} &{} &{} & a \\ \end{array} \right \| , $$

where $ f = ( \lambda - a ) ^ {s} $, $ a \in K $, is called the hypercompanion matrix of $ f $ (see [1]) or the Jordan block of order $ s $ with eigenvalue $ a $. The following fundamental proposition holds: A matrix $ A $ is similar over $ K $ to a block-diagonal matrix $ J \in M _ {n \times n } ( K) $ whose blocks are the hypercompanion matrices of all elementary divisors of $ \lambda E - A $:

$$ J = \left \| \begin{array}{ccccc} \cdot &{} &{} &{} & 0 \\ {} &\cdot &{} &{} &{} \\ {} &{} &J ( e _ {i} ^ {n _ {ij} } ) &{} &{} \\ {} &{} &{} &\cdot &{} \\ 0 &{} &{} &{} &\cdot \\ \end{array} \right \| . $$

The matrix $ J $ is determined only up to the order of the blocks along the main diagonal; it is a Jordan matrix and is called the Jordan normal form of $ A $. If $ K $ does not have the property mentioned above, then $ A $ cannot be brought, over $ K $, to the Jordan normal form (but it can over a finite extension of $ K $). See [4] for information about the so-called generalized Jordan normal form, reduction to which is possible over any field $ K $.
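
The Jordan structure can be read off from ranks: for an eigenvalue $ a $, the number of Jordan blocks of order $ \geq k $ equals $ \mathop{\rm rank} ( A - a E ) ^ {k-1} - \mathop{\rm rank} ( A - a E ) ^ {k} $. The following small sketch (Python with NumPy; illustrative only, with exact integer data so that floating-point rank computations are unproblematic) recovers the block sizes of a matrix similar to the direct sum of a Jordan block of order 2 and one of order 1 with eigenvalue 5.

import numpy as np

def jordan_block_sizes(A, a):
    # number of blocks of order >= k is rank((A - aE)^(k-1)) - rank((A - aE)^k)
    n = A.shape[0]
    ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(A - a * np.eye(n), k))
             for k in range(n + 1)]
    at_least = [ranks[k - 1] - ranks[k] for k in range(1, n + 1)]
    sizes = []
    for k in range(n, 0, -1):
        exact = at_least[k - 1] - (at_least[k] if k < n else 0)
        sizes += [k] * exact
    return sizes

J = np.array([[5., 1., 0.], [0., 5., 0.], [0., 0., 5.]])        # blocks of order 2 and 1
C = np.array([[1., 1., 0.], [0., 1., 1.], [0., 0., 1.]])
Cinv = np.array([[1., -1., 1.], [0., 1., -1.], [0., 0., 1.]])   # exact inverse of C
A = C @ J @ Cinv                                                # a matrix similar to J
print(jordan_block_sizes(A, 5.0))                               # [2, 1]
# For data that is not exact, pass an explicit tolerance to matrix_rank.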

Apart from the various normal forms for arbitrary matrices, there are also special normal forms of special matrices. Classical examples are the normal forms of symmetric and skew-symmetric matrices. Let $ K $ be a field. Two matrices $ A , B \in M _ {n \times n } ( K) $ are called congruent (see [1]) if there is a non-singular matrix $ C \in M _ {n \times n } ( K) $ such that $ B = C ^ {T} A C $. Normal forms under the congruence relation have been investigated most thoroughly for the classes of symmetric and skew-symmetric matrices. Suppose that $ \mathop{\rm char} K \neq 2 $ and that $ A $ is skew-symmetric, that is, $ A ^ {T} = - A $. Then $ A $ is congruent to a uniquely determined matrix $ H $ of the form

$$ H = \left \| \begin{array}{rcrccrcccc} 0 & 1 &{} &{} &{} &{} &{} &{} &{} &{} \\ - 1 & 0 &{} &{} &{} &{} &{} &{} &{} &{} \\ {} &{} & 0 & 1 &{} &{} &{} &{} &{} &{} \\ {} &{} &- 1 & 0 &{} &{} &{} &{} &{} &{} \\ {} &{} &{} &{} &\cdot &{} &{} &{} &{} &{} \\ {} &{} &{} &{} &{} & 0 & 1 &{} &{} &{} \\ {} &{} &{} &{} &{} &- 1 & 0 &{} &{} &{} \\ {} &{} &{} &{} &{} &{} &{} & 0 &{} &{} \\ {} &{} &{} &{} &{} &{} &{} &{} &\cdot &{} \\ {} &{} &{} &{} &{} &{} &{} &{} &{} & 0 \\ \end{array} \right \| , $$

which can be regarded as the normal form of $ A $ under congruence. If $ A $ is symmetric, that is, $ A ^ {T} = A $, then it is congruent to a matrix $ D $ of the form

$$ D = \left \| \begin{array}{cccccccc} \epsilon _ {1} &{} &{} &{} &{} &{} &{} & 0 \\ {} &\cdot &{} &{} &{} &{} &{} &{} \\ {} &{} &\cdot &{} &{} &{} &{} &{} \\ {} &{} &{} &\epsilon _ {r} &{} &{} &{} &{} \\ {} &{} &{} &{} & 0 &{} &{} &{} \\ {} &{} &{} &{} &{} &\cdot &{} &{} \\ {} &{} &{} &{} &{} &{} &\cdot &{} \\ 0 &{} &{} &{} &{} &{} &{} & 0 \\ \end{array} \right \| , $$

where $ \epsilon _ {i} \neq 0 $ for all $ i $. The number $ r $ is the rank of $ A $ and is uniquely determined. The subsequent finer choice of the $ \epsilon _ {i} $ depends on the properties of $ K $. Thus, if $ K $ is algebraically closed, one may assume that $ \epsilon _ {1} = \dots = \epsilon _ {r} = 1 $; if $ K $ is the field of real numbers, one may assume that $ \epsilon _ {1} = \dots = \epsilon _ {p} = 1 $ and $ \epsilon _ {p+1} = \dots = \epsilon _ {r} = - 1 $ for a certain $ p $. $ D $ is uniquely determined by these properties and can be regarded as the normal form of $ A $ under congruence. See [6], [10] and Quadratic form for information about the normal forms of symmetric matrices for a number of other fields, and also about Hermitian analogues of this theory.
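
Over the field of real numbers the reduction of a symmetric matrix can be carried out with an orthogonal eigendecomposition, since $ A = Q \Lambda Q ^ {T} $ is itself a congruence; rescaling the columns of $ Q $ then yields $ D $ with entries $ \pm 1 $ and $ 0 $. A minimal numerical sketch follows (Python with NumPy; illustrative only and specific to $ K = \mathbf R $).

import numpy as np

def congruence_normal_form(A, tol=1e-10):
    # A = Q diag(lam) Q^T with Q orthogonal; scale column i by 1/sqrt(|lam_i|) if lam_i != 0
    lam, Q = np.linalg.eigh(A)
    scale = np.ones_like(lam)
    nonzero = np.abs(lam) > tol
    scale[nonzero] = 1.0 / np.sqrt(np.abs(lam[nonzero]))
    C = Q * scale                               # C^T A C is diagonal with entries +1, -1, 0
    return C.T @ A @ C, C

A = np.array([[2., 1., 0.], [1., 2., 0.], [0., 0., -3.]])
D, C = congruence_normal_form(A)
print(np.round(np.diag(D), 6))                  # [-1.  1.  1.]: rank r = 3, p = 2 (up to ordering)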

A common feature in the theories of normal forms considered above (and also in others) is the fact that the admissible transformations over the relevant set of matrices are determined by the action of a certain group, so that the classes of matrices that can be carried into each other by means of these transformations are the orbits (cf. Orbit) of this group, and the appropriate normal form is the result of selecting in each orbit a certain canonical representative. Thus, the classes of equivalent matrices are the orbits of the group $ G = \mathop{\rm GL} _ {m} ( K) \times \mathop{\rm GL} _ {n} ( K) $ (where $ \mathop{\rm GL} _ {s} ( K) $ is the group of invertible square matrices of order $ s $ with coefficients in $ K $), acting on $ M _ {m \times n } ( K) $ by the rule $ A \rightarrow C ^ {-1} A D $, where $ ( C , D ) \in G $. The classes of similar matrices are the orbits of $ \mathop{\rm GL} _ {n} ( K) $ on $ M _ {n \times n } ( K) $ acting by the rule $ A \rightarrow C ^ {-1} A C $, where $ C \in \mathop{\rm GL} _ {n} ( K) $. The classes of congruent symmetric or skew-symmetric matrices are the orbits of the group $ \mathop{\rm GL} _ {n} ( K) $ on the set of all symmetric or skew-symmetric matrices of order $ n $, acting by the rule $ A \rightarrow C ^ {T} A C $, where $ C \in \mathop{\rm GL} _ {n} ( K) $. From this point of view every normal form is a specific example of the solution of part of the general problem of orbital decomposition for the action of a certain transformation group.

References

[1] M. Marcus, H. Minc, "A survey of matrix theory and matrix inequalities" , Allyn & Bacon (1964)
[2] P. Lancaster, "Theory of matrices" , Acad. Press (1969) MR0245579 Zbl 0186.05301
[3] S. Lang, "Algebra" , Addison-Wesley (1974) MR0783636 Zbl 0712.00001
[4] A.I. Mal'tsev, "Foundations of linear algebra" , Freeman (1963) (Translated from Russian) Zbl 0396.15001
[5] N. Bourbaki, "Elements of mathematics. Algebra: Modules. Rings. Forms" , 2 , Addison-Wesley (1975) pp. Chapt.4;5;6 (Translated from French) MR0643362 Zbl 1139.12001
[6] N. Bourbaki, "Elements of mathematics. Algebra: Algebraic structures. Linear algebra" , 1 , Addison-Wesley (1974) pp. Chapt.1;2 (Translated from French) MR0354207
[7] H.J.S. Smith, "On systems of linear indeterminate equations and congruences" , Collected Math. Papers , 1 , Chelsea, reprint (1979) pp. 367–409
[8] G. Frobenius, "Theorie der linearen Formen mit ganzen Coeffizienten" J. Reine Angew. Math. , 86 (1879) pp. 146–208
[9] F.R. [F.R. Gantmakher] Gantmacher, "The theory of matrices" , 1 , Chelsea, reprint (1977) (Translated from Russian) MR1657129 MR0107649 MR0107648 Zbl 0927.15002 Zbl 0927.15001 Zbl 0085.01001
[10] J.-P. Serre, "A course in arithmetic" , Springer (1973) (Translated from French) MR0344216 Zbl 0256.12001

Comments

The Smith canonical form and a canonical form related to the first natural normal form are of substantial importance in linear control and system theory [a1], [a2]. Here one studies systems of equations $ \dot{x} = A x + B u $, $ x \in \mathbf R ^ {n} $, $ u \in \mathbf R ^ {m} $, and the similarity relation is: $ ( A , B ) \sim ( S A S ^ {-1} , S B ) $. A pair of matrices $ A \in \mathbf R ^ {n \times n } $, $ B \in \mathbf R ^ {n \times m } $ is called completely controllable if the rank of the block matrix

$$ ( B , A B \dots A ^ {n} B ) = R ( A , B ) $$

is $ n $. Observe that $ R ( S A S ^ {-1} , S B ) = S R ( A , B ) $, so that a canonical form can be formed by selecting $ n $ independent column vectors from $ R ( A , B ) $. This can be done in many ways. The most common one is to test the columns of $ R ( A , B ) $ for independence in the order in which they appear in $ R ( A , B ) $. This yields the following so-called Brunovskii–Luenberger canonical form or block companion canonical form for a completely-controllable pair $ ( A , B ) $:

$$ \overline{A}\; = S ^ {-1} A S = \ \left \| \begin{array}{ccc} \overline{A}\; _ {11} &\dots &\overline{A}\; _ {1m} \\ \cdot &{} &\cdot \\ \cdot &{} &\cdot \\ \cdot &{} &\cdot \\ \overline{A}\; _ {m1} &\dots &\overline{A}\; _ {mm} \\ \end{array} \right \| , $$

$$ \overline{B}\; = S ^ {-1} B = ( \overline{b}\; _ {1} \dots \overline{b}\; _ {m} ) , $$

where $ \overline{A}\; _ {ij} $ is a matrix of size $ d _ {i} \times d _ {j} $ for certain $ d _ {i} \in \mathbf N \cup \{ 0 \} $, $ \sum _ {i=1} ^ {m} d _ {i} = n $, of the form

$$ \overline{A}\; _ {ii} = \left \| \begin{array}{ccccccc} 0 & 0 & 0 &\dots & 0 & 0 &* \\ 1 & 0 & 0 &\dots & 0 & 0 &* \\ 0 & 1 & 0 &\dots & 0 & 0 &* \\ \cdot &\cdot &\cdot &{} &\cdot &\cdot &\cdot \\ 0 & 0 & 0 &\dots & 0 & 1 &* \\ \end{array} \right \| , $$

$$ \overline{A}\; _ {ij} = \left \| \begin{array}{cccc} 0 &\dots & 0 &* \\ 0 &\dots & 0 &* \\ \cdot &{} &\cdot &\cdot \\ 0 &\dots & 0 &* \\ \end{array} \right \| \ \textrm{ for } i \neq j , $$

and $ \overline{b}\; _ {j} $ for $ d _ {j} \neq 0 $ is the $ ( d _ {1} + \dots + d _ {j-1} + 1 ) $-th standard basis vector of $ \mathbf R ^ {n} $; the $ \overline{b}\; _ {j} $ with $ d _ {j} = 0 $ have arbitrary coefficients $ * $. Here the $ * $'s denote coefficients which can take any value. If $ d _ {i} $ or $ d _ {j} $ is zero, the block $ \overline{A}\; _ {ij} $ is empty (does not occur). Instead of $ \mathbf R $ any field can be used. The $ d _ {j} $ are called controllability indices or Kronecker indices. They are invariants.
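
A small computational sketch (Python with NumPy; an illustration under the definitions above, not part of the original comment) forms $ R ( A , B ) $, tests complete controllability, and obtains the controllability indices $ d _ {j} $ by testing the columns of $ R ( A , B ) $ for independence in the order in which they appear.

import numpy as np

def controllability_data(A, B, tol=1e-10):
    n, m = B.shape
    blocks = [B]
    for _ in range(n):                       # R(A, B) = (B, AB, ..., A^n B)
        blocks.append(A @ blocks[-1])
    R = np.hstack(blocks)
    controllable = np.linalg.matrix_rank(R, tol=tol) == n
    selected = np.zeros((n, 0))
    d = [0] * m                              # controllability (Kronecker) indices
    for k, col in enumerate(R.T):            # columns of R in their natural order
        trial = np.hstack([selected, col.reshape(n, 1)])
        if np.linalg.matrix_rank(trial, tol=tol) > selected.shape[1]:
            selected = trial
            d[k % m] += 1                    # column k of R belongs to the chain of b_(k % m + 1)
    return controllable, d

A = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])
B = np.array([[1., 0.], [0., 0.], [0., 1.]])
print(controllability_data(A, B))            # (True, [1, 2])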

Canonical forms are often used in (numerical) computations. This must be done with caution, because they may not depend continuously on the parameters [a3]. For example, the Jordan canonical form is not continuous; an example of this is:

$$ \left \| \begin{array}{cc} 1 & t \\ 0 & 1 \\ \end{array} \right \| \ \mapsto \left \| \begin{array}{cc} 1 & 1 \\ 0 & 1 \\ \end{array} \right \| \ \textrm{ for } t \neq 0 , $$

$$ \left \| \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right \| \ \mapsto \left \| \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right \| . $$
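
Numerically the jump can be seen from $ \mathop{\rm rank} ( A - E ) $: for every $ t \neq 0 $, however small, there is a single Jordan block of order 2, while at $ t = 0 $ there are two blocks of order 1. A tiny sketch (Python with NumPy; illustrative only):

import numpy as np

def number_of_jordan_blocks(t):
    # for the eigenvalue 1, the number of blocks is 2 - rank(A - E)
    A = np.array([[1.0, t], [0.0, 1.0]])
    return 2 - np.linalg.matrix_rank(A - np.eye(2))

print(number_of_jordan_blocks(1e-12))   # 1: a single block of order 2, for any t != 0
print(number_of_jordan_blocks(0.0))     # 2: the identity, two blocks of order 1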

The matter of continuous canonical forms has much to do with moduli problems (cf. Moduli theory). Related is the matter of canonical forms for families of objects, e.g. canonical forms for holomorphic families of matrices under similarity [a4]. For a survey of moduli-type questions in linear control theory cf. [a5].

In the case of a controllable pair $ ( A , B ) $ with $ m = 1 $, i.e. $ B $ is a vector $ b \in \mathbf R ^ {n} $, the matrix $ A $ is cyclic, see also the section below on normal forms for operators. In this special case there is just one block $ \overline{A}\; _ {11} $ (and one vector $ \overline{b}\; _ {1} $). This canonical form for a cyclic matrix with a cyclic vector is also called the Frobenius canonical form or the companion canonical form.

References

[a1] W.A. Wolovich, "Linear multivariable systems" , Springer (1974) MR0359881 Zbl 0291.93002
[a2] J. Klamka, "Controllability of dynamical systems" , Kluwer (1990) MR2461640 MR1325771 MR1134783 MR0707724 MR0507539 Zbl 0911.93015 Zbl 0876.93016 Zbl 0930.93008 Zbl 1043.93509 Zbl 0853.93020 Zbl 0852.93007 Zbl 0818.93002 Zbl 0797.93004 Zbl 0814.93012 Zbl 0762.93006 Zbl 0732.93008 Zbl 0671.93040 Zbl 0667.93007 Zbl 0666.93009 Zbl 0509.93012 Zbl 0393.93041
[a3] G.H. Golub, J.H. Wilkinson, "Ill-conditioned eigensystems and the computation of the Jordan canonical form" SIAM Rev. , 18 (1976) pp. 578–619 MR0413456 Zbl 0341.65027
[a4] V.I. Arnol'd, "On matrices depending on parameters" Russ. Math. Surv. , 26 : 2 (1971) pp. 29–43 Uspekhi Mat. Nauk , 26 : 2 (1971) pp. 101–114 Zbl 0259.15011
[a5] M. Hazewinkel, "(Fine) moduli spaces for linear systems: what are they and what are they good for" C.I. Byrnes (ed.) C.F. Martin (ed.) , Geometrical Methods for the Theory of Linear Systems , Reidel (1980) pp. 125–193 MR0608993 Zbl 0481.93023
[a6] H.W. Turnbull, A.C. Aitken, "An introduction to the theory of canonical matrices" , Blackie & Son (1932)

Self-adjoint operator on a Hilbert space

A normal form of an operator is a representation, up to an isomorphism, of a self-adjoint operator $ A $ acting on a Hilbert space $ {\mathcal H} $ as an orthogonal sum of multiplication operators by the independent variable.

To begin with, suppose that $ A $ is a cyclic operator; this means that there is an element $ h _ {0} \in {\mathcal H} $ such that every element $ h \in {\mathcal H} $ has a unique representation in the form $ F ( A) h _ {0} $, where $ F ( \xi ) $ is a function for which

$$ \int\limits _ {- \infty } ^ { {+ } \infty } | F ( \xi ) | ^ {2} d ( E _ \xi h _ {0} , h _ {0} ) < \infty ; $$

here $ E _ \xi $, $ - \infty < \xi < \infty $, is the spectral resolution of $ A $. Let $ {\mathcal L} _ \rho ^ {2} $ be the space of square-integrable functions on $ ( - \infty , + \infty ) $ with weight $ \rho ( \xi ) = ( E _ \xi h _ {0} , h _ {0} ) $, and let $ K _ \rho F = \xi F ( \xi ) $ be the multiplication operator by the independent variable, with domain of definition

$$ D _ {K _ \rho } = \ \left \{ { F ( \xi ) } : { \int\limits _ {- \infty } ^ { {+ } \infty } \xi ^ {2} | F ( \xi ) | ^ {2} d \rho ( \xi ) < \infty } \right \} . $$

Then the operators $ A $ and $ K _ \rho $ are isomorphic, $ A \simeq K _ \rho $; that is, there exists an isometric isomorphism $ U : {\mathcal H} \rightarrow {\mathcal L} _ \rho ^ {2} $ such that $ U D _ {A} = D _ {K _ \rho } $ and $ A = U ^ {-1} K _ \rho U $.

Suppose, next, that $ A $ is an arbitrary self-adjoint operator. Then $ {\mathcal H} $ can be split into an orthogonal sum of subspaces $ {\mathcal H} _ \alpha $ on each of which $ A $ induces a cyclic operator $ A _ \alpha $, so that $ {\mathcal H} = \sum \oplus {\mathcal H} _ \alpha $, $ A = \sum \oplus A _ \alpha $ and $ A _ \alpha \simeq K _ {\rho _ \alpha } $. If the operator $ K = \sum \oplus K _ {\rho _ \alpha } $ is given on $ {\mathcal L} ^ {2} = \sum \oplus {\mathcal L} _ {\rho _ \alpha } ^ {2} $, then $ A \simeq K $.

The operator $ K $ is called the normal form or canonical representation of $ A $. The theorem on the canonical representation extends to the case of arbitrary normal operators (cf. Normal operator).

References

[1] A.I. Plesner, "Spectral theory of linear operators" , F. Ungar (1965) (Translated from Russian) MR0194900 Zbl 0188.44402 Zbl 0185.21002
[2] N.I. Akhiezer, I.M. Glazman, "Theory of linear operators in Hilbert spaces" , 1–2 , Pitman (1981) (Translated from Russian) MR0615737 MR0615736

V.I. Sobolev

The normal form of an operator $ A $ is a representation of $ A $, acting on a Fock space constructed over a certain space $ L _ {2} ( M , \sigma ) $, where $ ( M , \sigma ) $ is a measure space, in the form of a sum

$$ \tag{1 } A = \sum _ {m , n \geq 0 } \int\limits K _ {n,m} ( x _ {1} \dots x _ {n} ; \ y _ {1} \dots y _ {m} ) \times $$

$$ \times a ^ {*} ( x _ {1} ) \dots a ^ {*} ( x _ {n} ) a ( y _ {1} ) \dots a ( y _ {m} ) \prod _ {i=1} ^ { n } d \sigma ( x _ {i} ) \prod _ {j=1} ^ { m } d \sigma ( y _ {j} ) , $$

where $ a ( x) , a ^ {*} ( x) $( $ x \in M $) are operator-valued generalized functions generating families of annihilation operators $ \{ {a ( f ) } : {f \in L _ {2} ( M , \sigma ) } \} $ and creation operators $ \{ {a ^ {*} ( f ) } : {f \in L _ {2} ( M , \sigma ) } \} $:

$$ a ( f ) = \int\limits _ { M } a ( x) f ( x) d \sigma ( x) ,\ \ a ^ {*} ( f ) = \int\limits _ { M } a ^ {*} ( x) \overline{f}\; ( x) d \sigma ( x) . $$

In each term of expression (1) all factors $ a ( y _ {j} ) $, $ j = 1 \dots m $, stand to the right of all factors $ a ^ {*} ( x _ {i} ) $, $ i = 1 \dots n $, and the (possibly generalized) functions $ K _ {n,m} ( x _ {1} \dots x _ {n} ; y _ {1} \dots y _ {m} ) $ in the two sets of variables $ ( x _ {1} \dots x _ {n} ) \in M ^ {n} $, $ ( y _ {1} \dots y _ {m} ) \in M ^ {m} $, $ n , m = 0 , 1 \dots $ are, in the case of a symmetric (Boson) Fock space, symmetric in the variables of each set separately, and, in the case of an anti-symmetric (Fermion) Fock space, anti-symmetric in these variables.

For any bounded operator $ A $ the normal form exists and is unique.
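
For a single Boson degree of freedom the passage to normal form can be made completely explicit with the canonical commutation relation $ a a ^ {*} = a ^ {*} a + 1 $: every word in $ a , a ^ {*} $ is a finite sum of normally ordered monomials $ ( a ^ {*} ) ^ {n} a ^ {m} $. The following sketch (Python; a single-mode illustration only, not part of the original article) performs this rewriting.

def normal_order(word):
    # word: string over the letters 'a' (annihilation) and '*' (creation a*),
    # read left to right; returns {(n, m): c} meaning the sum of c * (a*)^n a^m
    terms = {(0, 0): 1}
    for letter in word:
        new_terms = {}
        for (n, m), c in terms.items():
            if letter == 'a':                 # (a*)^n a^m . a = (a*)^n a^(m+1)
                add(new_terms, (n, m + 1), c)
            else:                             # a^m a* = a* a^m + m a^(m-1)
                add(new_terms, (n + 1, m), c)
                if m > 0:
                    add(new_terms, (n, m - 1), c * m)
        terms = new_terms
    return terms

def add(d, key, c):
    d[key] = d.get(key, 0) + c

print(normal_order('a*'))      # a a* = a* a + 1:             {(1, 1): 1, (0, 0): 1}
print(normal_order('aa**'))    # a a a* a* = (a*)^2 a^2 + 4 a* a + 2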

The representation (1) can be rewritten in a form containing the annihilation and creation operators directly:

$$ \tag{2 } A = $$

$$ = \ \sum _ {m , n } \sum _ { \begin{array}{c} \{ i _ {1} \dots i _ {n} \} \\ \{ j _ {1} \dots j _ {m} \} \end{array} } c _ {i _ {1} \dots i _ {n} j _ {1} \dots j _ {m} } a ^ {*} ( f _ { i _ {1} } ) \dots a ^ {*} ( f _ {i _ {n} } ) a ( f _ {j _ {1} } ) \dots a ( f _ {j _ {m} } ) , $$

where $ \{ {f _ {i} } : {i = 1 , 2 ,\dots } \} $ is an orthonormal basis in $ L _ {2} ( M , \sigma ) $ and the summation in (2) is over all pairs of finite collections $ \{ f _ {i _ {1} } \dots f _ {i _ {n} } \} $, $ \{ f _ {j _ {1} } \dots f _ {j _ {m} } \} $ of elements of this basis.

In the case of an arbitrary (separable) Hilbert space $ H $ the normal form of an operator $ A $ acting on the Fock space $ \Gamma ( H) $ constructed over $ H $ is determined for a fixed basis $ \{ {f _ {i} } : {i = 1 , 2 ,\dots } \} $ in $ H $ by means of the expression (2), where $ a ( f ) $, $ a ^ {*} ( f ) $, $ f \in H $, are families of annihilation and creation operators acting on $ \Gamma ( H) $.

References

[1] F.A. Berezin, "The method of second quantization" , Acad. Press (1966) (Translated from Russian) (Revised (augmented) second edition: Kluwer, 1989) MR0208930 Zbl 0151.44001

R.A. Minlos

Comments

References

[a1] N.N. [N.N. Bogolyubov] Bogolubov, A.A. Logunov, I.T. Todorov, "Introduction to axiomatic quantum field theory" , Benjamin (1975) (Translated from Russian) MR0452276 MR0452277
[a2] G. Källen, "Quantum electrodynamics" , Springer (1972) MR0153346 MR0056465 MR0051156 MR0039581 Zbl 0116.45005 Zbl 0074.44202 Zbl 0050.43001 Zbl 0046.21402 Zbl 0041.57104
[a3] J. Glimm, A. Jaffe, "Quantum physics, a functional integral point of view" , Springer (1981) Zbl 0461.46051

Recursive functions

The normal form of a recursive function is a method for specifying an $ n $-place recursive function $ \phi $ in the form

$$ \tag{* } \phi ( x _ {1} \dots x _ {n} ) = \ g ( \mu z ( f ( x _ {1} \dots x _ {n} , z ) = 0 ) ) , $$

where $ f $ is an $ ( n + 1 ) $-place primitive recursive function, $ g $ is a $ 1 $-place primitive recursive function and $ \mu z ( f ( x _ {1} \dots x _ {n} , z ) = 0 ) $ is the result of applying the least-number operator to $ f $. Kleene's normal form theorem asserts that there is a primitive recursive function $ g $ such that every recursive function $ \phi $ can be represented in the form (*) with a suitable function $ f $ depending on $ \phi $; that is,

$$ ( \exists g ) ( \forall \phi ) ( \exists f ) ( \forall x _ {1} \dots x _ {n} ) : $$

$$ [ \phi ( x _ {1} \dots x _ {n} ) = g ( \mu z ( f ( x _ {1} \dots x _ {n} , z ) = 0 ) ) ] . $$

The normal form theorem is one of the most important results in the theory of recursive functions.
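
The shape of the representation (*) can be illustrated with the least-number operator realized as an unbounded search; in the sketch below (Python; purely illustrative, and with $ g $ taken to be the identity, whereas in Kleene's theorem $ g $ is one fixed primitive recursive function) the function computed is the integer part of $ \sqrt x $.

def mu(pred):
    # least z >= 0 with pred(z) == 0 (need not terminate if no such z exists)
    z = 0
    while pred(z) != 0:
        z += 1
    return z

def f(x, z):
    # primitive recursive test: 0 exactly when z is the integer part of sqrt(x)
    return 0 if z * z <= x < (z + 1) * (z + 1) else 1

def g(z):
    return z                                    # the outer function, here simply the identity

def phi(x):
    return g(mu(lambda z: f(x, z)))             # phi written in the form (*)

print([phi(x) for x in range(11)])              # [0, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3]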

A.A. Markov [2] obtained a characterization of those functions $ g $ that can be used in the normal form theorem for the representation (*). A function $ g $ can serve as the function whose existence is asserted in the normal form theorem if and only if the equation $ g ( x) = n $ has infinitely many solutions for every $ n $. Such functions are called functions of great range.

References

[1] A.I. Mal'tsev, "Algorithms and recursive functions" , Wolters-Noordhoff (1970) (Translated from Russian) Zbl 0198.02501
[2] A.A. Markov, "On the representation of recursive functions" Izv. Akad. Nauk SSSR Ser. Mat. , 13 : 5 (1949) pp. 417–424 (In Russian) MR0031444

V.E. Plisko

Comments

References

[a1] S.C. Kleene, "Introduction to metamathematics" , North-Holland (1951) pp. 288 MR1234051 MR1570642 MR0051790 Zbl 0875.03002 Zbl 0604.03002 Zbl 0109.00509 Zbl 0047.00703

Normal form of a system of differential equations

A normal form of a system of differential equations

$$ \tag{1 } \dot{x} _ {i} = \phi _ {i} ( x _ {1} \dots x _ {n} ) ,\ \ i = 1 \dots n , $$

near an invariant manifold $ M $ is a formal system

$$ \tag{2 } \dot{y} _ {i} = \psi _ {i} ( y _ {1} \dots y _ {n} ) ,\ \ i = 1 \dots n , $$

that is obtained from (1) by an invertible formal change of coordinates

$$ \tag{3 } x _ {i} = \xi _ {i} ( y _ {1} \dots y _ {n} ) ,\ \ i = 1 \dots n , $$

in which the Taylor–Fourier series $ \psi _ {i} $ contain only resonance terms. Normal forms first appeared (in a particular case) in the dissertation of H. Poincaré (see [1]). By means of a normal form (2) some systems (1) can be integrated, and many can be investigated for stability or integrated approximately; normal forms have also been used to find periodic solutions and families of conditionally periodic solutions of systems (1) and to study their bifurcations.

Normal forms in a neighbourhood of a fixed point.

Suppose that $ M $ contains a fixed point $ X \equiv ( x _ {1} \dots x _ {n} ) = 0 $ of the system (1) (that is, $ \phi _ {i} ( 0) = 0 $), that the $ \phi _ {i} $ are analytic at it and that $ \lambda _ {1} \dots \lambda _ {n} $ are the eigenvalues of the matrix $ \| \partial \phi _ {i} / \partial x _ {j} \| $ for $ X = 0 $. Let $ \Lambda \equiv ( \lambda _ {1} \dots \lambda _ {n} ) \neq 0 $. Then in a full neighbourhood of $ X = 0 $ the system (1) has the following normal form (2): the matrix $ \| \partial \psi _ {i} / \partial y _ {j} \| $ has for $ Y \equiv ( y _ {1} \dots y _ {n} ) = 0 $ a normal form (for example, the Jordan normal form) and the Taylor series

$$ \tag{4 } \psi _ {i} = y _ {i} \sum _ {Q \in N _ {i} } g _ {i Q } Y ^ {Q} ,\ \ i = 1 \dots n , $$

contain only resonance terms for which

$$ \tag{5 } ( Q , \Lambda ) \equiv \ q _ {1} \lambda _ {1} + \dots + q _ {n} \lambda _ {n} = 0 . $$

Here $ Q \equiv ( q _ {1} \dots q _ {n} ) $, $ Y ^ {Q} \equiv y _ {1} ^ {q _ {1} } \dots y _ {n} ^ {q _ {n} } $, and $ N _ {i} = \{ {Q } : {q _ {j} \textrm{ integers } , q _ {j} \geq 0 \textrm{ for } j \neq i , q _ {i} \geq - 1, q _ {1} + \dots + q _ {n} \geq 0 } \} $. If equation (5) has no solutions $ Q \neq 0 $ in $ N = N _ {1} \cup \dots \cup N _ {n} $, then the normal form (2) is linear:

$$ \dot{y} _ {i} = \lambda _ {i} y _ {i} ,\ \ i = 1 \dots n . $$
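
For a concrete $ \Lambda $ the resonance exponents can be enumerated directly. The sketch below (Python; an illustration, not part of the original article) lists, for $ \Lambda = ( 1 , - 2 ) $ and a bound on the exponents, the $ Q \in N _ {i} $ satisfying (5); they are exactly the non-negative multiples of $ ( 2 , 1 ) $, so every resonance monomial in $ \psi _ {i} $ is $ y _ {i} $ times a power of $ y _ {1} ^ {2} y _ {2} $.

from fractions import Fraction
from itertools import product

def resonances(lam, i, q_max):
    # exponents Q in N_i with (Q, Lambda) = 0, each q_j bounded by q_max
    n = len(lam)
    ranges = [range(-1 if j == i else 0, q_max + 1) for j in range(n)]
    return [Q for Q in product(*ranges)
            if sum(Q) >= 0 and sum(q * l for q, l in zip(Q, lam)) == 0]

lam = [Fraction(1), Fraction(-2)]        # the resonance relation is 2*lambda_1 + lambda_2 = 0
print(resonances(lam, 0, 6))             # exponents for psi_1: [(0, 0), (2, 1), (4, 2), (6, 3)]
print(resonances(lam, 1, 6))             # exponents for psi_2: the same multiples of (2, 1)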

Every system (1) with $ \Lambda \neq 0 $ can be reduced in a neighbourhood of a fixed point to its normal form (2) by some formal transformation (3), where the $ \xi _ {i} $ are (possibly divergent) power series, $ \xi _ {i} ( 0) = 0 $ and $ \mathop{\rm det} \| \partial \xi _ {i} / \partial y _ {j} \| \neq 0 $ for $ Y = 0 $.

Generally speaking, the normalizing transformation (3) and the normal form (2) (that is, the coefficients $ g _ {iQ} $ in (4)) are not uniquely determined by the original system (1). A normal form (2) preserves many properties of the system (1), such as being real, symmetric, Hamiltonian, etc. (see , [3]). If the original system contains small parameters, one can include them among the coordinates $ x _ {j} $, and then $ \dot{x} _ {j} = 0 $. Such coordinates do not change under a normalizing transformation (see [3]).

If $ k $ is the number of linearly independent solutions $ Q \in N $ of equation (5), then by means of a transformation

$$ y _ {i} = \ z _ {1} ^ {\alpha _ {i1} } \dots z _ {n} ^ {\alpha _ {in} } ,\ \ i = 1 \dots n , $$

where the $ \alpha _ {ij} $ are integers and $ \mathop{\rm det} \| \alpha _ {ij} \| = \pm 1 $, the normal form (2) is carried to a system

$$ \dot{z} _ {i} = z _ {i} f _ {i} ( z _ {1} \dots z _ {k} ) ,\ \ i = 1 \dots n $$

(see , [3]). The solution of this system reduces to a solution of the subsystem of the first $ k $ equations and to $ n - k $ quadratures. The subsystem has to be investigated in the neighbourhood of the multiple singular point $ z _ {1} = \dots = z _ {k} = 0 $, because the $ f _ {1} \dots f _ {k} $ do not contain linear terms. This can be done by a local method (see [3]).

The following problem has been examined (see ): Under what conditions on the normal form (2) does the normalizing transformation of an analytic system (1) converge (that is, turn out to be analytic)? Let

$$ \omega _ {k} = \min | ( Q , \Lambda ) | $$

for those $ Q \in N $ for which

$$ ( Q , \Lambda ) \neq 0 ,\ \ q _ {1} + \dots + q _ {n} < 2 ^ {k} . $$

Condition $ \omega $: $ \sum _ {k=1} ^ \infty 2 ^ {-k} \mathop{\rm log} \omega _ {k} ^ {-1} < \infty $.

Condition $ \overline \omega \; $: $ {\lim\limits \sup } \ 2 ^ {-k} \mathop{\rm log} \omega _ {k} ^ {-1} < \infty $ as $ k \rightarrow \infty $.

Condition $ \overline \omega \; $ is weaker than $ \omega $. Both are satisfied for almost all $ \Lambda $ (relative to Lebesgue measure) and are very weak arithmetic restrictions on $ \Lambda $.

In case $ \mathop{\rm Re} \Lambda = 0 $ there is also condition $ A $ (for the general case, see ): There exists a power series $ a ( Y) $ such that in (4), $ \psi _ {i} = \lambda _ {i} y _ {i} a $, $ i = 1 \dots n $.

If for an analytic system (1) $ \Lambda $ satisfies condition $ \omega $ and the normal form (2) satisfies condition $ A $, then there exists an analytic transformation of (1) to a certain normal form. If (2) is obtained from an analytic system and fails to satisfy either condition $ \overline \omega \; $ or condition $ A $, then there exists an analytic system (1) that has (2) as its normal form, and every transformation to a normal form diverges (is not analytic).

Thus, the problem raised above is solved for all normal forms except those for which $ \Lambda $ satisfies condition $ \overline \omega \; $, but not $ \omega $, while the coefficients of the normal form satisfy condition $ A $. The latter is a very rigid restriction on the coefficients of a normal form, and for large $ n $ it holds, generally speaking, only in degenerate cases. That is, the basic reason for divergence of a transformation to normal form is not small denominators, but degeneracy of the normal form.

But even in cases of divergence of the normalizing transformation (3) with respect to (2), one can study properties of the solutions of the system (1). For example, a real system (1) has a smooth transformation to the normal form (2) even when it is not analytic. The majority of results on smooth normalization have been obtained under the condition that all $ \mathop{\rm Re} \lambda _ {j} \neq 0 $. Under this condition, with the help of a change $ X \rightarrow V $ of finite smoothness class, a system (1) can be brought to a truncated normal form

$$ \tag{6 } \dot{v} _ {i} = \widetilde \psi _ {i} ( V) ,\ \ i = 1 \dots n , $$

where the $ \widetilde \psi _ {i} $ are polynomials of degree $ m $( see [4]–). If in the normalizing transformation (3) all terms of degree higher than $ m $ are discarded, the result is a transformation

$$ \tag{7 } x _ {i} = \widetilde \xi _ {i} ( U) ,\ \ i = 1 \dots n $$

(the $ \widetilde \xi _ {i} $ are polynomials) that takes (1) to the form

$$ \tag{8 } \dot{u} _ {i} = \widetilde \psi _ {i} ( U) + \widetilde \phi _ {i} ( U) ,\ \ i = 1 \dots n , $$

where the $ \widetilde \psi _ {i} $ are polynomials containing only resonance terms and the $ \widetilde \phi _ {i} $ are convergent power series containing only terms of degree higher than $ m $. Solutions of the truncated normal form (6) are approximations for solutions of (8) and, after the transformation (7), give approximations of solutions of the original system (1). In many cases one succeeds in constructing for (6) a Lyapunov function (or Chetaev function) $ f ( V) $ such that

$$ | f ( V) | \leq c _ {1} | V | ^ \gamma \ \ \textrm{ and } \ \ \left | \sum _ {j=1} ^ { n } \frac{\partial f }{\partial v _ {j} } \widetilde \phi _ {j} \right | > c _ {2} | V | ^ {\gamma + m } , $$

where $ c _ {1} $ and $ c _ {2} $ are positive constants. Then $ f ( U) $ is a Lyapunov (Chetaev) function for the system (8); that is, the point $ X = 0 $ is stable (unstable). For example, if all $ \mathop{\rm Re} \lambda _ {i} < 0 $, one can take $ m = 1 $, $ f = \sum _ {i=1} ^ {n} v _ {i} ^ {2} $ and obtain Lyapunov's theorem on stability under linear approximation (see [7]; for other examples see the survey [8]).

From the normal form (2) one can find invariant analytic sets of the system (1). In what follows it is assumed for simplicity of exposition that $ \mathop{\rm Re} \Lambda = 0 $. From the normal form (2) one extracts the formal set

$$ {\mathcal A} = \{ {Y } : {\psi _ {i} = \lambda _ {i} y _ {i} a ,\ i = 1 \dots n } \} , $$

where $ a $ is a free parameter. Condition $ A $ is satisfied on the set $ {\mathcal A} $. Let $ K $ be the union of subspaces of the form $ \{ {Y } : {y _ {i} = 0, i = i _ {1} \dots i _ {l} } \} $ such that the corresponding eigenvalues $ \lambda _ {j} $, $ j \neq i _ {1} \dots i _ {l} $, $ 1 \leq j \leq n $, are pairwise commensurable. The formal set $ \widetilde {\mathcal A} = {\mathcal A} \cap K $ is analytic in the system (1). From $ {\mathcal A} $ one selects the subset $ {\mathcal B} $ that is analytic in (1) if condition $ \omega $ holds (see [3]). On the sets $ \widetilde {\mathcal A} $ and $ {\mathcal B} $ lie periodic solutions and families of conditionally-periodic solutions of (1). By considering the sets $ \widetilde {\mathcal A} $ and $ {\mathcal B} $ in systems with small parameters, one can study all analytic perturbations and bifurcations of such solutions (see [9]).

Generalizations.

If a system (1) does not lead to a normal form (2) but to a system whose right-hand sides contain certain non-resonance terms, then the resulting simplification is less substantial, but can improve the quality of the transformation. Thus, the reduction to a "semi-normal form" is analytic under a weakened condition $ A $( see ). Another version is a transformation that normalizes a system (1) only on certain submanifolds (for example, on certain coordinate subspaces; see ). A combination of these approaches makes it possible to prove for (1) the existence of invariant submanifolds and of solutions of specific form (see [9]).

Suppose that a system (1) is defined and analytic in a neighbourhood of an invariant manifold $ M $ of dimension $ k + l $ that is fibred into $ l $-dimensional invariant tori. Then close to $ M $ one can introduce local coordinates

$$ S = ( s _ {1} \dots s _ {k} ) ,\ \ Y = ( y _ {1} \dots y _ {l} ) ,\ \ Z = ( z _ {1} \dots z _ {m} ) , $$

$$ k + l + m = n , $$

such that $ Z = 0 $ on $ M $, $ y _ {j} $ is of period $ 2 \pi $, $ S $ ranges over a certain domain $ H $, and (1) takes the form

$$ \tag{9 } \left . \begin{array}{c} \dot{S} = \Phi ^ {(1)} ( S , Y , Z ) , \\ \dot{Y} = \Omega ( S , Y ) + \Phi ^ {(2)} ( S , Y , Z ) , \\ \dot{Z} = A ( S , Y ) Z + \Phi ^ {(3)} ( S , Y , Z ) , \end{array} \right \} $$

where $ \Phi ^ {(j)} = O ( | Z | ) $, $ j = 1 , 2 $, $ \Phi ^ {(3)} = O ( | Z | ^ {2} ) $ and $ A $ is a matrix. If $ \Omega = \textrm{ const } $ and $ A $ is triangular with constant main diagonal $ \Lambda \equiv ( \lambda _ {1} \dots \lambda _ {m} ) $, then (under a weak restriction on the small denominators) there is a formal transformation of the local coordinates $ S , Y , Z \rightarrow U , V , W $ that takes the system (9) to the normal form

$$ \tag{10 } \left . \begin{array}{c} \dot{U} = \sum \Psi _ {PQ} ^ {(1)} ( U) W ^ {Q} \mathop{\rm exp} i ( P , V ) , \\ \dot{V} = \sum \Psi _ {PQ} ^ {(2)} ( U) W ^ {Q} \mathop{\rm exp} i ( P , V ) , \\ \dot{w} _ {j} = w _ {j} \sum g _ {jPQ} ( U) W ^ {Q} \mathop{\rm exp} i ( P , V ) ,\ \ j = 1 \dots m , \end{array} \right \} $$

where $ P \in \mathbf Z ^ {l} $, $ Q \in \mathbf N ^ {m} $, $ U \in H $, and the sums contain only resonance terms, for which $ i ( P , \Omega ) + ( Q , \Lambda ) = 0 $.

If among the coordinates $ Z $ there is a small parameter, (9) can be averaged by the Krylov–Bogolyubov method of averaging (see [10]), and the averaged system is a normal form. More generally, perturbation theory can be regarded as a special case of the theory of normal forms, when one of the coordinates is a small parameter (see [11]).

Theorems on the convergence of a normalizing transformation, on the existence of analytic invariant sets, etc., carry over to the systems (9) and (10). Here the best studied case is when $ M $ is a periodic solution, that is, $ k = 0 $, $ l = 1 $. In this case the theory of normal forms is in many respects identical with the case when $ M $ is a fixed point. Poincaré suggested considering the return map (point mapping) of a normal section, taken over one period. In this context arose a theory of normal forms of point mappings, which is parallel to the corresponding theory for systems (1). For other generalizations of normal forms see [3], , [12]–[14].

References

[1] H. Poincaré, "Thèse, 1928" , Oeuvres , 1 , Gauthier-Villars (1951) pp. IL-CXXXII
[2a] A.D. [A.D. Bryuno] Bruno, "Analytical form of differential equations" Trans. Moscow Math. Soc. , 25 (1971) pp. 131–288 Trudy Moskov. Mat. Obshch. , 25 (1971) pp. 119–262
[2b] A.D. [A.D. Bryuno] Bruno, "Analytical form of differential equations" Trans. Moscow Math. Soc. (1972) pp. 199–239 Trudy Moskov. Mat. Obshch. , 26 (1972) pp. 199–239
[3] A.D. Bryuno, "Local methods in nonlinear differential equations" , 1 , Springer (1989) (Translated from Russian) MR0993771
[4] P. Hartman, "Ordinary differential equations" , Birkhäuser (1982) MR0658490 Zbl 0476.34002
[5a] V.S. Samovol, "Linearization of a system of differential equations in the neighbourhood of a singular point" Soviet Math. Dokl. , 13 (1972) pp. 1255–1259 Dokl. Akad. Nauk SSSR , 206 (1972) pp. 545–548 Zbl 0667.34041
[5b] V.S. Samovol, "Equivalence of systems of differential equations in the neighbourhood of a singular point" Trans. Moscow Math. Soc. (2) , 44 (1982) pp. 217–237 Trudy Moskov. Mat. Obshch. , 44 (1982) pp. 213–234
[6a] G.R. Belitskii, "Equivalence and normal forms of germs of smooth mappings" Russian Math. Surveys , 33 : 1 (1978) pp. 95–155 Uspekhi Mat. Nauk. , 33 : 1 (1978) MR0490708
[6b] G.R. Belitskii, "Normal forms relative to a filtering action of a group" Trans. Moscow Math. Soc. , 40 (1979) pp. 3–46 Trudy Moskov. Mat. Obshch. , 40 (1979) pp. 3–46
[6c] G.R. Belitskii, "Smooth equivalence of germs of vector fields with a single zero eigenvalue or a pair of purely imaginary eigenvalues" Funct. Anal. Appl. , 20 : 4 (1986) pp. 253–259 Funkts. Anal. i Prilozen. , 20 : 4 (1986) pp. 1–8
[7] A.M. [A.M. Lyapunov] Liapunoff, "Problème général de la stabilité du mouvement" , Princeton Univ. Press (1947) (Translated from Russian)
[8] A.L. Kunitsyn, A.P. Markeev, "Stability in resonant cases" Itogi Nauk. i Tekhn. Ser. Obsh. Mekh. , 4 (1979) pp. 58–139 (In Russian)
[9] J.N. Bibikov, "Local theory of nonlinear analytic ordinary differential equations" , Springer (1979) MR0547669 Zbl 0404.34005
[10] N.N. Bogolyubov, Yu.A. Mitropol'skii, "Asymptotic methods in the theory of non-linear oscillations" , Hindustan Publ. Comp. , Delhi (1961) (Translated from Russian) MR0100379 Zbl 0151.12201
[11] A.D. [A.D. Bryuno] Bruno, "Normal form in perturbation theory" , Proc. VIII Internat. Conf. Nonlinear Oscillations, Prague, 1978 , 1 , Academia (1979) pp. 177–182 (In Russian)
[12] V.V. Kostin, Le Dinh Thuy, "Some tests of the convergence of a normalizing transformation" Dopovidi Akad. Nauk URSR Ser. A : 11 (1975) pp. 982–985 (In Russian) MR407356
[13] E.J. Zehnder, "C.L. Siegel's linearization theorem in infinite dimensions" Manuscr. Math. , 23 (1978) pp. 363–371 MR0501144 Zbl 0374.47037
[14] N.V. Nikolenko, "The method of Poincaré normal forms in problems of integrability of equations of evolution type" Russian Math. Surveys , 41 : 5 (1986) pp. 63–114 Uspekhi Mat. Nauk , 41 : 5 (1986) pp. 109–152 MR0878327 Zbl 0632.35026

A.D. Bryuno

Comments

For more on various linearization theorems for ordinary differential equations and canonical form theorems for ordinary differential equations, as well as generalizations to the case of non-linear representations of nilpotent Lie algebras, cf. also Poincaré–Dulac theorem and Analytic theory of differential equations, and [a1].

References

[a1] V.I. Arnol'd, "Geometrical methods in the theory of ordinary differential equations" , Springer (1983) (Translated from Russian)
This article was adapted from an original article by V.L. Popov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.