Differential equation, ordinary


An equation in which the unknown is a function of one independent variable and which contains not only the unknown function itself, but also its derivatives of various orders.

The term "differential equations" was proposed in 1676 by G. Leibniz. The first studies of these equations were carried out in the late 17th century in the context of certain problems in mechanics and geometry.

Ordinary differential equations have important applications and are a powerful tool in the study of many problems in the natural sciences and in technology; they are extensively employed in mechanics, astronomy, physics, and in many problems of chemistry and biology. The reason for this is the fact that objective laws governing certain phenomena (processes) can be written as ordinary differential equations, so that the equations themselves are a quantitative expression of these laws. For instance, Newton's laws of mechanics make it possible to reduce the description of the motion of mass points or solid bodies to solving ordinary differential equations. The design of radio-engineering circuits, the computation of satellite trajectories, studies of the stability of an aircraft in flight, and the explanation of the course of chemical reactions are all carried out by studying and solving ordinary differential equations. The most interesting and most important applications of these equations are in the theory of oscillations (cf. Oscillations, theory of) and in automatic control theory. Applied problems in turn produce new formulations of problems in the theory of ordinary differential equations; the mathematical theory of optimal control (cf. Optimal control, mathematical theory of) in fact arose in this manner.

In what follows the independent variable is denoted by $ t $, the unknown functions by $ x , y , z $, etc., while the derivatives of these functions with respect to $ t $ will be denoted by $ \dot{x} , \ddot{x} , \dots, x ^ {( n) } $, etc.

The simplest ordinary differential equation is already encountered in analysis: The problem of finding the primitive function of a given continuous function $ f ( t) $ amounts to finding an unknown function $ x ( t) $ which satisfies the equation

$$ \tag{1 } \dot{x} = f ( t) . $$

In order to prove that this equation is solvable, a special apparatus had to be developed — the theory of the Riemann integral.
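
For example, if $ f ( t) = \cos t $, then every solution of (1) has the form $ x ( t) = \sin t + C $, where $ C $ is an arbitrary constant, so that the solution becomes unique only after an additional condition, such as the value of $ x $ at some point $ t _ {0} $, is prescribed.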

A natural generalization of equation (1) is an ordinary differential equation of the first order, solved with respect to the derivative:

$$ \tag{2 } \dot{x} ( t) = f ( t , x ) , $$

where $ f ( t , x ) $ is a known function, defined in a certain region $ D $ of the $ ( t , x ) $-plane. Many practical problems can be reduced to the solution (or, as is often said, the integration) of this equation. A solution of the ordinary differential equation (2) is a function $ x ( t) $ defined and differentiable on some interval $ I $ and satisfying the conditions

$$ ( t , x ( t) ) \in D ,\ t \in I , $$

$$ \dot{x} ( t) = f ( t , x ( t) ) ,\ t \in I . $$

The solution of (2) may be geometrically represented in the $ ( t , x ) $-plane as a curve with equation $ x = x ( t) $, $ t \in I $. This curve, which has a tangent at every point and lies entirely in $ D $, is known as an integral curve. The geometrical interpretation of equation (2) itself is as a field of directions in $ D $, obtained by drawing through each point $ ( t , x ) \in D $ a segment $ l _ {t , x } $ of small length with slope $ f ( t , x ) $. Any integral curve $ x = x ( t) $ is tangent at each of its points to the segment $ l _ {t , x ( t) } $.

The existence theorem answers the question of the existence of a solution of equation (2): If $ f ( t , x ) \in C( D) $ (i.e. is continuous in $ D $), then at least one continuously-differentiable integral curve of equation (2) passes through any point $ ( t _ {0} , x _ {0} ) \in D $, and each such curve may be extended in both directions up to the boundary of any closed subregion lying completely in $ D $ and containing the point $ ( t _ {0} , x _ {0} ) $. In other words, for any point $ ( t _ {0} , x _ {0} ) \in D $ it is possible to find at least one non-extendable solution $ x = x ( t) $, $ t \in I $, such that $ x ( t) \in C ^ {1} ( I) $ (i.e. $ x $ is continuous in $ I $ together with its derivative $ \dot{x} $),

$$ \tag{3 } x ( t _ {0} ) = x _ {0} , $$

and $ x ( t) $ tends to the boundary of $ D $ as $ t $ tends to the right or left end of the interval $ I $.

A very important theoretical problem is to clarify the assumptions to be made concerning the right-hand side of an ordinary differential equation and the additional conditions to be imposed on the equation in order that it has a unique solution. The following existence and uniqueness theorem is valid: If $ f ( t , x ) \in C( D) $ satisfies a Lipschitz condition with respect to $ x $ in $ D $ and if $ ( t _ {0} , x _ {0} ) \in D $, then equation (2) has a unique, non-extendable solution satisfying condition (3). In particular, if two solutions $ x _ {1} ( t) $, $ t \in I _ {1} $, and $ x _ {2} ( t) $, $ t \in I _ {2} $, of such an equation (2) coincide for at least one value $ t = t _ {0} $, i.e. $ x _ {1} ( t _ {0} ) = x _ {2} ( t _ {0} ) $, then

$$ x _ {1} ( t) = x _ {2} ( t) ,\ t \in I _ {1} \cap I _ {2} . $$

The geometrical content of this theorem is that the entire region $ D $ is covered by integral curves of equation (2), with no intersections between any two curves. Unique solutions may also be obtained under weaker assumptions regarding the function $ f ( t , x ) $ [6].
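
For example, the right-hand side of the equation $ \dot{x} = 3 x ^ {2 / 3 } $ is continuous but does not satisfy a Lipschitz condition with respect to $ x $ in any region containing points with $ x = 0 $, and indeed both $ x ( t) \equiv 0 $ and $ x ( t) = t ^ {3} $ are solutions passing through the point $ ( 0 , 0 ) $, so that uniqueness fails there.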

The relation (3) is known as an initial condition. The numbers $ t _ {0} $ and $ x _ {0} $ are called initial values for the solution of equation (2), while the point $ ( t _ {0} , x _ {0} ) $ is called the initial point corresponding to the integral curve. The task of finding the solution of this equation satisfying initial condition (3) (or, in other words, with initial values $ t _ {0} $, $ x _ {0} $) is known as the Cauchy problem or the initial value problem. The theorem just given provides sufficient conditions for the unique solvability of the Cauchy problem (2), (3).
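
The following minimal sketch illustrates the numerical solution of a Cauchy problem of the form (2), (3); it assumes the SciPy library, and the particular right-hand side $ f ( t , x ) = x - t ^ {2} + 1 $ and the initial values are illustrative choices, not taken from the text above.

```python
# A minimal numerical sketch of a Cauchy problem of the form (2), (3), assuming SciPy.
# The right-hand side f(t, x) = x - t^2 + 1 and the initial values are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # Right-hand side of dx/dt = f(t, x).
    return x - t**2 + 1

t0, x0 = 0.0, 0.5                        # initial values (t_0, x_0)
sol = solve_ivp(f, (t0, 2.0), [x0], dense_output=True)

t = np.linspace(t0, 2.0, 5)
print(sol.sol(t)[0])                     # numerical solution x(t, t_0, x_0)
print((t + 1.0)**2 - 0.5*np.exp(t))      # exact solution of this particular problem
```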

Applied problems often involve systems of ordinary differential equations, containing several unknown functions of the same variable and their derivatives. A natural generalization of equation (2) is the normal form of a system of differential equations of order $ n $:

$$ \tag{4 } {\dot{x} } {} ^ {i} = f ^ { i } ( t , x ^ {1}, \dots, x ^ {n} ) ,\ i = 1, \dots, n , $$

where $ x ^ {1}, \dots, x ^ {n} $ are unknown functions of the variable $ t $ and $ f ^ { i } $, $ i = 1, \dots, n $, are given functions in $ n + 1 $ variables. Writing

$$ \mathbf x = ( x ^ {1}, \dots, x ^ {n} ) , $$

$$ \mathbf f ( t , \mathbf x ) = ( f ^ { 1 } ( t , \mathbf x ), \dots, f ^ { n } ( t , \mathbf x ) ) , $$

the system (4) takes the vector form:

$$ \tag{5 } \dot{\mathbf x} = \mathbf f ( t , \mathbf x ) . $$

A solution of the system (4), or of the vector equation (5), is a vector function

$$ \tag{6 } \mathbf x = \mathbf x ( t) = ( x ^ {1} ( t), \dots, x ^ {n} ( t) ) , \ t \in I , $$

which is differentiable on $ I $ and satisfies (5) at every point of $ I $. Each solution can be represented in the $ ( n + 1 ) $-dimensional space of the variables $ t , x ^ {1}, \dots, x ^ {n} $ as an integral curve: the graph of the vector function (6).

The Cauchy problem for equation (5) is to find the solution satisfying the initial conditions

$$ x ^ {1} ( t _ {0} ) = x _ {0} ^ {1}, \dots, x ^ {n} ( t _ {0} ) = x _ {0} ^ {n} , $$

or

$$ \tag{7 } \mathbf x ( t _ {0} ) = \mathbf x _ {0} . $$

The solution of the Cauchy problem (5), (7) is conveniently written as

$$ \tag{8 } \mathbf x = \mathbf x ( t , t _ {0} , \mathbf x _ {0} ) ,\ t \in I . $$

The existence and uniqueness theorem for equation (5) is formulated as for equation (2).

Very general systems of ordinary differential equations (solved with respect to the leading derivatives of all unknown functions) are reducible to normal systems. An important special class of systems (5) are linear systems of $ n $ (coupled) ordinary differential equations of the first order:

$$ \dot{\mathbf x} = A ( t) \mathbf x + \mathbf F ( t) , $$

where $ A ( t) $ is an $ ( n \times n ) $-matrix and $ \mathbf F ( t) $ is a given vector function.

Of major importance in applications and in the theory of ordinary differential equations are autonomous systems of ordinary differential equations (cf. Autonomous system):

$$ \tag{9 } \dot{\mathbf x} = \mathbf f ( \mathbf x ) , $$

i.e. normal systems whose right-hand side does not explicitly depend on the variable $ t $. In this case the solution (6) is conveniently regarded as a parametric representation of a curve, the phase trajectory of the solution, in the $ n $-dimensional phase space of the variables $ x ^ {1}, \dots, x ^ {n} $. If $ \mathbf x = \mathbf x ( t) $ is a solution of the system (9), then the function $ \mathbf x = \mathbf x ( t + c ) $, where $ c $ is an arbitrary constant, also satisfies (9).
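
For example, the autonomous system $ {\dot{x} } {} ^ {1} = x ^ {2} $, $ {\dot{x} } {} ^ {2} = - x ^ {1} $ has the solutions $ x ^ {1} = \sin ( t + c ) $, $ x ^ {2} = \cos ( t + c ) $; its phase trajectories are the circles $ ( x ^ {1} ) ^ {2} + ( x ^ {2} ) ^ {2} = \textrm{ const } $ in the $ ( x ^ {1} , x ^ {2} ) $-plane, and the constant $ c $ merely shifts the time at which a given point of the circle is reached.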

Another generalization of equation (2) is an ordinary differential equation of order $ n $, solved with respect to its leading derivative:

$$ \tag{10 } y ^ {( n) } = f ( t , y , \dot{y}, \dots, y ^ {( n - 1 ) } ) . $$

An important special class of such equations are linear ordinary differential equations:

$$ y ^ {( n) } + a _ {1} ( t) y ^ {( n - 1 ) } + \dots + a _ {n - 1 } ( t) \dot{y} + a _ {n} ( t) y = F ( t) . $$

Equation (10) is reduced to a system of $ n $ first-order equations if one introduces new unknown functions of the variable $ t $ by the formulas

$$ x ^ {1} = y , x ^ {2} = \dot{y}, \dots, x ^ {n} = y ^ {( n - 1 ) } . $$
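
As a minimal sketch of this reduction, assuming SciPy, the second-order equation $ \ddot{y} + y = 0 $ (an illustrative example, with $ x ^ {1} = y $, $ x ^ {2} = \dot{y} $) can be rewritten as a normal system and integrated numerically:

```python
# A minimal sketch of the reduction of equation (10) to a normal system, assuming SciPy.
# The equation y'' + y = 0 with y(0) = 0, y'(0) = 1 is an illustrative example; its solution is y = sin t.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # x[0] = y, x[1] = y'; the second-order equation y'' = -y becomes the system x1' = x2, x2' = -x1.
    return [x[1], -x[0]]

t_eval = np.linspace(0.0, 2.0*np.pi, 9)
sol = solve_ivp(rhs, (0.0, 2.0*np.pi), [0.0, 1.0], t_eval=t_eval)

print(sol.y[0])          # numerical approximation of y(t)
print(np.sin(t_eval))    # exact solution for comparison
```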

If, for example, equation (10) describes the dynamics of a certain object and the motion of this object is to be studied starting from a definite moment $ t = t _ {0} $ corresponding to a definite initial state, the following additional conditions must be imposed on equation (10):

$$ \tag{11 } y ( t _ {0} ) = y _ {0} , \dot{y} ( t _ {0} ) = {\dot{y} } _ {0}, \dots, y ^ {( n - 1 ) } ( t _ {0} ) = y _ {0} ^ {( n - 1 ) } . $$

The task of finding an $ n $ times differentiable function $ y = y ( t) $, $ t \in I $, for which equation (10) becomes an identity for all $ t \in I $ and which satisfies the initial conditions (11) is known as the Cauchy problem.

The existence and uniqueness theorem: If

$$ f ( t , u _ {1}, \dots, u _ {n} ) \in C ( D) , $$

if it satisfies a Lipschitz condition in $ D $ with respect to $ u _ {1}, \dots, u _ {n} $ (where $ D $ is a region in the space of the variables $ t , u _ {1}, \dots, u _ {n} $) and if

$$ ( t _ {0} , y _ {0} , {\dot{y} } _ {0}, \dots, {y _ {0} } ^ {( n - 1 ) } ) \in D , $$

then the Cauchy problem (10), (11) has a unique solution.

The Cauchy problem does not exhaust the problems that have been studied for higher-order equations (10) (or systems (5)). Specific physical and technological problems often involve not initial conditions but supplementary conditions of a different kind (so-called boundary conditions), in which the values of the unknown function $ y ( t) $ and of its derivatives (or relations between these values) are prescribed for several different values of the independent variable. For instance, in the brachistochrone problem, the equation

$$ 2 y \ddot{y} + {\dot{y} } {} ^ {2} + 1 = 0 $$

is to be integrated under the boundary conditions $ y ( a) = A $, $ y ( b) = B $. Finding a $ 2 \pi $-periodic solution for the Duffing equation is reduced to extracting the solution which satisfies the periodicity conditions $ y ( 0) = y ( 2 \pi ) $, $ \dot{y} ( 0) = \dot{y} ( 2 \pi ) $; in the study of laminar flow around a plate one encounters the problem:

$$ \dddot{y} + y \ddot{y} = 0 ,\ \ y ( 0) = \dot{y} ( 0) = 0 , $$

$$ \dot{y} ( t) \rightarrow 2 \ \textrm{ as } t \rightarrow \infty . $$

A problem of finding a solution satisfying conditions different from the initial conditions (11) for ordinary differential equations or for a system of ordinary differential equations is known as a boundary value problem (cf. Boundary value problem, ordinary differential equations). The theoretical analysis of the existence and uniqueness of a solution of a boundary value problem is of importance to the practical problem involved, since it proves the mutual compatibility of the assumptions made in the mathematical description of the problem and the relative completeness of this description. One important boundary value problem is the Sturm–Liouville problem. Boundary value problems for linear equations and systems are closely connected with problems involving eigen values and eigen functions (cf. Eigen function; Eigen value) and also with the spectral analysis of ordinary differential operators.
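
The following minimal sketch shows how such a two-point boundary value problem can be treated numerically; it assumes SciPy's solve_bvp, and the particular problem $ \ddot{y} + y = 0 $, $ y ( 0) = 0 $, $ y ( \pi / 2 ) = 1 $ (with solution $ y = \sin t $) is an illustrative choice, not one of the problems mentioned above.

```python
# A minimal sketch of a two-point boundary value problem, assuming SciPy's solve_bvp.
# The problem y'' + y = 0, y(0) = 0, y(pi/2) = 1 is an illustrative choice; its solution is y = sin t.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(t, y):
    # y[0] = y, y[1] = y'; each column corresponds to one mesh point.
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # Residuals of the boundary conditions y(0) = 0 and y(pi/2) = 1.
    return np.array([ya[0], yb[0] - 1.0])

t = np.linspace(0.0, np.pi/2.0, 11)
y_guess = np.zeros((2, t.size))
sol = solve_bvp(rhs, bc, t, y_guess)

print(sol.sol(t)[0])     # numerical approximation of y(t)
print(np.sin(t))         # exact solution for comparison
```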

The principal task of the theory of ordinary differential equations is the study of solutions of such equations. However, the meaning of such a study of solutions of ordinary differential equations has been understood in various ways at different times. The original trend was to carry out the integration of equations in quadratures, i.e. to obtain a closed formula yielding (in explicit, implicit or parametric form) an expression for the dependence of a specific solution on $ t $ in terms of elementary functions and their integrals. Such formulas, if found, are of help in calculations and in the study of the properties of the solutions. Of special interest is the description of the totality of solutions of a given equation. Under very general assumptions, equation (5) corresponds to a family of vector functions depending on $ n $ arbitrary independent parameters. If the equation of this family has the form

$$ \mathbf x = \pmb\phi ( t , c _ {1}, \dots, c _ {n} ) , $$

the function $ \pmb\phi $ is said to be the general solution of equation (5).
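
For instance, for the single equation $ \dot{x} = x $ the general solution is $ x = c e ^ {t} $ with one arbitrary constant, while for the equation $ \ddot{y} + y = 0 $ (equivalent to a normal system with $ n = 2 $) it is $ y = c _ {1} \cos t + c _ {2} \sin t $ with two arbitrary constants.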

However, the first examples of ordinary differential equations which are not integrable in quadratures appeared in the mid-19th century. It was found that solutions in closed form can be found for a few classes of equations only (see, for example, Bernoulli equation; Differential equation with total differential; Linear ordinary differential equation with constant coefficients). A detailed study was then begun of the most important and most frequently encountered equations which cannot be solved in quadratures (e.g. the Bessel equation); special notation was introduced for their solutions, the properties of these solutions were studied, and their values were tabulated. Many special functions appeared in this way.

Because of practical demands, methods of approximate integration of ordinary differential equations were also developed, such as the method of sequential approximation (cf. Sequential approximation, method of), the Adams method, etc. Various methods for graphical and mechanical integration of these equations were proposed. Mathematical analysis offers a rich selection of numerical methods for solving many problems in ordinary differential equations (cf. Differential equations, ordinary, approximate methods of solution of). These methods are convenient computational algorithms with effective estimates of accuracy, and modern computational technology makes it possible to obtain a numerical solution of each such problem in an economical and rapid manner.
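
As a minimal illustration of the idea behind such numerical methods (only of the simplest one-step scheme, the explicit Euler method, not of the Adams method itself), one may follow the direction field of equation (2) in small steps; the sample equation in the sketch below is an illustrative choice.

```python
# A minimal sketch of the simplest one-step scheme, the explicit Euler method, for equation (2).
# It only illustrates the idea of step-by-step numerical integration; it is not the Adams method.

def euler(f, t0, x0, t_end, n_steps):
    """Approximate the solution of dx/dt = f(t, x), x(t0) = x0, on the interval [t0, t_end]."""
    h = (t_end - t0) / n_steps
    t, x = t0, x0
    values = [(t, x)]
    for _ in range(n_steps):
        x = x + h * f(t, x)    # follow the direction field with slope f(t, x) for a time step h
        t = t + h
        values.append((t, x))
    return values

# Illustrative example: dx/dt = x, x(0) = 1, whose exact solution is x(t) = e^t (about 2.71828 at t = 1).
print(euler(lambda t, x: x, 0.0, 1.0, 1.0, 1000)[-1])
```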

However, the application of numerical methods to a specific equation yields only a finite number of particular solutions on a finite segment of variation of the independent variable. Such methods cannot yield information about the asymptotic behaviour of the solutions, and cannot tell whether a given equation has a periodic or an oscillating solution. In many practical problems it is important to establish the nature of the solution on an infinite interval of variation of the independent variable, and to obtain a complete picture of the integral curves. For this reason, the main trend in the theory of ordinary differential equations shifted to the study of the general features of the behaviour of solutions of ordinary differential equations, and to the development of methods for studying the global properties of solutions from the differential equation itself, without recourse to its integration.

All this formed the subject matter of the qualitative theory of differential equations, established in the late 19th century and still in full development.

Of decisive importance is the clarification as to whether or not the Cauchy problem is a well-posed problem for an ordinary differential equation. Since in concrete problems the initial values can never be perfectly exact, it is important to find the conditions under which small changes in initial values entail only small changes in the results. The theorem on continuous dependence of the solutions on initial values is valid: Let (8) be the solution of equation (5), where $ \mathbf f ( t , \mathbf x ) \in C ( D) $ and let it satisfy a Lipschitz condition with respect to $ \mathbf x $; then, for any $ \epsilon > 0 $ and any compact $ J \subset I $, $ t _ {0} \in J $, it is possible to find a $ \delta > 0 $ such that the solution $ \mathbf x ( t , t _ {0} , \mathbf x _ {0} ^ {*} ) $ of this equation, where $ | \mathbf x _ {0} ^ {*} - \mathbf x _ {0} | < \delta $, is defined on $ J $ and for all $ t \in J $,

$$ \tag{12 } | \mathbf x ( t , t _ {0} , \mathbf x _ {0} ^ {*} ) - \mathbf x ( t , t _ {0} , \mathbf x _ {0} ) | < \epsilon . $$

In other words, if the variations of the independent variable are restricted to a compact interval, then, if the variations in the initial values are sufficiently small, the solution will vary only slightly on the complete interval chosen. This result may also be generalized to obtain conditions which would ensure the differentiability of solutions (of differential equations) with respect to the initial values.

However, this theorem fails to give a complete answer to the problem which is of interest in practical applications, since it only speaks about a compact segment of variation of the independent variable. Now it is often necessary (e.g. in the theory of controlled motion) to deal with the solution of the Cauchy problem (5), (7) defined for all $ t \geq t _ {0} $, i.e. to clarify the stability of the solution with respect to small changes in the initial values on the entire infinite interval $ t \geq t _ {0} $, i.e. to obtain conditions which would ensure the validity of inequality (12) for all $ t \geq t _ {0} $. Studies of the stability of equilibrium positions or of the stationary conditions of a concrete system are reduced to this very problem. A solution which varies only to a small extent on the infinite interval $ [ t _ {0} , \infty ) $ if the deviations from the initial values are sufficiently small is said to be Lyapunov stable (cf. Lyapunov stability).

In selecting an ordinary differential equation to describe a real process, some features must always be neglected and others idealized. This means that a description of a process by ordinary differential equations is only approximate. For instance, the study of the operation of a valve oscillator leads to the van der Pol equation if certain assumptions, which do not fully correspond to the real state of things, are made. Furthermore, the course of the process is often affected by perturbing factors which are practically impossible to allow for in setting up equations; all that is known is that their effect is "small". It is therefore important to clarify the variation of the solution as a result of small variations in the system of equations itself, i.e. on passing from equation (5) to the perturbed equation

$$ \dot{\mathbf x} = \mathbf f ( t , \mathbf x ) + \mathbf R ( t , \mathbf x ) , $$

which allows for small correction terms. It was found that, on a compact interval of variation of the independent variable (under the same assumptions as in the theorem on continuous dependence of the solutions on the initial values), the solution varies only slightly provided the perturbation $ \mathbf R ( t , \mathbf x ) $ is sufficiently small. If this property is retained on the infinite interval $ t \geq t _ {0} $, the solution is said to be stable under constantly acting perturbations.

Studies of Lyapunov stability, stability under constantly acting perturbations and their modifications form the subject of a highly important branch of the qualitative theory — stability theory. Of foremost interest in practice are systems of ordinary differential equations whose solutions change little for all small variations of these equations; such systems are known as robust systems (cf. Rough system).

Another important task in the qualitative theory is to obtain a pattern of the behaviour of the family of solutions throughout the domain of definition of the equation. In the case of the autonomous system (9) the problem is the construction of a phase picture, i.e. a qualitative overall description of the totality of phase trajectories in the phase space. Such a geometric picture gives a complete representation of the nature of all motions which may take place in the system under study. It is therefore important, first of all, to clarify the behaviour of the trajectories in a neighbourhood of equilibrium positions, and to find separatrices (cf. Separatrix) and limit cycles (cf. Limit cycle). An especially urgent task is to find stable limit cycles, since these correspond to auto-oscillations in real systems (cf. Auto-oscillation).

Any real object is characterized by different parameters, which often enter into the right-hand side of the system of ordinary differential equations describing the behaviour of the object,

$$ \tag{13 } \dot{\mathbf x} = \mathbf f ( t , \mathbf x , \pmb\epsilon ) , $$

in the form of certain quantities $ ( \epsilon ^ {1}, \dots, \epsilon ^ {k} ) = \pmb\epsilon $. The values of these parameters are never known with perfect accuracy, so that it is important to clarify the conditions ensuring the stability of the solutions of equation (13) to small perturbations of the parameter $ \pmb\epsilon $. If the independent variable varies in a given compact interval, then — under certain natural assumptions regarding the right-hand side of equation (13) — the solutions will show a continuous (and even differentiable) dependence on the parameters.

The clarification of the dependence of the solutions on the parameter is directly related to the question of the quality of the idealization leading to the mathematical model of the behaviour of the object — the system of ordinary differential equations. A typical example of idealization is the neglect of a small parameter. If, with allowance for this small parameter, the system (13) is obtained, then, owing to the fact that the variation of the solutions with the parameter is continuous, it is perfectly permissible to neglect this parameter in the study of the behaviour of the object on a compact interval of time. Thus, as a first approximation, one is considering the simpler system

$$ \dot{\mathbf x} = \mathbf f ( t , \mathbf x , 0 ) . $$

This result underlies the extensively employed method of the small parameter (cf. Small parameter, method of the), the Krylov–Bogolyubov method of averaging, and other asymptotic methods for solving ordinary differential equations. However, the study of a number of phenomena yields a system of differential equations with a small parameter in front of the derivative:

$$ \epsilon \dot{x} = f ( t , x , y ) ,\ \ \dot{y} = g ( t , x , y ) . $$

Here it is in general no longer permissible to set $ \epsilon = 0 $, even if one only attempts to construct a rough representation of the phenomenon on a compact interval of time.

The theory of ordinary differential equations considers certain fruitful and important generalizations of the problems outlined above. First, one may extend the class of functions within which the solution of the Cauchy problem (2), (3) is sought: the solution may be determined in the class of absolutely-continuous functions, and the existence of such solutions can be proved. Of special practical interest is finding the solution of equation (2) when the function $ f ( t , x ) $ is discontinuous or many-valued with respect to $ x $. The most general problem in this respect is the problem of solving a differential inclusion.

Also under consideration are ordinary differential equations of order $ n $ more general than (10), which are not solved with respect to the leading derivative:

$$ F ( t , y , \dot{y}, \dots, y ^ {( n ) } ) = 0 . $$

Studies of this equation are closely connected with the theory of implicit functions.

Equation (2) connects the derivative of the solution at a point $ t $ with the value of the solution at this point: $ \dot{x} ( t) = f ( t , x ( t) ) $, but certain applied problems (e.g. those in which allowance must be made for a delaying effect of the executing mechanism) yield retarded ordinary differential equations (cf. Differential equations, ordinary, retarded):

$$ \dot{x} = f ( t , x ( t - \tau ) ) , $$

in which the derivative of the solution at a point $ t $ is connected with the value of the solution at a point $ t - \tau $. A special section of the theory of ordinary differential equations deals with such equations, and also with the more general ordinary differential equations with distributed arguments (cf. Differential equations, ordinary, with distributed arguments).
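
For example, the function $ x ( t) = \cos t $ satisfies the retarded equation $ \dot{x} ( t) = - x ( t - \pi / 2 ) $, since $ \dot{x} ( t) = - \sin t = - \cos ( t - \pi / 2 ) $.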

The study of the phase space of the autonomous system (9) leads to yet another generalization of ordinary differential equations. Denote by $ \mathbf x = \mathbf x ( t , \mathbf x _ {0} ) $ the trajectory of this system passing through the point $ \mathbf x _ {0} $. If the point $ \mathbf x _ {0} $ is mapped to the point $ \mathbf x ( t , \mathbf x _ {0} ) $, one obtains a transformation of the phase space depending on the parameter $ t $ which determines the motion in this space. The properties of such motions are studied in the theory of dynamical systems. They may be studied not only in Euclidean space but also on manifolds; an example is provided by differential equations on a torus.

Above, ordinary differential equations over the field of real numbers have been considered (e.g. finding a real-valued function $ x ( t) $ of a real variable $ t $ satisfying equation (2)). However, certain properties of such equations are more conveniently studied with the aid of complex numbers. A natural further generalization is the study of ordinary differential equations over the field of complex numbers. Thus, one may consider the equation

$$ \frac{dw}{dz} = f ( z , w ) , $$

where $ f ( z , w ) $ is an analytic function of its variables, and pose the problem of finding an analytic function $ w ( z) $ of the complex variable $ z $ which satisfies this equation. The study of such equations, of equations of higher orders and of systems forms the subject of the analytic theory of differential equations; in particular, it contains results of importance to mathematical physics, concerning linear ordinary differential equations of the second order (cf. Linear ordinary differential equation of the second order).

One may also consider the equation

$$ \tag{14 } \frac{d \mathbf x }{dt} = \mathbf f ( t , \mathbf x ) $$

on the assumption that $ \mathbf x $ belongs to an infinite-dimensional Banach space $ B $, $ t $ is a real or complex independent variable and $ \mathbf f ( t , \mathbf x ) $ is an operator mapping the product $ ( - \infty , + \infty ) \times B $ into $ B $. Equation (14) can be used to treat, for example, infinite-order systems of ordinary differential equations (cf. Differential equations, infinite-order system of). Equations of the type (14) are studied in the theory of abstract differential equations (cf. Differential equation, abstract), which is the meeting point of ordinary differential equations and functional analysis. Of major interest are linear differential equations of the form

$$ \frac{d \mathbf x }{dt} = A ( t) \mathbf x + \mathbf F ( t) $$

with bounded or unbounded operators; certain classes of partial differential equations (cf. Differential equation, partial) can be written in the form of such an equation.

References

[1] E. Kamke, "Differentialgleichungen: Lösungen und Lösungsmethoden" , 1. Gewöhnliche Differentialgleichungen , Chelsea, reprint (1947)
[2] E.A. Coddington, N. Levinson, "Theory of ordinary differential equations" , McGraw-Hill (1955) MR0069338 Zbl 0064.33002
[3] S. Lefschetz, "Differential equations: geometric theory" , Interscience (1957) MR0094488 Zbl 0080.06401
[4] I.G. Petrovskii, "Ordinary differential equations" , Prentice-Hall (1966) (Translated from Russian) MR0193298
[5] L.S. Pontryagin, "Ordinary differential equations" , Addison-Wesley (1962) (Translated from Russian) MR0140742 Zbl 0112.05502
[6] G. Sansone, "Ordinary differential equations" , 1–2 , Zanichelli (1948–1949) (In Italian) MR0159075 MR0183915 MR0064221 Zbl 0429.34003 Zbl 0125.05102 Zbl 0108.08703
[7] P. Hartman, "Ordinary differential equations" , Birkhäuser (1982) MR0658490 Zbl 0476.34002

Comments

The collection of all trajectories $ \{ \mathbf x ( t) \} \subset D $ is often referred to as the phase portrait of a differential equation $ \dot{x} = f ( x) $. In connection with stability under various kinds of perturbations, persistence of certain features of the phase portrait, such as persistence of equilibria and persistence of closed orbits, is often of importance: cf. e.g. [a4], Chapt. 16, for a number of results. Particularly nice robust systems are the structurally stable ones. A structurally stable differential equation $ \dot{x} = f ( x) $ on $ D $ is one such that if $ \dot{x} = g ( x) $ is a sufficiently nearby equation, i.e. if $ f $ is near $ g $ in some suitable sense, then there is a homeomorphism of $ D $ onto itself (that is, a one-to-one onto continuous mapping $ D \rightarrow D $ whose inverse is also continuous) which takes the phase portrait of $ \dot{x} = g ( x) $ into the phase portrait of $ \dot{x} = f ( x) $. Cf. [a4] and Rough system for some more details.

Cf. [a5] for a comprehensive account of differential equations with discontinuous right-hand side.

References

[a1] V.I. Arnol'd, "Geometrical methods in the theory of ordinary differential equations" , Springer (1983) (Translated from Russian)
[a2] V.I. Arnol'd, "Ordinary differential equations" , M.I.T. (1973) (Translated from Russian) Zbl 1049.34001 Zbl 0744.34001 Zbl 0659.58012 Zbl 0602.58020 Zbl 0577.34001 Zbl 0956.34502 Zbl 0956.34501 Zbl 0956.34503 Zbl 0237.34008 Zbl 0135.42601
[a3] J.K. Hale, "Ordinary differential equations" , Wiley (1969) MR0419901 Zbl 0186.40901
[a4] M.W. Hirsch, S. Smale, "Differential equations, dynamical systems, and linear algebra" , Acad. Press (1974) MR0486784 Zbl 0309.34001
[a5] A.F. Filippov, "Differential equations with discontinuous right-hand sides" , Kluwer (1988) (Translated from Russian) MR2118433 MR1850551 MR1354280 MR1334843 MR0790682 MR0114016 Zbl 0664.34001
How to Cite This Entry:
Differential equation, ordinary. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Differential_equation,_ordinary&oldid=51983
This article was adapted from an original article by E.F. Mishchenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article