
Differential equations, ordinary, approximate methods of solution of


Methods for obtaining analytical expressions (formulas) or numerical values which approximate to some degree of accuracy the required particular solution of a differential equation or of a system of equations for one or more values of the argument. The importance of approximate methods of solution of differential equations is due to the fact that exact solutions in the form of analytical expressions are only known for a few types of differential equations.

One of the oldest methods for the approximate solution of ordinary differential equations is their expansion into a Taylor series. In this method the solution of an equation

$$y' = f(x, y)$$

with the initial condition

$$y(x_0) = y_0$$

is approximated by a partial sum of the Taylor series

$$y(x) \approx \sum_{i=0}^{n} \frac{(x - x_0)^i}{i!}\, y^{(i)}(x_0).$$

The derivatives $y^{(i)}(x_0)$ are expressed in terms of $f$ and its partial derivatives:

$$y'(x_0) = f(x_0, y_0), \qquad y''(x_0) = f_x(x_0, y_0) + f_y(x_0, y_0)\, f(x_0, y_0),$$
etc. The error of the method is proportional to $(x - x_0)^{n+1}$. For large values of $|x - x_0|$ the error tends to zero only slowly as $n \to \infty$, while if $|x - x_0|$ exceeds the convergence radius of the Taylor series, the error usually does not tend to zero at all as $n \to \infty$. This fact, as well as the necessity of computing a large number of partial derivatives, strongly restricts the scope of applicability of the method. The method of expansion into a Taylor series, as well as methods based on expansions in series of a more general nature, is commonly used to find an approximate solution in the form of an analytical expression. Methods applicable for this purpose also include the Chaplygin method, which involves differential inequalities, and the method of sequential approximation (cf. Sequential approximation, method of). These methods are mainly employed in theoretical investigations and are used only rarely to obtain numerical solutions of differential equations in practical computations.
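As a concrete illustration (not part of the original article), the following sketch implements the Taylor-series method of order two for $y' = f(x, y)$, $y(x_0) = y_0$; the user is assumed to supply $f$ and its partial derivatives $f_x$, $f_y$ by hand, and all function names are illustrative.

```python
# Taylor-series method of order 2 for y' = f(x, y), y(x0) = y0:
#   y_{k+1} = y_k + h*f + (h**2/2)*(f_x + f_y*f).
# f, f_x, f_y are user-supplied callables; this is a sketch, not a library routine.

def taylor2(f, f_x, f_y, x0, y0, h, n_steps):
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n_steps):
        fv = f(x, y)
        y = y + h * fv + 0.5 * h**2 * (f_x(x, y) + f_y(x, y) * fv)
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys

# Example: y' = y, y(0) = 1, whose exact solution is exp(x).
xs, ys = taylor2(lambda x, y: y,      # f(x, y) = y
                 lambda x, y: 0.0,    # f_x = 0
                 lambda x, y: 1.0,    # f_y = 1
                 0.0, 1.0, 0.1, 10)
```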

Asymptotic methods for the approximate solution of differential equations, based on separating the equation to be solved into principal terms and terms which are small compared with them, are often employed in practical work. Small parameter methods (cf. Small parameter, method of the) may serve as an example. Asymptotic methods are used both to obtain analytical expressions approximating the solution and to study the qualitative behaviour of solutions.
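By way of a small, hedged illustration of the small parameter idea (the model equation, the symbols eps, y0, y1 and the use of sympy are assumptions made here, not taken from the article), a regular perturbation expansion of $y' = -y + \varepsilon y^2$, $y(0) = 1$ in powers of $\varepsilon$ reduces the problem to a chain of simple initial value problems:

```python
# Regular perturbation sketch for y' = -y + eps*y**2, y(0) = 1:
# write y ~ y0 + eps*y1 and collect powers of eps.
import sympy as sp

x, eps = sp.symbols('x eps')
y0, y1 = sp.Function('y0'), sp.Function('y1')

# order eps**0:  y0' = -y0,          y0(0) = 1
sol0 = sp.dsolve(sp.Eq(y0(x).diff(x), -y0(x)), y0(x), ics={y0(0): 1})
# order eps**1:  y1' = -y1 + y0**2,  y1(0) = 0
sol1 = sp.dsolve(sp.Eq(y1(x).diff(x), -y1(x) + sol0.rhs**2), y1(x), ics={y1(0): 0})

approx = sp.simplify(sol0.rhs + eps * sol1.rhs)
print(approx)   # equivalent to exp(-x) + eps*(exp(-x) - exp(-2*x))
```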

In the most frequently used methods for obtaining a numerical solution of differential equations, the solution is sought in the form of a table of approximate values of the unknown function for several values of the argument in an interval $[x_0, X]$. For instance, consider the solution of an equation $y' = f(x, y)$ with initial condition $y(x_0) = y_0$ on an interval $[x_0, X]$, under the assumption that the solution is to be computed for the argument values

$$x_k = x_0 + kh, \qquad k = 0, \dots, N, \qquad h = \frac{X - x_0}{N}.$$

The quantities $x_k$ are called the nodes, while the magnitude $h$ is called the step; $y_k$ denotes the value of the approximate solution at the node $x_k$.

One of the simplest numerical methods, the Euler method, is based on an approximate computation of the integral term in the identity

$$y(x_{k+1}) = y(x_k) + \int_{x_k}^{x_{k+1}} f(x, y(x))\,dx$$

by quadratures, using the rectangle formula; this gives

$$y_{k+1} = y_k + h f(x_k, y_k).$$

The truncation error of Euler's method is proportional to $h^2$. By approximating the integral

$$\int_{x_k}^{x_{k+1}} f(x, y(x))\,dx \tag{1}$$

by more exact quadrature formulas it is possible to obtain more accurate numerical methods. For instance, if the trapezoidal formula is employed to approximate (1), then

$$y_{k+1} = y_k + \frac{h}{2}\bigl[f(x_k, y_k) + f(x_{k+1}, y_{k+1})\bigr].$$
This equation usually cannot be solved explicitly for $y_{k+1}$. It may be solved by iteration, taking the value of $y_{k+1}$ obtained by Euler's method as the initial approximation. One iteration leads to the formulas

$$y_{k+1} = y_k + \frac{h}{2}(k_1 + k_2),$$

where

$$k_1 = f(x_k, y_k)$$

and

$$k_2 = f(x_k + h,\; y_k + h k_1).$$
The error of these formulas is of the order $h^3$. They belong to the family of Runge–Kutta methods (cf. Runge–Kutta method); the most familiar method of this type, the classical fourth-order Runge–Kutta method, involves an error of the order $h^5$. Runge–Kutta methods are one-step methods, since it is sufficient to know $y_k$, the value of the approximate solution at the previous node, in order to compute $y_{k+1}$. For this reason Runge–Kutta methods may also be employed when the nodes are not equispaced, i.e. when the difference $x_{k+1} - x_k$ is not constant. In choosing the integration step it is useful to have some estimate of the step error of the method. This error can, for example, be estimated by Richardson extrapolation: $y_{k+1}$ is computed twice, by two steps of length $h/2$ and by one step of length $h$; if the values thus obtained are denoted by $\tilde y_{k+1}$ and $\hat y_{k+1}$, respectively, the error in the step is approximately

$$\frac{\tilde y_{k+1} - \hat y_{k+1}}{2^s - 1},$$

where $s$ is the order of the method with respect to $h$ (for Euler's method $s = 1$, for the trapezoidal formula $s = 2$, etc.). Another way of estimating the step error is to derive formulas of Runge–Kutta type with a control term which approximates the principal term of the step error of the method up to small terms of higher order. So-called implicit one-step methods have also been developed and have proved highly effective for certain classes of problems, mainly boundary value problems.
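The one-step formulas just described are easy to state in code. The following sketch (with illustrative names, not from the article) implements an Euler step, the trapezoidal-based Heun step with $k_1$, $k_2$, and the Richardson step-doubling estimate of the step error:

```python
# Euler step, Heun (trapezoidal-based) step, and a Richardson-type step-error
# estimate obtained by comparing one step of length h with two steps of length h/2.

def euler_step(f, x, y, h):
    return y + h * f(x, y)

def heun_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def step_error_estimate(step, f, x, y, h, s):
    """s is the order of the method with respect to h (s = 1 for Euler, s = 2 for Heun)."""
    y_big = step(f, x, y, h)                                         # one step of length h
    y_half = step(f, x + 0.5 * h, step(f, x, y, 0.5 * h), 0.5 * h)   # two steps of length h/2
    return (y_half - y_big) / (2**s - 1), y_half

# Example: one step for y' = y starting from y(0) = 1 with h = 0.1.
err, y_next = step_error_estimate(heun_step, lambda x, y: y, 0.0, 1.0, 0.1, 2)
```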

In addition to one-step methods, multi-step methods (or finite-difference methods) are also employed in the numerical solution of differential equations. In these methods the determination of $y_{k+1}$ requires the knowledge not of $y_k$ alone, but also of the values of the approximate solution at a number of previous nodes. The formulas of $m$-step methods have the form

$$\sum_{i=0}^{m} a_i y_{k+1-i} = h \sum_{i=0}^{m} b_i f(x_{k+1-i}, y_{k+1-i}),$$

where the $a_i$, $b_i$ are constants and $a_0 \neq 0$. If $b_0 = 0$, the respective method is said to be an explicit method; if $b_0 \neq 0$, it is said to be an implicit method. Methods of the Adams type (cf. Adams method) are a particular case of multi-step methods:

$$y_{k+1} = y_k + h \sum_{i=0}^{m} b_i f(x_{k+1-i}, y_{k+1-i}).$$
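For definiteness, here is a small sketch of an explicit two-step Adams (Adams–Bashforth) formula, $y_{k+1} = y_k + h(\tfrac{3}{2} f_k - \tfrac{1}{2} f_{k-1})$; the starting value $y_1$ is produced by a one-step method, as discussed below for multi-step methods in general. The function names are illustrative.

```python
# Explicit two-step Adams (Adams–Bashforth) method for y' = f(x, y), y(x0) = y0.
def adams_bashforth2(f, x0, y0, h, n_steps):
    xs = [x0, x0 + h]
    k1 = f(x0, y0)                                             # one Heun step supplies y_1
    ys = [y0, y0 + 0.5 * h * (k1 + f(x0 + h, y0 + h * k1))]
    for k in range(1, n_steps):
        fk = f(xs[k], ys[k])
        fkm1 = f(xs[k - 1], ys[k - 1])
        ys.append(ys[k] + h * (1.5 * fk - 0.5 * fkm1))
        xs.append(xs[k] + h)
    return xs, ys
```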
The computations are usually based on a pair of $m$-step formulas, one of which is explicit while the other one is implicit. Such pairs of formulas are called predictor-corrector methods. Hamming's formulas may be quoted as an example of such a pair:

$$p_{k+1} = y_{k-3} + \frac{4h}{3}\bigl(2 f_k - f_{k-1} + 2 f_{k-2}\bigr),$$

$$m_{k+1} = p_{k+1} + \frac{112}{121}\bigl(c_k - p_k\bigr),$$

$$c_{k+1} = \frac{9 y_k - y_{k-2}}{8} + \frac{3h}{8}\bigl(f(x_{k+1}, m_{k+1}) + 2 f_k - f_{k-1}\bigr),$$

$$y_{k+1} = c_{k+1} - \frac{9}{121}\bigl(c_{k+1} - p_{k+1}\bigr),$$

the truncation error of which is of the order $h^5$. Computation according to Hamming's formulas involves, in that order: calculation of a "prediction" $p_{k+1}$; of a "modification" $m_{k+1}$; of a "correction" $c_{k+1}$; and, finally, of the approximate solution $y_{k+1}$.
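A minimal predictor-corrector sketch is given below. It is not Hamming's method: to keep the example short it pairs the explicit two-step Adams formula above (as predictor) with one evaluation of the implicit trapezoidal (Adams–Moulton) formula (as corrector), which illustrates the same predict-evaluate-correct pattern.

```python
# One predictor-corrector step: explicit Adams predictor, trapezoidal corrector.
def pc_step(f, xs, ys, k, h):
    """Advance from node k to node k+1, given values at nodes k-1 and k."""
    fk = f(xs[k], ys[k])
    fkm1 = f(xs[k - 1], ys[k - 1])
    y_pred = ys[k] + h * (1.5 * fk - 0.5 * fkm1)               # predictor (explicit)
    y_corr = ys[k] + 0.5 * h * (fk + f(xs[k] + h, y_pred))     # corrector (one implicit iteration)
    return y_corr
```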

In celestial mechanics Störmer's formulas (cf. Störmer method) are extensively employed. They are especially suited for equations of the type

$$y'' = f(x, y)$$

and have the form

$$y_{k+1} - 2 y_k + y_{k-1} = h^2 \Bigl(f_k + \sum_{j \ge 1} \sigma_j \nabla^j f_k\Bigr)$$

(the explicit Störmer formula) and

$$y_{k+1} - 2 y_k + y_{k-1} = h^2 \Bigl(f_{k+1} + \sum_{j \ge 1} \sigma_j^{*} \nabla^j f_{k+1}\Bigr)$$

(the implicit Störmer formula). In the Störmer formulas the $\sigma_j$ and $\sigma_j^{*}$ are numerical coefficients,

$$f_k = f(x_k, y_k),$$

and $\nabla^j f_k$ is the (backward) finite difference of order $j$.
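The lowest-order explicit Störmer formula, obtained by dropping all difference terms, is the leapfrog recursion $y_{k+1} = 2 y_k - y_{k-1} + h^2 f(x_k, y_k)$; the sketch below (illustrative names, second starting value taken from a Taylor expansion) implements it.

```python
# Lowest-order explicit Störmer (leapfrog / Störmer–Verlet) scheme for y'' = f(x, y).
def stormer(f, x0, y0, dy0, h, n_steps):
    xs = [x0, x0 + h]
    ys = [y0, y0 + h * dy0 + 0.5 * h**2 * f(x0, y0)]    # y_1 from a Taylor expansion
    for k in range(1, n_steps):
        ys.append(2 * ys[k] - ys[k - 1] + h**2 * f(xs[k], ys[k]))
        xs.append(xs[k] + h)
    return xs, ys

# Example: y'' = -y, y(0) = 1, y'(0) = 0 (exact solution cos x).
xs, ys = stormer(lambda x, y: -y, 0.0, 1.0, 0.0, 0.05, 200)
```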

The employment of an $m$-step method is only possible if the values of the solution at the first $m$ nodes are known. To find these values, one-step methods with a truncation error of the same order are usually employed.

The solution of boundary value problems for ordinary differential equations may be reduced to solving a number of problems with initial conditions. The simplest method of this type is the shooting method, which is used in both linear and non-linear boundary value problems. For instance, let a solution be sought of the boundary value problem for one equation of the second order:

$$y'' = f(x, y, y'), \qquad y(a) = y_a, \qquad y(b) = y_b.$$

Putting $y'(a) = \alpha$, it is possible to solve the problem with the initial conditions

$$y(a) = y_a, \qquad y'(a) = \alpha \tag{2}$$

on the segment $[a, b]$ and to compute the value $y(b) = \varphi(\alpha)$. The required value $\alpha = \alpha^{*}$ is found from the condition $\varphi(\alpha) = y_b$. The solution of the boundary value problem will then coincide with the solution of the problem with the initial conditions $y(a) = y_a$, $y'(a) = \alpha^{*}$. The root $\alpha^{*}$ of the equation $\varphi(\alpha) = y_b$ is usually sought by some approximation method, and its determination involves repeated solution of the problem with the initial conditions (2). The shooting method is often unstable with respect to computational errors.
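A hedged sketch of the shooting method follows; the use of scipy's solve_ivp and brentq, the bracket for $\alpha$, and all function names are assumptions made for the illustration, not part of the article.

```python
# Shooting method for y'' = f(x, y, y'), y(a) = ya, y(b) = yb:
# phi(alpha) = y(b) computed with y'(a) = alpha, and phi(alpha) = yb solved for alpha.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(f, a, b, ya, yb, alpha_lo, alpha_hi):
    rhs = lambda x, u: [u[1], f(x, u[0], u[1])]      # first-order system for (y, y')
    def phi(alpha):
        sol = solve_ivp(rhs, (a, b), [ya, alpha])
        return sol.y[0, -1] - yb
    alpha_star = brentq(phi, alpha_lo, alpha_hi)     # root of phi(alpha) = 0
    return solve_ivp(rhs, (a, b), [ya, alpha_star], dense_output=True)

# Example: y'' = -y on [0, pi/2], y(0) = 0, y(pi/2) = 1 (exact solution sin x).
sol = shoot(lambda x, y, dy: -y, 0.0, np.pi / 2, 0.0, 1.0, 0.1, 5.0)
```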

Linear boundary value problems are often solved by the method of invariant imbedding (the differential sweep method), in which solving the boundary value problem for an equation of the second order is reduced to solving three problems with an initial condition for first-order equations. Thus, let the boundary value problem to be solved be

$$y'' + p(x) y' + q(x) y = f(x), \qquad y'(a) = \chi_a y(a) + \psi_a, \qquad y'(b) = \chi_b y(b) + \psi_b.$$

Choose functions $u(x)$ and $v(x)$ such that

$$y'(x) = u(x) y(x) + v(x)$$

for all $x \in [a, b]$. These functions may be obtained as solutions of the problems with the initial conditions

$$u' = -u^2 - p(x) u - q(x), \qquad u(a) = \chi_a,$$

and

$$v' = -\bigl(u(x) + p(x)\bigr) v + f(x), \qquad v(a) = \psi_a.$$

The solution of these two problems is known as the forward sweep. The forward sweep yields two conditions for the determination of $y(b)$ and $y'(b)$:

$$y'(b) = u(b) y(b) + v(b), \qquad y'(b) = \chi_b y(b) + \psi_b.$$

On the strength of these conditions one finds $y(b)$, after which the solution of the original boundary value problem is obtained as the solution of the problem with initial condition $y(b)$ for the first-order equation

$$y' = u(x) y + v(x)$$

on the segment $[a, b]$, integrated from $b$ to $a$; this is the so-called backward sweep. These methods are also suited for the solution of boundary value problems for systems of differential equations. Finite-difference analogues of these methods are widely employed in computational practice.
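As an illustration of the finite-difference analogue mentioned above (a sketch under assumptions: Dirichlet boundary conditions, central differences, vectorized coefficient functions; none of the names are from the article), the linear problem $y'' + p(x) y' + q(x) y = f(x)$, $y(a) = y_a$, $y(b) = y_b$ can be reduced to a tridiagonal system and solved by a forward and backward sweep (the Thomas algorithm):

```python
# Finite-difference sweep for y'' + p(x)y' + q(x)y = f(x), y(a) = ya, y(b) = yb.
import numpy as np

def fd_sweep(p, q, f, a, b, ya, yb, n):
    h = (b - a) / n
    x = a + h * np.arange(1, n)                  # interior nodes x_1, ..., x_{n-1}
    # central differences give A_i*y_{i-1} + B_i*y_i + C_i*y_{i+1} = F_i
    A = 1.0 - 0.5 * h * p(x)
    B = -2.0 + h**2 * q(x)
    C = 1.0 + 0.5 * h * p(x)
    F = h**2 * f(x)
    F[0] -= A[0] * ya                            # known boundary value y_0 = ya
    # forward sweep: y_i = alpha_i * y_{i+1} + beta_i
    alpha, beta = np.zeros(n - 1), np.zeros(n - 1)
    alpha[0], beta[0] = -C[0] / B[0], F[0] / B[0]
    for i in range(1, n - 1):
        denom = B[i] + A[i] * alpha[i - 1]
        alpha[i] = -C[i] / denom
        beta[i] = (F[i] - A[i] * beta[i - 1]) / denom
    # backward sweep, starting from the known boundary value y_n = yb
    y = np.zeros(n + 1)
    y[0], y[-1] = ya, yb
    for i in range(n - 1, 0, -1):
        y[i] = alpha[i - 1] * y[i + 1] + beta[i - 1]
    return np.linspace(a, b, n + 1), y

# Example: y'' = -y on [0, pi/2], y(0) = 0, y(pi/2) = 1 (exact solution sin x).
xs, ys = fd_sweep(lambda x: 0 * x, lambda x: 1 + 0 * x, lambda x: 0 * x,
                  0.0, np.pi / 2, 0.0, 1.0, 50)
```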

In addition, linearization methods in combination with the shooting method are used in solving non-linear boundary value problems. The Newton method is the most commonly used method of this class.

Various variational methods are also employed in solving boundary value problems: the Ritz method, the Galerkin method, etc. Variational methods reduce the solution of a boundary value problem to the minimization of some functional; the approximate solution is sought in a pre-set form

$$y_n(x) = \varphi_0(x) + \sum_{i=1}^{n} c_i \varphi_i(x),$$

where the $\varphi_i$ are given coordinate functions, the parameters $c_i$ being determined from the condition that the functional attains its minimum.
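For instance (a hedged sketch with assumed data: the model problem $-y'' = f(x)$, $y(0) = y(1) = 0$, sine coordinate functions and a crude quadrature; none of this is prescribed by the article), minimizing the energy functional $J[y] = \int_0^1 \bigl(\tfrac12 (y')^2 - f y\bigr)\,dx$ over $y_n = \sum_k c_k \sin(k\pi x)$ gives $c_k = 2\int_0^1 f(x)\sin(k\pi x)\,dx / (k\pi)^2$:

```python
# Ritz/Galerkin sketch for -y'' = f(x), y(0) = y(1) = 0, with sine coordinate functions.
import numpy as np

def ritz_sine(f, n_terms, n_quad=2000):
    x = np.linspace(0.0, 1.0, n_quad)
    dx = x[1] - x[0]
    fx = f(x)
    coeffs = []
    for k in range(1, n_terms + 1):
        b_k = np.sum(fx * np.sin(k * np.pi * x)) * dx          # crude quadrature of f*phi_k
        coeffs.append(2.0 * b_k / (k * np.pi) ** 2)
    return lambda t: sum(c * np.sin((k + 1) * np.pi * t) for k, c in enumerate(coeffs))

# Example: f(x) = 1, whose exact solution is y(x) = x*(1 - x)/2.
y_approx = ritz_sine(lambda x: np.ones_like(x), 10)
```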

Most numerical methods for solving ordinary differential equations have been realized as computer library programs.

Besides the analytical and numerical methods for the approximate solution of ordinary differential equations, graphical methods are also employed. An example is the isocline method, which involves the construction of the field of directions defined by the equation. Analogue computers and other modelling apparatuses are also used.
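A short sketch of the graphical idea (plotting library, sample equation and ranges are all illustrative assumptions): the direction field and a few isoclines of $y' = f(x, y)$ can be drawn as follows.

```python
# Direction field and isoclines f(x, y) = const for y' = f(x, y).
import numpy as np
import matplotlib.pyplot as plt

f = lambda x, y: x - y                           # sample right-hand side
X, Y = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
S = f(X, Y)                                      # slope at each grid point
L = np.sqrt(1 + S**2)                            # normalize the direction vectors
plt.quiver(X, Y, 1 / L, S / L, angles='xy')
plt.contour(X, Y, S, levels=[-1, 0, 1])          # isoclines: curves of constant slope
plt.xlabel('x'); plt.ylabel('y')
plt.show()
```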



Comments

The classification of all possible differential equations allowing analytical solutions goes back to L. Euler. A survey of more than 1500 analytically solvable differential equations may be found in [a5].

In the above discussion of the order of Euler's method, the trapezoidal method and the Runge–Kutta methods, the given orders should be interpreted as local orders, that is, as the order of accuracy after just one step; the actual (global) order at a fixed node is one less. For instance, Euler's method has local order 2, but it produces numerical solutions with errors of magnitude $O(h)$ as $h \to 0$.
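This is easy to verify numerically; the small check below (test problem and names chosen for illustration) shows the error of Euler's method at $x = 1$ being roughly halved each time $h$ is halved, i.e. global order 1.

```python
# Global order of Euler's method on y' = y, y(0) = 1: error at x = 1 versus h.
import math

def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

for n in (10, 20, 40, 80):
    h = 1.0 / n
    err = abs(euler(lambda x, y: y, 0.0, 1.0, h, n) - math.e)
    print(f"h = {h:.4f}   error = {err:.5f}")    # error decreases roughly like h
```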

In the West the standard terminology for Runge–Kutta methods with a control term for step error estimation (usually termed local error estimation) is embedded Runge–Kutta methods. The classical example of such a method is Merson's fourth-order method, which for linear equations supplies a fifth-order error estimate (cf. [a7] and [a4]). Widely used embedded methods are those of E. Fehlberg ([a2], [a3], see also [a4]) and the more recent method of J.R. Dormand and P.J. Prince [a1].

Predictor-corrector methods date back to F.R. Moulton [a9] and W.E. Milne [a8]. The example of Hamming's method given above is in fact a predictor-corrector method equipped with the so-called Milne device, a technique for estimating the local error of predictor-corrector methods. A detailed treatment of Milne's device can be found in [a6].

Implicit Runge–Kutta methods, in particular those based on (symmetric) Gauss formulas, belong to the most powerful tools for solving boundary value problems; since these problems are global in nature, the disadvantage such methods have when applied to initial value problems is no longer serious. Collocation at Gauss points can be viewed as a finite-difference method for boundary value problems.

References

[a1] J.R. Dormand, P.J. Prince, "A family of embedded Runge–Kutta formulae" J. Comp. Appl. Math. , 6 (1980) pp. 19–26
[a2] E. Fehlberg, "Classical fifth-, sixth-, seventh- and eighth-order Runge–Kutta formulas with stepsize control" NASA TR , 287 (1968) (Abstract in: Computing (1969), 93–106)
[a3] E. Fehlberg, "Low order classical Runge–Kutta formulas with stepsize control and their application to some heat transfer problems" NASA TR , 315 (1969) (Abstract in: Computing (1969), 61–71)
[a4] E. Hairer, S.P. Nørsett, G. Wanner, "Solving ordinary differential equations" , I. Nonstiff problems , Springer (1987)
[a5] E. Kamke, "Differentialgleichungen: Lösungen und Lösungsmethoden" , 1–2 , Chelsea, reprint (1947)
[a6] J.D. Lambert, "Computational methods in ordinary differential equations" , Wiley (1973)
[a7] R.H. Merson, "An operational method for the study of integration processes" , Proc. Symp. Data Processing , Weapons Res. Establ. Salisbury , Salisbury (1957) pp. 110–125
[a8] W.E. Milne, "Numerical integration of ordinary differential equations" Amer. Math. Monthly , 33 (1926) pp. 455–460
[a9] F.R. Moulton, "New methods in exterior ballistics" , Univ. Chicago Press (1926)
[a10] U.M. Ascher, R.M.M. Mattheij, R.D. Russell, "Numerical solution of boundary value problems for ordinary differential equations" , Prentice-Hall (1988)
[a11] J.M. Watt (ed.) , Modern numerical methods for ordinary differential equations , Clarendon Press (1976)
How to Cite This Entry:
Differential equations, ordinary, approximate methods of solution of. S.S. Gaisaryan (originator), Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Differential_equations,_ordinary,_approximate_methods_of_solution_of&oldid=11532
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098