Non-linear equation, numerical methods

From Encyclopedia of Mathematics

Iteration methods for the solution of non-linear equations.

By a non-linear equation one means (see [1]–[3]) an algebraic or transcendental equation of the form

$$ f(x) = 0, \tag{1}$$

where $x$ is a real variable and $f$ a non-linear function, and by a system of non-linear equations a system of the form

$$ f_i(x_1,\dots,x_n) = 0, \quad i = 1,\dots,n, \tag{2}$$

that is not linear; the solutions of (2) are $n$-dimensional vectors $x = (x_1,\dots,x_n)$. Equation (1) and the system (2) can be treated as a non-linear operator equation

$$ L(u) = 0 \tag{3}$$

with a non-linear operator $L$ acting from a finite-dimensional vector space $H$ into $H$.

Numerical methods for the solution of a non-linear equation (3) are called iteration methods if they are defined by the transition from a known approximation $u^n$ at the $n$-th iteration to a new approximation $u^{n+1}$, and allow one to find, in a sufficiently large number of iterations, a solution of (3) within prescribed accuracy $\epsilon > 0$. The most important iteration methods for the approximate solution of (3), both of a general and of a special form, characteristic for discrete (grid) methods for solving boundary value problems for equations and systems of partial differential equations of strongly-elliptic type, are the object of study of the present article. Non-linear operator equations in infinite-dimensional spaces (see, for example, [4]–[8]) form a very broad mathematical concept, including as special cases, for example, non-linear integral equations and non-linear boundary value problems. Numerical methods for their approximate solution include also methods of approximation by finite-dimensional equations; such methods are treated separately.

One of the most important methods for solving an equation (3) is the simple iteration method (successive substitution), which assumes that one can replace (3) by an equivalent system

$$ u = P(u), \tag{4}$$

where $u$ is an element of a finite-dimensional normed space $H$ and $P$, mapping $H$ into $H$, is a contractive operator:

$$ \| P(u) - P(v) \| \le q \, \| u - v \|, \quad q < 1. \tag{5}$$

Then by the general contractive-mapping principle (see [1]–[4] and Contracting-mapping principle) equation (4) has a unique solution $u$, the method of simple iteration

$$ u^{n+1} = P(u^n), \quad n = 0, 1, \dots, \tag{6}$$

converges for any initial approximation $u^0$, and the error at the $n$-th iteration satisfies the estimate

$$ \| u^n - u \| \le q^n \| u^0 - u \|.$$

Suppose that some solution $u$ of (3) has a surrounding ball $S(u, r) = \{ v : \| v - u \| \le r \}$ such that the system (3) considered together with the additional condition

$$ v \in S(u, r) \tag{7}$$

is equivalent to (4) considered together with (7), and that (5) holds for this $u$ and any $v$ with $\| v - u \| \le r$. Then for a choice of an initial approximation $u^0$ from $S(u, r)$, in the method (6) convergence of $u^n$ to $u$ with the error estimate above is guaranteed.
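The simple iteration (6) can be sketched as follows; the example equation $x = \cos x$, the tolerance and the iteration cap are illustrative choices, not part of the article.

```python
# Simple iteration u^{n+1} = P(u^n) for a contractive operator P.
# Illustrative example: x = cos(x) on the reals; near the fixed point
# P = cos is a contraction, so the error decays like q^n.
import math

def simple_iteration(P, u0, tol=1e-12, max_iter=1000):
    u = u0
    for _ in range(max_iter):
        u_next = P(u)
        if abs(u_next - u) <= tol:   # successive iterates agree to tol
            return u_next
        u = u_next
    raise RuntimeError("no convergence within max_iter iterations")

root = simple_iteration(math.cos, u0=1.0)
# root satisfies cos(root) = root (approximately 0.739085)
```

The stopping rule compares successive iterates, which for a contraction with factor $q$ bounds the true error by $\mathrm{tol}/(1-q)$.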

For twice continuously-differentiable functions $f_i$, if a good initial approximation to a solution of the system (2) is available, frequently an effective means of improving the accuracy is the Newton–Kantorovich method, in which the equation $f_i(x_1,\dots,x_n) = 0$ from (2), determining a certain surface, is replaced by the equation of the tangent surface to it at the point $x^{(k)} = (x_1^{(k)},\dots,x_n^{(k)})$, where $x^{(k)}$ is a previously obtained approximation to a solution $x$ of (2) (see [1]–[5]). Under certain additional conditions the Newton–Kantorovich method leads to an error estimate of the form

$$ \| x^{(k)} - x \| \le c \, q^{2^k}, $$

where $c > 0$ and $q \in (0,1)$ are certain constants. At every iteration of this method one has to solve the system of linear algebraic equations with the Jacobi matrix

$$ \left( \frac{\partial f_i(x^{(k)})}{\partial x_j} \right), \quad i, j = 1,\dots,n. $$

Sometimes one keeps this matrix fixed for several iterations, and sometimes one replaces the derivatives by difference approximations.
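A minimal sketch of the Newton–Kantorovich iteration for a small system, using the difference approximation of the derivatives mentioned above; the 2×2 test system, the step size of the forward differences and the tolerances are illustrative assumptions.

```python
# Newton–Kantorovich iteration for a system f(x) = 0: at each step solve
# the linear system J(x^k) d = -f(x^k) with the Jacobi matrix J,
# then set x^{k+1} = x^k + d.
import numpy as np

def newton_system(f, x0, tol=1e-12, max_iter=50, h=1e-7):
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) <= tol:
            return x
        # Forward-difference approximation of the Jacobi matrix
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (f(x + e) - fx) / h
        x = x + np.linalg.solve(J, -fx)
    return x

# Illustrative system: x^2 + y^2 = 2, x - y = 0, with solution (1, 1).
f = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
sol = newton_system(f, [2.0, 0.5])
```

With a good initial approximation the iterates roughly double the number of correct digits per step, in agreement with the $q^{2^k}$ estimate; the difference Jacobian limits the final accuracy to about the differencing step.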

The Newton–Kantorovich method belongs to the group of methods based on a linearization of (3). Another method in this group is the secant method.
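For a scalar equation (1) the secant method replaces the tangent of Newton's method by the secant line through the two latest iterates; a minimal sketch, where the test equation and tolerances are illustrative.

```python
# Secant method for f(x) = 0: x_{k+1} is the root of the secant line
# through (x_{k-1}, f(x_{k-1})) and (x_k, f(x_k)).
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) <= tol:
            return x1
        # Assumes f1 != f0; for a simple root near the iterates this holds
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Illustrative: the positive root of x^2 - 2 = 0.
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```

No derivatives are evaluated, at the price of a slightly slower (superlinear rather than quadratic) convergence order.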

A large number of iteration methods (the so-called descent methods, see [1]–[3], [9], [10]–[13]) are based on replacing the solution of the equations (3) by the minimization of a certain functional $\Phi$ (cf. also Descent, method of). For example, for the system (2) one can take

$$ \Phi(x) = \sum_{i=1}^n f_i^2(x_1,\dots,x_n). \tag{8}$$

In a number of cases, when the initial non-linear equations are the Euler equations for the problem of minimizing a certain functional $\Phi$, such a variational formulation of the problem is even more natural; the operators $L$ in similar situations are gradients of the functional $\Phi$ and are called potential operators (see [5], [6]). Among the several versions of descent methods one can mention the methods of coordinate-wise descent, several gradient methods, in particular the method of steepest descent and the method of conjugate gradients, and also their modifications (see [2], [9], [10]–[13]).

A number of iteration methods for the solution of the equations (3), describing a certain stationary state, can be treated as discretizations of the corresponding non-stationary problems. Therefore, methods of this class are called stabilization methods (cf. Adjustment method and, for example, [2]). Examples of such non-stationary problems are problems described by a system of ordinary differential equations

$$ \frac{du}{dt} + L(u) = 0, \quad u(0) = u^0. $$
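The method of steepest descent for the sum-of-squares functional $\Phi = \sum_i f_i^2$ can be sketched as follows; the test system, the gradient formula via the Jacobi matrix, and the step-halving rule are illustrative assumptions.

```python
# Steepest descent for Phi(x) = sum_i f_i(x)^2; the gradient is
# grad Phi = 2 J(x)^T f(x), and the step is halved until Phi decreases.
import numpy as np

def steepest_descent(f, jac, x0, tol=1e-10, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        g = 2.0 * jac(x).T @ r          # gradient of Phi at x
        if np.linalg.norm(g) <= tol:
            return x
        phi = r @ r
        t = 1.0
        # Backtracking: halve the step until Phi actually decreases
        while f(x - t * g) @ f(x - t * g) >= phi and t > 1e-16:
            t *= 0.5
        x = x - t * g
    return x

# Illustrative system: f1 = x - 1, f2 = y + 2, with minimizer (1, -2).
f = lambda v: np.array([v[0] - 1.0, v[1] + 2.0])
jac = lambda v: np.eye(2)
sol = steepest_descent(f, jac, [5.0, 5.0])
```

Any stationary point of $\Phi$ at which $\Phi = 0$ is a solution of the system, which is why descent methods double as solvers for (2).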

The introduction of an additional independent variable is characteristic for the method of differentiation with respect to a parameter (a continuation method, see [5], [13]). Its essence consists in the introduction of an auxiliary parameter $t \in [0, 1]$, the choice of continuously-differentiable functions $F_i(x_1,\dots,x_n,t)$, and the replacement of (2) by the system

$$ F_i(x_1,\dots,x_n,t) = 0, \quad i = 1,\dots,n; \tag{9}$$

for $t = 0$ the system (9) must be easy to solve, and the functions $F_i(x_1,\dots,x_n,1)$ must coincide with the $f_i(x_1,\dots,x_n)$, $i = 1,\dots,n$. Generally speaking, the system (9) determines

$$ x_1 = x_1(t), \dots, x_n = x_n(t) $$

as functions of $t$, and the required solution of (2) is $x(1) = (x_1(1),\dots,x_n(1))$. If (9) is differentiable with respect to $t$, the result is a system of ordinary differential equations

$$ \sum_{j=1}^n \frac{\partial F_i}{\partial x_j} \frac{dx_j}{dt} + \frac{\partial F_i}{\partial t} = 0, \quad i = 1,\dots,n. \tag{10}$$

If one solves the Cauchy problem for it on $[0, 1]$ with initial conditions $x_1(0),\dots,x_n(0)$ that are solutions of the system

$$ F_i(x_1,\dots,x_n,0) = 0, \quad i = 1,\dots,n, $$

then one finds a solution of (2). Discretization of (10) with respect to $t$ leads to a numerical method for solving the system (2).

In the method of continuation with respect to a parameter (cf. Continuation method (to a parametrized family, for non-linear operators)) the system (9) is solved for $t = t_0 = 0 < t_1 < \dots < t_N = 1$, and for every one of these values of $t$ one applies a certain iteration method with an initial approximation that agrees with the approximation obtained by solving the system for the preceding value of $t$. Both these methods are in essence iteration methods for solving (2) with a special procedure for finding a good initial approximation.
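Continuation with respect to a parameter can be sketched as follows; the convex homotopy $F(x,t) = t f(x) + (1-t)(x - x^0)$ as a concrete choice of (9), the uniform grid in $t$, and the Newton corrector at each step are all illustrative assumptions.

```python
# Continuation: solve F(x, t_k) = 0 for t_0 = 0 < t_1 < ... < t_N = 1,
# starting each Newton correction from the solution at the previous t_k.
# Here F(x, t) = t*f(x) + (1 - t)*(x - x0) is an illustrative homotopy:
# trivially solvable at t = 0, and equal to the original system at t = 1.
import numpy as np

def continuation(f, jac, x0, steps=10, newton_iters=8):
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):          # Newton corrector at t
            F = t * f(x) + (1.0 - t) * (x - x0)
            J = t * jac(x) + (1.0 - t) * np.eye(x.size)
            x = x + np.linalg.solve(J, -F)
    return x

# Illustrative equation: x^3 + x - 2 = 0 (one variable, root x = 1).
f = lambda v: np.array([v[0]**3 + v[0] - 2.0])
jac = lambda v: np.array([[3.0 * v[0]**2 + 1.0]])
sol = continuation(f, jac, [0.0])
```

Each Newton correction starts from the solution at the previous parameter value, which is exactly the "good initial approximation" the text describes.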

In the case of systems the problem of localizing the solution causes great difficulty. Since the majority of iteration methods converge only in the presence of a fairly good approximation to a solution, the two methods described above often make it possible to dispense with the need to localize a solution directly. For localization one also frequently uses theorems based on topological principles and on the monotonicity of operators (see [4]–[8]).

For the solution of equations (1), which are the simplest special cases of (3), the number of known iteration methods applicable in practice is very large (see, for example, [1]–[3], [12], [14]). Apart from the ones already considered one can mention, for example, the iteration methods of higher order (see [1], [14]), which include Newton's method as a special case, and the many iteration methods especially oriented to finding real or complex roots of polynomials

$$ P_n(z) = a_0 z^n + a_1 z^{n-1} + \dots + a_n, $$

where the coefficients $a_0,\dots,a_n$ are real or complex numbers (see [1], [2]).

The problem of localizing the solutions of (1) reduces to a search for an interval at the ends of which the continuous function $f$ takes values of opposite sign. When $f$ is a polynomial, the task is less subtle, since theoretical bounds on the roots are known (see [1]) and there are methods for finding all roots with required accuracy without giving good initial approximations to them (see [12]).
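Such a sign-change localization leads directly to the bisection method; a minimal sketch, with an illustrative polynomial and tolerance.

```python
# Bisection: given continuous f with f(a)*f(b) < 0, repeatedly halve the
# interval, keeping the half on which f still changes sign.
def bisect(f, a, b, tol=1e-12):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        c = 0.5 * (a + b)
        fc = f(c)
        if fc == 0.0:
            return c
        if fa * fc < 0:        # sign change in the left half
            b, fb = c, fc
        else:                  # sign change in the right half
            a, fa = c, fc
    return 0.5 * (a + b)

# Illustrative: the real root of x^3 - x - 2 on [1, 2].
root = bisect(lambda x: x**3 - x - 2.0, 1.0, 2.0)
```

Bisection converges only linearly (one binary digit per step), but unlike the faster methods it needs no initial approximation beyond the bracketing interval.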

Iteration methods for solving equations (3) that arise in grid analogues of non-linear boundary value problems for partial differential equations are special cases of methods for solving grid systems (see, for example, [13], [15]–[19]). Probably one of the most intensively applied methods for solving (3) is a modified method of simple iteration, which can be written in the form

$$ B \, \frac{u^{n+1} - u^n}{\tau} + L(u^n) = 0, \quad n = 0, 1, \dots, \tag{11}$$

where (3) is regarded as an operator equation in the $N$-dimensional Euclidean space $H$, $\tau > 0$ is an iteration parameter, and $B \in \mathcal{L}^+(H)$, where $\mathcal{L}^+(H)$ stands for the set of symmetric positive linear operators mapping $H$ into itself. An expedient study of such methods should proceed not in $H$, but in the space $H_B$ with the new scalar product

$$ (u, v)_B = (Bu, v), $$

where $(\cdot, \cdot)$ is the scalar product in $H$.

If the operator $L$ satisfies the conditions of strict monotonicity and Lipschitz continuity in $H_B$:

$$ (L(u) - L(v), u - v)_B \ge \delta_0 \| u - v \|_B^2, \quad \delta_0 > 0, \tag{12}$$

$$ \| L(u) - L(v) \|_B \le \delta_1 \| u - v \|_B, \tag{13}$$

then (3) has a unique solution $u$ and the method (11), for a suitable choice of $\tau$, converges for any $u^0$ with error estimate

$$ \| u^n - u \|_B \le q^n \| u^0 - u \|_B, \quad q < 1, \tag{14}$$

where $q = (1 - 2\tau\delta_0 + \tau^2\delta_1^2)^{1/2}$, minimized for $\tau = \delta_0/\delta_1^2$ (see [13], [15]).
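The modified simple iteration (11) can be sketched as follows; the concrete operator $L$, the choice of $B$ as the diagonal of the linear part, and the value of $\tau$ are illustrative assumptions, not prescriptions of the article.

```python
# Modified simple iteration B (u^{n+1} - u^n)/tau + L(u^n) = 0,
# i.e. u^{n+1} = u^n - tau * B^{-1} L(u^n), with symmetric positive B.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite

def L(u):
    # Illustrative monotone operator: linear part plus a cubic term
    return A @ u + u**3 - np.array([1.0, 2.0])

B_inv = np.diag(1.0 / np.diag(A))        # B = diag(A), also SPD
tau = 0.25                               # illustrative iteration parameter

u = np.zeros(2)
for _ in range(200):
    u = u - tau * (B_inv @ L(u))
# u now approximates the solution of L(u) = 0
```

A well-chosen $B$ plays the role of a preconditioner: the convergence factor $q$ then depends on the monotonicity and Lipschitz constants measured in the $B$-scalar product rather than on the raw conditioning of $L$.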

In the most general version of this theorem it suffices to require that (12) and (13) hold for a solution $u$ and all $v$ in some ball $S$ containing $u$, and that $u^0$ lies in this ball (see [13]). In this case the constants $\delta_0$ and $\delta_1$ may depend on the ball. To verify these conditions it suffices, e.g., to localize $u$ by means of a priori estimates, obtaining a ball $S_0$ containing $u$, and then to take for $S$ a ball containing $S_0$, provided that (12) and (13) hold for any $u$ and $v$ in $S$. The constant $q$ in (14) can be diminished if $L$ is differentiable and for its derivatives $L'(u)$, represented as the sum of a symmetric part $L_s(u)$ and a skew-symmetric part $L_a(u)$, bounds of the type

$$ \delta_0 \| v \|_B^2 \le (L_s(u) v, v)_B \le \delta_1 \| v \|_B^2, \quad \| L_a(u) v \|_B \le \delta_2 \| v \|_B $$

are known; a sharper value of $q$, expressed through $\delta_0$, $\delta_1$, $\delta_2$, can then be used (see [11], [13], [15]). Sometimes, in the discussion of certain types of non-linearity, it is advisable to use instead of (12) and (13) weakened inequalities of a similar form (see [13]).

For the operators $B$ in (11) one can use, for example, splitting difference operators (the method of alternating directions) or factorized difference operators (the alternating-triangle method, the incomplete matrix factorization method), and others. Most attractive from the asymptotic point of view is the use of operators $B$ such that the constants $\delta_0$ and $\delta_1$ do not depend on the dimension $N$ of the space (see [13]) and the operators $B$ are sufficiently simple. Along these lines one succeeds in a number of cases in constructing iteration methods that make it possible to find a solution of (3) with accuracy $\epsilon$ at the expense of altogether $O(N |\ln \epsilon|)$ (or even $O(N)$) arithmetic operations (see [13]), if the computational work in the evaluation of $L(v)$ for a given $v$ can be estimated by $O(N)$ operations.

To verify conditions of the type (12) and (13), in many cases the grid (difference) analogues of the Sobolev imbedding theorems turn out to be very powerful (see [13]). It is important to take into account the specific nature of the non-linearity. For example, when $L(u) = L_1 u + L_2(u)$, where $L_1$ is a positive linear operator and $L_2$ a quadratic non-linear operator having the property of "skew-symmetry" (that is, $(L_2(u), u) = 0$ for all $u$), one often succeeds in obtaining the constant $\delta_1$ in any ball, depending only on its radius; then (11) converges for any $u^0$ (see [13]). In a number of cases one can replace the original problem, on the basis of a priori estimates, by an equivalent one for which the required conditions hold in the whole space.


[1] I.S. Berezin, N.P. Zhidkov, "Computing methods" , Pergamon (1973) (Translated from Russian)
[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[3] J.M. Ortega, W.C. Rheinboldt, "Iterative solution of non-linear equations in several variables" , Acad. Press (1970)
[4] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations" , Wolters-Noordhoff (1972) (Translated from Russian)
[5] S.G. Mikhlin, "The numerical performance of variational methods" , Wolters-Noordhoff (1971) (Translated from Russian)
[6] M.M. Vainberg, "Variational method and method of monotone operators in the theory of nonlinear equations" , Wiley (1973) (Translated from Russian)
[7] H. Gajewski, K. Gröger, K. Zacharias, "Nichtlineare Operatorengleichungen und Operatorendifferentialgleichungen" , Akademie Verlag (1974)
[8] J.-L. Lions, "Quelques méthodes de résolution des problèmes aux limites nonlineaires" , Dunod (1969)
[9] F.P. Vasil'ev, "Lectures on methods for solving extremal problems" , Moscow (1974) (In Russian)
[10] B.N. Pshenichnyi, Yu.M. Danilin, "Numerical methods in extremal problems" , MIR (1978) (Translated from Russian)
[11] R. Glowinski, "Numerical methods for nonlinear variational problems" , Springer (1984)
[12] V.V. Voevodin, "Numerical methods of algebra. Theory and algorithms" , Moscow (1966) (In Russian)
[13] E.G. D'yakonov, "Minimization of computational work. Asymptotically-optimal algorithms" , Moscow (1989) (In Russian)
[14] J.F. Traub, "Iterative methods for the solution of equations" , Prentice-Hall (1964)
[15] A.A. Samarskii, E.S. Nikolaev, "Numerical methods for grid equations" , 1–2 , Birkhäuser (1989) (Translated from Russian)
[16] R. Glowinski, J.-L. Lions, R. Trémolières, "Numerical analysis of variational inequalities" , North-Holland (1981) (Translated from French)
[17] W. Hackbusch, "Multigrid solution of continuation problems" R. Ansorge (ed.) Th. Meis (ed.) W. Törnig (ed.) , Iterative solution of nonlinear systems of equations , Lect. notes in math. , 953 , Springer (1982) pp. 20–44
[18] F. Thomasset, "Implementation of finite element methods for Navier–Stokes equations" , Springer (1981)
[19] W. Hackbusch (ed.) V. Trottenberg (ed.) , Multigrid Methods. Proc. Köln-Porz, 1981 , Lect. notes in math. , 960 , Springer (1982)



This article was adapted from an original article by E.G. D'yakonov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article