A method for calculating zeros of continuous functions. Suppose that a zero of a continuous function $f$ is contained in an interval $[a,b]$; let $x_0$, $x_1$ be different points of this interval. The iteration formula of the secant method is

$$x_{n+1} = x_n - \frac{f(x_n)\,(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}, \qquad n = 1, 2, \ldots \tag{1}$$
If the sequence $\{x_n\}$ converges, then it must converge to a zero of $f$. If $f$ has a continuous derivative on $[a,b]$, then the local convergence of the secant method to a simple root is super-linear. If the conditions on the smoothness of $f$ are strengthened, one can indicate the exact order of (local) convergence $t$. Namely, for $f \in C^2[a,b]$ and $x^*$ such that $f(x^*) = 0$, $f'(x^*) \neq 0$, $f''(x^*) \neq 0$, one has

$$\lim_{n \to \infty} \frac{|x_{n+1} - x^*|}{|x_n - x^*|^{t}} = \left| \frac{f''(x^*)}{2 f'(x^*)} \right|^{t-1}.$$

Here $t = (1+\sqrt{5})/2 \approx 1.618$ (the golden ratio), and $x_n \to x^*$ as $n \to \infty$.
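Formula (1) is straightforward to implement. The following Python sketch (the function name and stopping test are ours, for illustration only) applies it to $f(x) = x^2 - 2$, whose simple root in $[1,2]$ is $\sqrt{2}$:

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration (1): x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                 # secant line is horizontal; cannot divide
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:       # step size below tolerance: accept
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)   # root of x^2 - 2, ≈ 1.41421356
```

Each step costs a single new function evaluation, which is the point of the super-linear convergence remark below.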
Super-linear convergence of the secant method for smooth functions is very important, since there is no need to calculate derivatives: at each step only one new value of the function is computed. For comparison, in Newton's method (cf. Newton method), whose order of (local) convergence is 2, at each step one has to calculate the values of both the function and its derivative; as a rule this is no less laborious than calculating two values of the function.
Since the convergence of the secant method depends on the smoothness of the function and on the choice of the initial approximations, in standard computer programs for computing zeros of continuous functions this method is combined with some method for which convergence is guaranteed, for example, the method of bisecting an interval. At each step of such a combined method the root is located in an interval at the ends of which the function changes sign (it is assumed that this condition also holds for the initial interval $[x_0, x_1]$). By a certain test the next approximation is chosen either by (1) or by the formula for bisecting the interval. If $f$ is smooth, then from some $n$ onwards the iterations proceed automatically by the secant method. It is possible to have an even more complicated combination of methods, for example Brent's algorithm (see [1], [2]), in which, as well as the above-mentioned methods, the method of inverse quadratic interpolation is used. Sometimes by the secant method one means the iteration formula

$$x_{n+1} = x_n - \frac{f(x_n)\,(x_n - \tilde{x})}{f(x_n) - f(\tilde{x})}, \tag{2}$$

in which $\tilde{x}$ is the most recent point for which $f(\tilde{x})\, f(x_n) < 0$, so that the root remains bracketed.
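A simplified sketch of such a combined method is given below in Python (this is an illustration of the bisection safeguard only, not Brent's full algorithm; the function name and acceptance test are ours). A secant step from the last two iterates is tried first and is replaced by a bisection step whenever it falls outside the current sign-change bracket:

```python
def bisect_secant(f, a, b, tol=1e-12, max_iter=100):
    """Hybrid root finder: secant steps (1), safeguarded by bisection of [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must change sign on [a, b]")
    x_prev, f_prev = a, fa       # last two iterates for the secant formula
    x, fx = b, fb
    for _ in range(max_iter):
        # tentative secant step from the last two iterates
        if fx != f_prev:
            cand = x - fx * (x - x_prev) / (fx - f_prev)
        else:
            cand = 0.5 * (a + b)
        # fall back to bisection if the step leaves the sign-change bracket
        if not (a < cand < b):
            cand = 0.5 * (a + b)
        f_cand = f(cand)
        if f_cand == 0.0 or abs(cand - x) < tol:
            return cand
        # shrink the bracket so that f still changes sign on [a, b]
        if fa * f_cand < 0:
            b, fb = cand, f_cand
        else:
            a, fa = cand, f_cand
        x_prev, f_prev = x, fx
        x, fx = cand, f_cand
    return x

root = bisect_secant(lambda x: x**3 - x - 2.0, 1.0, 2.0)   # real root of x^3 - x - 2
```

Once the iterates are close to a simple root of a smooth $f$, the secant candidate always lies inside the shrinking bracket, so the iterations proceed by (1) alone, as stated above.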
Another terminology for the method (2) is the method of false position, or regula falsi. This method converges only linearly.
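For comparison with (1), here is a Python sketch of formula (2) (names and tolerances are ours); the new point is always computed from two points with opposite function signs, which keeps the root bracketed but typically leaves one endpoint fixed, hence the linear convergence:

```python
import math

def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    """Method of false position, formula (2): secant step through two points
    of opposite sign; the sign change on [a, b] is preserved at every step."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must change sign on [a, b]")
    x = a
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx        # root lies in [a, x]
        else:
            a, fa = x, fx        # root lies in [x, b]
    return x

root = regula_falsi(lambda x: math.cos(x) - x, 0.0, 1.0)   # ≈ 0.739085
```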
In generalizations of the secant method to the case of a system of equations it is possible to interpret (1) in two ways. It can be assumed that (1) is obtained from the formula of Newton's method by a discrete approximation of the derivative. Another possibility is to assume that a linear interpolation of $f$ at the points $x_{n-1}$ and $x_n$ has been carried out, and that $x_{n+1}$ is then taken to be the zero of the linear interpolant. Both interpretations enable one to obtain a large number of multi-dimensional analogues of the secant method; some of these (but certainly not all) have the same order of (local) convergence (see [3]).
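One well-known multi-dimensional analogue is Broyden's method, sketched below in Python with NumPy (the example system and the initial Jacobian estimate are our illustrative choices). Instead of a divided difference of a scalar function, an approximate Jacobian $J$ is maintained and corrected by a rank-one update so that it satisfies the secant condition $J_{\text{new}}(x_{\text{new}} - x) = F(x_{\text{new}}) - F(x)$:

```python
import numpy as np

def broyden(F, x0, J0, tol=1e-12, max_iter=50):
    """Broyden's method: quasi-Newton steps with a secant-condition
    rank-one update of the approximate Jacobian J."""
    x = np.asarray(x0, dtype=float)
    J = np.asarray(J0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -Fx)      # quasi-Newton step: J dx = -F(x)
        x_new = x + dx
        F_new = F(x_new)
        if np.linalg.norm(dx) < tol:
            return x_new
        # rank-one update enforcing J_new dx = F_new - Fx
        J = J + np.outer(F_new - Fx - J @ dx, dx) / (dx @ dx)
        x, Fx = x_new, F_new
    return x

# Hypothetical example: solve x^2 + y^2 = 2, x - y = 0 near the root (1, 1);
# J0 is the Jacobian at the initial guess (any reasonable estimate works).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
root = broyden(F, [1.5, 0.8], np.array([[3.0, 1.6], [1.0, -1.0]]))
```

As in the one-dimensional case, each step needs only one new evaluation of $F$ and no derivatives.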
|[1]||R.P. Brent, "Algorithms for minimization without derivatives" , Prentice-Hall (1973)|
|[2]||G.E. Forsythe, M.A. Malcolm, C.B. Moler, "Computer methods for mathematical computations" , Prentice-Hall (1977)|
|[3]||J.M. Ortega, W.C. Rheinboldt, "Iterative solution of non-linear equations in several variables" , Acad. Press (1970)|
|[a1]||F.B. Hildebrand, "Introduction to numerical analysis" , Dover, reprint (1987) pp. Chapt. 10|
|[a2]||W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, "Numerical recipes" , Cambridge Univ. Press (1986) pp. §9.2|
Secant method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Secant_method&oldid=16449