Secant method
A method for calculating zeros of continuous functions. Suppose that a zero $\alpha$ of a continuous function $f(x)$ is contained in $[a, b]$, and let $x_0$, $x_1$ be distinct points of this interval. The iteration formula of the secant method is
$$ \tag{1} x_{k + 1} = x_k - \frac{f(x_k)\,(x_{k - 1} - x_k)}{f(x_{k - 1}) - f(x_k)}, \qquad k = 1, 2, \dots $$
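For illustration, here is a minimal Python sketch of iteration (1); the function name, tolerance and stopping test are assumptions of this sketch, not part of the article.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Find a zero of f starting from two distinct points x0, x1, using (1)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:  # the secant line is horizontal; the step is undefined
            raise ZeroDivisionError("f(x_{k-1}) == f(x_k)")
        # formula (1): x_{k+1} = x_k - f(x_k)(x_{k-1} - x_k) / (f(x_{k-1}) - f(x_k))
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Example: the positive zero of x^2 - 2 is sqrt(2) = 1.41421356...
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```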
If the sequence $\{x_k\}$ converges, then it must converge to a zero of $f$. If $f$ has a continuous derivative on $[a, b]$, then the local convergence of the secant method to a simple root is super-linear. If the conditions on the smoothness of $f$ are strengthened, one can indicate the exact order of (local) convergence [1]. Namely, for $f \in C^2[a, b]$ and $\alpha$ such that $f(\alpha) = 0$, $f'(\alpha) \neq 0$, $f''(\alpha) \neq 0$, one has
$$ \lim_{k \rightarrow \infty} \frac{|x_{k + 1} - \alpha|}{|x_k - \alpha|^p} = A. $$
Here $p = (\sqrt{5} + 1)/2 \approx 1.618$ and $A = |f''(\alpha)/2f'(\alpha)|^{p - 1}$.
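The order $p$ can be checked empirically: since $|x_{k+1} - \alpha| \approx A\,|x_k - \alpha|^p$, the ratios $\ln|x_{k+1} - \alpha| / \ln|x_k - \alpha|$ tend to $p$ as the errors become small. A sketch of such a check (the example function and starting points are assumptions of this illustration):

```python
import math

f = lambda x: x * x - 2.0
alpha = math.sqrt(2.0)          # known zero, so the errors are computable

x0, x1 = 1.0, 2.0
errs = [abs(x0 - alpha), abs(x1 - alpha)]
for _ in range(8):
    if f(x0) == f(x1):          # iterates have coalesced; stop
        break
    x0, x1 = x1, x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
    errs.append(abs(x1 - alpha))

for e0, e1 in zip(errs, errs[1:]):
    if 0 < e1 < e0 < 1:
        print(math.log(e1) / math.log(e0))   # approaches (1 + sqrt(5))/2 ≈ 1.618
```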
Super-linear convergence of the secant method for smooth functions is very important, since no derivatives need be calculated: each step requires only one new value of the function. For comparison, in Newton's method (cf. Newton method), whose order of (local) convergence is 2, each step requires the values of both the function and its derivative; as a rule this is no less laborious than calculating two values of the function.
Since the convergence of the secant method depends on the smoothness of the function and on the choice of the initial approximations, in standard computer programs for computing zeros of continuous functions this method is combined with a method whose convergence is guaranteed, for example bisection. At each step of such a combined method a root $\alpha$ is located in an interval $[a_k, b_k]$ at whose end points the function changes sign (it is assumed that this condition also holds for the initial interval $[a, b]$). By a certain test the next approximation is chosen either by (1) or by bisection. If $f$ is smooth, then from some $k_0$ onwards the iterations proceed automatically by the secant method. Even more elaborate combinations are possible, for example the ZEROIN algorithm (see [2]), in which, besides the above-mentioned methods, inverse quadratic interpolation is used.
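A minimal sketch of such a combined method follows; the acceptance test (a secant step is kept only if it lands strictly inside the current bracket) and all names are assumptions of this illustration, not the actual ZEROIN logic.

```python
def hybrid_root(f, a, b, tol=1e-12, max_iter=100):
    """Keep a sign-change bracket [a_k, b_k]; try a secant step, else bisect."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must change sign on [a, b]")
    for _ in range(max_iter):
        # tentative secant step through (a, f(a)) and (b, f(b))
        x = b - fb * (a - b) / (fa - fb) if fa != fb else 0.5 * (a + b)
        if not (min(a, b) < x < max(a, b)):
            x = 0.5 * (a + b)            # bisection fallback
        fx = f(x)
        # shrink the bracket so that the sign change is preserved
        if fa * fx <= 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        if fx == 0 or abs(b - a) < tol:
            return x
    return 0.5 * (a + b)
```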
Sometimes by the secant method one means the iteration formula

$$ \tag{2} x_{k + 1} = x_k - \frac{f(x_k)\,(x_k - x_0)}{f(x_k) - f(x_0)}, \qquad k = 1, 2, \dots $$
Method (2) is also known as the method of false position, or regula falsi; it converges only linearly.
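For comparison, a minimal sketch of iteration (2); every chord passes through the fixed initial point $(x_0, f(x_0))$, which is what degrades the convergence to linear. The names are illustrative.

```python
def regula_falsi_like(f, x0, x1, n):
    """Iterate (2) for n steps; the point (x0, f(x0)) is never updated."""
    f0 = f(x0)
    xk, fk = x1, f(x1)
    for _ in range(n):
        # formula (2): x_{k+1} = x_k - f(x_k)(x_k - x_0) / (f(x_k) - f(x_0))
        xk = xk - fk * (xk - x0) / (fk - f0)
        fk = f(xk)
    return xk

# Example: regula_falsi_like(lambda x: x * x - 2.0, 1.0, 2.0, 25) ≈ 1.41421356
```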
In generalizations of the secant method to systems of equations, (1) can be viewed in two ways. One can regard (1) as obtained from the formula of Newton's method by a discrete approximation of the derivative. Alternatively, one can regard $x_{k + 1}$ as the zero of the linear interpolant of $f(x)$ through the points $(x_{k - 1}, f(x_{k - 1}))$ and $(x_k, f(x_k))$. Both interpretations lead to a large number of multi-dimensional analogues of the secant method; some of these (but certainly not all) have the same order of (local) convergence $p = (1 + \sqrt{5})/2$ (see [3]).
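A sketch of the first interpretation in the multi-dimensional case: Newton's method with the Jacobian replaced column-wise by forward differences. The step size $h$, the names and the use of numpy are assumptions of this illustration, and the approximate Jacobian is assumed non-singular.

```python
import numpy as np

def discretized_newton(F, x, h=1e-7, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 for F: R^n -> R^n with a finite-difference Jacobian."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        n = x.size
        J = np.empty((n, n))
        for j in range(n):           # j-th column: (F(x + h e_j) - F(x)) / h
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - Fx) / h
        x = x - np.linalg.solve(J, Fx)
    return x

# Example: F(x, y) = (x^2 + y^2 - 2, x - y) has the root (1, 1):
# discretized_newton(lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]]), [2.0, 0.5])
```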
References
[1] R.P. Brent, "Algorithms for minimization without derivatives", Prentice-Hall (1973)
[2] G.E. Forsythe, M.A. Malcolm, C.B. Moler, "Computer methods for mathematical computations", Prentice-Hall (1977)
[3] J.M. Ortega, W.C. Rheinboldt, "Iterative solution of non-linear equations in several variables", Acad. Press (1970)
Secant method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Secant_method&oldid=48637