
Secant method

From Encyclopedia of Mathematics

Latest revision as of 08:12, 6 June 2020


A method for calculating zeros of continuous functions. Suppose that a zero $\alpha$ of a continuous function $f(x)$ is contained in $[a, b]$; let $x_0$, $x_1$ be distinct points of this interval. The iteration formula of the secant method is

$$ \tag{1} x_{k + 1} = x_k - \frac{f(x_k)\,(x_{k - 1} - x_k)}{f(x_{k - 1}) - f(x_k)}, \qquad k = 1, 2, \dots . $$
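Formula (1) can be sketched directly in code. The function name, stopping tolerance and iteration cap below are illustrative choices, not part of the article:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Find a zero of f by the secant iteration (1), starting from x0, x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # the secant is horizontal; the step is undefined
            break
        # x_{k+1} = x_k - f(x_k)(x_{k-1} - x_k) / (f(x_{k-1}) - f(x_k))
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: the zero of f(x) = x^2 - 2 in [1, 2] is sqrt(2).
root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Note that each iteration needs only one genuinely new function value; the sketch above re-evaluates both for clarity.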

If the sequence $\{x_k\}$ converges, then it must converge to a zero of $f$. If $f$ has a continuous derivative on $[a, b]$, then the local convergence of the secant method to a simple root is super-linear. If the conditions on the smoothness of $f$ are strengthened, one can indicate the exact order of (local) convergence [1]. Namely, for $f \in C^2[a, b]$ and $\alpha$ such that $f(\alpha) = 0$, $f'(\alpha) \neq 0$, $f''(\alpha) \neq 0$, one has

$$ \lim_{k \rightarrow \infty} \frac{| x_{k + 1} - \alpha |}{| x_k - \alpha |^p} = A. $$

Here $p = (\sqrt 5 + 1)/2 \approx 1.618$ and $A = | f''(\alpha)/2f'(\alpha) |^{p - 1}$.
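The order $p$ stems from the two-term error recurrence $e_{k+1} \approx [f''(\alpha)/2f'(\alpha)]\, e_k e_{k-1}$ satisfied by the secant errors $e_k = |x_k - \alpha|$, and this recurrence can be observed numerically. The check below, for $f(x) = x^2 - 2$, is an illustrative sketch, not part of the original article:

```python
import math

def secant_iterates(f, x0, x1, n):
    """Return x_0, x_1 and the next n secant iterates of formula (1)."""
    xs = [x0, x1]
    for _ in range(n):
        f0, f1 = f(xs[-2]), f(xs[-1])
        xs.append(xs[-1] - f1 * (xs[-2] - xs[-1]) / (f0 - f1))
    return xs

f = lambda x: x * x - 2
alpha = math.sqrt(2.0)
xs = secant_iterates(f, 1.0, 2.0, 5)        # x_0, ..., x_6
e = [abs(x - alpha) for x in xs]            # errors e_0, ..., e_6

# e_{k+1} / (e_k e_{k-1}) should approach |f''(alpha) / (2 f'(alpha))|,
# which for this f equals 1 / (2 sqrt(2)) ~ 0.3536.
ratio = e[6] / (e[5] * e[4])
C = 1.0 / (2.0 * math.sqrt(2.0))
```

For this quadratic $f$ the recurrence is exact (up to the position of the mean-value point), so the observed ratio sits very close to the limit constant.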

Super-linear convergence of the secant method for smooth functions is very important, since no derivatives need to be computed: each step requires only one new value of the function. For comparison, in Newton's method (cf. Newton method), whose order of (local) convergence is 2, each step requires the values of both the function and its derivative; as a rule this is no less laborious than calculating two values of the function.

Since the convergence of the secant method depends on the smoothness of the function and on the choice of the initial approximation, in standard computer programs for computing zeros of continuous functions this method is combined with a method whose convergence is guaranteed, for example, bisection of an interval. At each step of such a combined method a root $\alpha$ is located in an interval $[a_k, b_k]$ at whose end points the function changes sign (it is assumed that this condition also holds for the initial interval $[a, b]$). By a certain test the next approximation is chosen either by (1) or by the bisection formula. If $f$ is smooth, then from some $k_0$ onwards the iterations proceed automatically by the secant method. Even more elaborate combinations of methods are possible; for example, the ZEROIN algorithm (see [2]) also uses inverse quadratic interpolation in addition to the above-mentioned methods. Sometimes by the secant method one means the iteration formula:

$$ \tag{2} x_{k + 1} = x_k - \frac{f(x_k)\,(x_k - x_0)}{f(x_k) - f(x_0)}, \qquad k = 1, 2, \dots . $$
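Taken literally, formula (2) keeps the initial point $x_0$ fixed throughout and updates only $x_k$. A minimal sketch (function name and tolerance are illustrative):

```python
def false_position(f, x0, x1, tol=1e-12, max_iter=200):
    """Iterate formula (2): x0 stays fixed, only the current point moves."""
    f0 = f(x0)
    x = x1
    for _ in range(max_iter):
        fx = f(x)
        if fx == f0:  # step undefined; stop at the current point
            break
        # x_{k+1} = x_k - f(x_k)(x_k - x_0) / (f(x_k) - f(x_0))
        x_new = x - fx * (x - x0) / (fx - f0)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: zero of f(x) = x^2 - 2, with the fixed point x0 = 2.
root = false_position(lambda x: x * x - 2, 2.0, 1.0)
```

Because one interpolation point never moves, the steps shrink only geometrically, in line with the linear convergence noted below for this variant.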

Method (2) is also known as the method of false position, or regula falsi. It converges only linearly.
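The combined bisection/secant scheme described earlier can be sketched as follows. The acceptance test used here (keep the secant point only if it stays strictly inside the current sign-change bracket) is one simple choice, cruder than the tests used in production ZEROIN-type codes; all names are illustrative:

```python
def hybrid_root(f, a, b, tol=1e-12, max_iter=200):
    """Secant steps (1) safeguarded by a sign-change bracket [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_prev, f_prev, x, fx = a, fa, b, fb
    for _ in range(max_iter):
        # Propose a secant step from the two most recent iterates.
        if fx != f_prev:
            s = x - fx * (x_prev - x) / (f_prev - fx)
        else:
            s = None
        # Fall back to bisection when the secant step is undefined
        # or leaves the current bracket: bisection always makes progress.
        if s is None or not (min(a, b) < s < max(a, b)):
            s = 0.5 * (a + b)
        fs = f(s)
        if fs == 0:
            return s
        x_prev, f_prev, x, fx = x, fx, s, fs
        # Shrink the bracket, keeping a sign change across it.
        if fa * fs < 0:
            b, fb = s, fs
        else:
            a, fa = s, fs
        if abs(b - a) < tol:
            break
    # Return the bracket end point with the smaller residual.
    return a if abs(fa) <= abs(fb) else b

root = hybrid_root(lambda x: x ** 3 - 2 * x - 5, 2.0, 3.0)
```

For smooth $f$ the secant proposals are accepted from some step onwards, so the fast local convergence is retained while the bracket guarantees convergence for any continuous $f$ with a sign change.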

In generalizations of the secant method to systems of equations, (1) can be viewed in two ways. One can regard (1) as obtained from the formula of Newton's method by a discrete approximation of the derivative. Alternatively, one can regard $f(x)$ as linearly interpolated at the points $(x_{k - 1}, f(x_{k - 1}))$ and $(x_k, f(x_k))$, with $x_{k + 1}$ taken to be the zero of the linear interpolant. Both interpretations lead to a large number of multi-dimensional analogues of the secant method; some of these (but certainly not all) have the same order of (local) convergence $p = (1 + \sqrt 5)/2$ (see [3]).
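The first interpretation is easy to verify: replacing $f'(x_k)$ in Newton's step by the divided difference through the two latest points reproduces (1) exactly. A small illustrative check (the numbers chosen are arbitrary):

```python
def newton_like_step(f, x_prev, x):
    """Newton step with f'(x) replaced by a divided difference."""
    slope = (f(x) - f(x_prev)) / (x - x_prev)  # discrete derivative
    return x - f(x) / slope

def secant_step(f, x_prev, x):
    """One step of formula (1)."""
    return x - f(x) * (x_prev - x) / (f(x_prev) - f(x))

f = lambda x: x ** 3 - 2.0 * x - 5.0
a = newton_like_step(f, 2.0, 3.0)
b = secant_step(f, 2.0, 3.0)  # the two steps agree up to rounding
```

In several variables the divided difference is replaced by a matrix built from several recent iterates, which is where the many inequivalent multi-dimensional analogues come from.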

References

[1] R.P. Brent, "Algorithms for minimization without derivatives" , Prentice-Hall (1973)
[2] G.E. Forsythe, M.A. Malcolm, C.B. Moler, "Computer methods for mathematical computations" , Prentice-Hall (1977)
[3] J.M. Ortega, W.C. Rheinboldt, "Iterative solution of non-linear equations in several variables" , Acad. Press (1970)

Comments

References

[a1] F.B. Hildebrand, "Introduction to numerical analysis" , Dover, reprint (1987) pp. Chapt. 10
[a2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, "Numerical recipes" , Cambridge Univ. Press (1986) pp. §9.2
How to Cite This Entry:
Secant method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Secant_method&oldid=16449
This article was adapted from an original article by Kh.D. Ikramov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article