Fibonacci method

From Encyclopedia of Mathematics
Latest revision as of 19:39, 5 June 2020


A family of one-dimensional search processes for an extremum of a function, based on successive narrowing of the interval of uncertainty. The sole restriction imposed on the function in question,

$$ f(x) \quad ( x \in (a_0, b_0) \subset \mathbf{R}^1 ), $$

is the requirement of strict unimodality on the given interval $(a_0, b_0)$.

In the successive narrowing process the values of $f(x)$ are computed (or measured) at a number $n$ of test points that is bounded beforehand. As a result one obtains a sequence of narrowing intervals of uncertainty containing the required extremum:

$$ (a_0, b_0) \supset \dots \supset (a_n, b_n). $$

In order to narrow the interval of uncertainty for an arbitrary strictly-unimodal function, one needs to know at least two test values of it. In the Fibonacci method one chooses exactly two test points inside each current interval of uncertainty, placed symmetrically about the middle of the interval. Next, along with one of the test points one discards the end of the interval with the worse value of $f(x)$. One obtains $(a_{i+1}, b_{i+1})$, in which, in addition to the remaining old test point, one symmetrically constructs a new one. Hence the interval lengths

$$ \Delta_i = b_i - a_i $$

satisfy the recurrence relation

$$ \Delta_i = \Delta_{i+1} + \Delta_{i+2}. $$

(It is also assumed above that the overlap condition $\Delta_1 < (2/3) \Delta_0$ is satisfied.)

Solving this recurrence under the condition $\Delta_{n+1} = \Delta_{n+2}$ gives

$$ \Delta_i = \frac{F_{n+3-i}}{F_{n+2}} \Delta_0, \qquad i = 2, \dots, n, $$

where $F_k$ are the Fibonacci numbers.
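A quick numerical sanity check (our own, using 1-indexed Fibonacci numbers $F_1 = F_2 = 1$) confirms that the interval lengths given by this formula do satisfy the recurrence:

```python
# 1-indexed Fibonacci numbers: F(1) = F(2) = 1, F(3) = 2, ...
def F(k):
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

n, delta0 = 10, 1.0
# Delta_i = F_{n+3-i} / F_{n+2} * Delta_0 for i = 2, ..., n
lengths = {i: F(n + 3 - i) / F(n + 2) * delta0 for i in range(2, n + 1)}
# each length is the sum of the next two, as the recurrence requires
for i in range(2, n - 1):
    assert abs(lengths[i] - (lengths[i + 1] + lengths[i + 2])) < 1e-12
```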

For the extremum point one finds $x_{\mathrm{opt}} \approx (a_n + b_n)/2$.
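The narrowing scheme described above can be sketched in code. The following is a minimal illustration, not part of the article: it assumes minimization, the function name is ours, and the standard small perturbation that squeezes out the final, degenerate step is omitted (so about $n - 1$ evaluations are performed).

```python
def fibonacci_search(f, a, b, n):
    """Locate a minimum of a strictly unimodal f on (a, b)."""
    # F[k] is the (k+1)-st Fibonacci number: F[0] = F[1] = 1
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    # two test points, symmetric about the midpoint of (a, b)
    x1 = a + F[n - 2] / F[n] * (b - a)
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for m in range(n - 1, 2, -1):
        if f1 <= f2:
            # the minimum cannot lie in (x2, b): discard the right end
            b, x2, f2 = x2, x1, f1
            x1 = a + F[m - 2] / F[m] * (b - a)
            f1 = f(x1)
        else:
            # the minimum cannot lie in (a, x1): discard the left end
            a, x1, f1 = x1, x2, f2
            x2 = a + F[m - 1] / F[m] * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

Note how each step reuses one of the two previous test values, so every narrowing after the first costs only a single new evaluation of $f$.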

In the simplest version of the Fibonacci method (in which the test points and the test values $f(x)$ are assumed to be determined absolutely accurately), in order to narrow the original interval of uncertainty from $\Delta$ to $\epsilon$ one needs to take a number $n$ of test points satisfying the inequality $2\Delta/\epsilon \leq F_{n+2}$. When corrections for the accuracy with which the function values are determined are taken into account, this estimate becomes somewhat more complicated.

The limit

$$ \tau = \lim_{n \rightarrow \infty} \frac{F_n}{F_{n+1}} = \frac{\sqrt{5} - 1}{2} = 0.618033989\dots $$

exists and gives the reason for introducing the golden section method, a version of the Fibonacci method in which $\Delta_{i+1} = \tau \Delta_i$ for all $i$; that is, the test points make a golden section of the current interval (cf. also Golden ratio). An advantage of this method is that the number of test points does not have to be planned in advance.
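A sketch of the golden section method (again assuming minimization; the function name and the stopping rule based on a target width are our own choices):

```python
def golden_section_search(f, a, b, eps):
    """Minimize a strictly unimodal f on (a, b) to within width eps."""
    # tau = (sqrt(5) - 1) / 2 satisfies tau**2 = 1 - tau, so one of the
    # two interior points can always be reused at the next step
    tau = (5 ** 0.5 - 1) / 2
    x1 = b - tau * (b - a)   # lower interior point
    x2 = a + tau * (b - a)   # upper interior point
    f1, f2 = f(x1), f(x2)
    while b - a > eps:
        if f1 <= f2:
            # the minimum cannot lie in (x2, b): keep (a, x2)
            b, x2, f2 = x2, x1, f1
            x1 = b - tau * (b - a)
            f1 = f(x1)
        else:
            # the minimum cannot lie in (a, x1): keep (x1, b)
            a, x1, f1 = x1, x2, f2
            x2 = a + tau * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

Unlike the Fibonacci scheme, no count $n$ appears anywhere: the loop simply runs until the interval is narrow enough.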

Modifications of the Fibonacci method have been developed for functions of non-compact support, for shortening the calculations when $f(x)$ takes equal values at the two test points, for increasing the stability of the calculation, etc.

The Fibonacci method is significantly more efficient than the dichotomy method (see Dichotomy method). But for optimizing differentiable functions other methods are preferable (see Descent, method of; Maximization and minimization of functions).
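The efficiency gap can be made concrete by a rough, back-of-the-envelope comparison (our own figures, not from the article): in the dichotomy method a pair of evaluations close to the midpoint roughly halves the interval, while in the Fibonacci method each single new evaluation shrinks it asymptotically by the factor $\tau \approx 0.618$.

```python
import math

tau = (math.sqrt(5) - 1) / 2   # Fibonacci: reduction factor per evaluation
dich = 2 ** -0.5               # dichotomy: ~0.707 per evaluation
                               # (two evaluations halve the interval)

def evals_needed(factor_per_eval, reduction):
    # evaluations needed to shrink the interval of uncertainty by `reduction`
    return math.ceil(math.log(reduction) / math.log(factor_per_eval))

# to shrink by 1e-6: dichotomy needs about 40 evaluations, Fibonacci about 29
```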

References

[1] J. Kiefer, "Sequential minimax search for a maximum", Proc. Amer. Math. Soc., 4 (1953) pp. 502–506
[2] D.J. Wilde, "Optimum seeking methods", Prentice-Hall (1964)
[3] F.P. Vasil'ev, "Computational methods for solving extremum problems", Moscow (1980) (In Russian)

Comments

References

[a1] M. Avriel, "Nonlinear programming", Prentice-Hall (1977)
How to Cite This Entry:
Fibonacci method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Fibonacci_method&oldid=13380
This article was adapted from an original article by Yu.P. Ivanilov, V.V. Okhrimenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article