# Fibonacci method


## Latest revision as of 19:39, 5 June 2020

A variety of processes of one-dimensional search for an extremum of a function by means of successively narrowing the interval of uncertainty. The sole restriction imposed on the function in question

$$ f ( x) \ \ ( x \in ( a _ {0} , b _ {0} ) \subset \mathbf R ^ {1} ) $$

is the requirement of strict unimodality on the given interval $ ( a _ {0} , b _ {0} ) $.

In the successive narrowing process the values of $ f ( x) $ are computed (or measured) at a number $ n $ of test points that is bounded beforehand. As a result one obtains a sequence of narrowing intervals of uncertainty containing the required extremum:

$$ ( a _ {0} , b _ {0} ) \supset \dots \supset ( a _ {n} , b _ {n} ). $$

In order to narrow the interval of uncertainty for an arbitrary strictly-unimodal function, one needs to know not less than two test values of it. In the Fibonacci method one chooses exactly two test values inside each current interval of uncertainty, symmetrically about the middle of the interval. Next, along with one of the test points one discards the end of the interval with the worst values of $ f ( x) $. One obtains $ ( a _ {i + 1 } , b _ {i + 1 } ) $, where in addition to the remaining old test point one symmetrically constructs a new one. Hence for the sequence of intervals

$$ \Delta _ {i} = \ b _ {i} - a _ {i} $$

follows the recurrence relation

$$ \Delta _ {i} = \ \Delta _ {i + 1 } + \Delta _ {i + 2 } . $$

(It is also assumed above that the overlap condition $ \Delta _ {1} < ( 2/3) \Delta _ {0} $ is satisfied.)

Solving this recurrence under the boundary condition $ \Delta _ {n + 1 } = \Delta _ {n + 2 } $ gives

$$ \Delta _ {i} = \ \frac{F _ {n + 3 - i } }{F _ {n + 2 } } \Delta _ {0} ,\ \ i = 2, \dots, n, $$
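As a quick consistency check (not part of the original article), the closed form satisfies the recurrence term by term, using $ F _ {k + 2 } = F _ {k + 1 } + F _ {k} $:

$$ \Delta _ {i + 1 } + \Delta _ {i + 2 } = \ \frac{F _ {n + 2 - i } + F _ {n + 1 - i } }{F _ {n + 2 } } \Delta _ {0} = \ \frac{F _ {n + 3 - i } }{F _ {n + 2 } } \Delta _ {0} = \Delta _ {i} . $$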

where $ F _ {k} $ are the Fibonacci numbers.

For the extremum point one finds $ x _ { \mathop{\rm opt} } \approx ( a _ {n} + b _ {n} )/2 $.
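The narrowing process described above can be sketched in code. The following is a minimal illustration, not taken from the article: the function name and the minimization convention are assumptions, and the two test points of each stage are placed at the Fibonacci fractions $ F _ {k} / F _ {k + 2 } $ and $ F _ {k + 1 } / F _ {k + 2 } $ of the current interval, so that one point is always reused at the next stage.

```python
def fibonacci_search(f, a, b, n):
    """Minimise a strictly unimodal f on (a, b) with n test points.

    Illustrative sketch; after n evaluations the interval of
    uncertainty has length 2 (b - a) / F_{n+2}.
    """
    # Fibonacci numbers with F[1] = F[2] = 1, computed up to F[n + 2]
    F = [0, 1, 1]
    while len(F) < n + 3:
        F.append(F[-1] + F[-2])
    k = n
    # two test points, symmetric about the midpoint of (a, b)
    x1 = a + F[k] / F[k + 2] * (b - a)
    x2 = a + F[k + 1] / F[k + 2] * (b - a)
    f1, f2 = f(x1), f(x2)
    while k > 2:
        k -= 1
        if f1 < f2:
            # minimum lies in (a, x2): discard the right end;
            # the old x1 becomes the new x2
            b = x2
            x2, f2 = x1, f1
            x1 = a + F[k] / F[k + 2] * (b - a)
            f1 = f(x1)
        else:
            # minimum lies in (x1, b): discard the left end;
            # the old x2 becomes the new x1
            a = x1
            x1, f1 = x2, f2
            x2 = a + F[k + 1] / F[k + 2] * (b - a)
            f2 = f(x2)
    # one last comparison fixes the final interval
    if f1 < f2:
        b = x2
    else:
        a = x1
    return (a + b) / 2
```

For example, `fibonacci_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0, 20)` narrows $ (0, 5) $ to an interval of length $ 10 / F _ {22} \approx 5.6 \cdot 10 ^ {-4} $ around the minimizer $ x = 2 $.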

In the simplest version of the Fibonacci method (when one assumes that the test points and test values $ f ( x) $ are determined absolutely accurately), in order to narrow the original interval of uncertainty from $ \Delta $ to $ \epsilon $ one needs to take a number $ n $ of test points satisfying the inequality $ 2 \Delta / \epsilon \leq F _ {n + 2 } $. Taking into account accuracy corrections to the determination of the values of the function, the above estimate becomes somewhat more complicated.
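The inequality $ 2 \Delta / \epsilon \leq F _ {n + 2 } $ can be inverted numerically to plan the number of test points. A small sketch (the function name is illustrative):

```python
def fibonacci_points_needed(delta, eps):
    """Smallest n with 2 * delta / eps <= F_{n+2}, where F_1 = F_2 = 1."""
    target = 2 * delta / eps
    k, f_curr, f_prev = 2, 1, 1   # f_curr holds F_k
    while f_curr < target:
        k += 1
        f_prev, f_curr = f_curr, f_prev + f_curr
    return k - 2
```

For instance, narrowing $ \Delta = 5 $ down to $ \epsilon = 10 ^ {-3} $ requires $ F _ {n + 2 } \geq 10 ^ {4} $; since $ F _ {21} = 10946 $ is the first Fibonacci number past $ 10 ^ {4} $, this gives $ n = 19 $ test points.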

The limit

$$ \tau = \ \lim\limits _ {n \rightarrow \infty } \ \frac{F _ {n} }{F _ {n + 1 } } = \ \frac{\sqrt 5 - 1 }{2} = \ 0.618033989 \dots $$

exists and gives the reason for introducing the golden section method — a version of the Fibonacci method such that for all $ i $, $ \Delta _ {i + 1 } = \tau \Delta _ {i} $, that is, the test points make a golden section of the current interval (cf. also Golden ratio). An advantage of this method consists in the fact that the number of test points does not have to be planned in advance.
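The golden section method admits an equally short sketch (illustrative code, not from the article): because $ \tau ^ {2} = 1 - \tau $, the surviving test point of each stage again sits at the golden-section position of the shrunken interval, so only one new evaluation of $ f $ is needed per step, and no evaluation count has to be fixed in advance.

```python
import math

def golden_section_search(f, a, b, eps):
    """Minimise a strictly unimodal f on (a, b) to width eps.

    Each step multiplies the interval of uncertainty by tau = 0.618...
    """
    tau = (math.sqrt(5.0) - 1.0) / 2.0
    x1 = b - tau * (b - a)
    x2 = a + tau * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > eps:
        if f1 < f2:
            # minimum lies in (a, x2); the old x1 is reused as the new x2
            b = x2
            x2, f2 = x1, f1
            x1 = b - tau * (b - a)
            f1 = f(x1)
        else:
            # minimum lies in (x1, b); the old x2 is reused as the new x1
            a = x1
            x1, f1 = x2, f2
            x2 = a + tau * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

Unlike the Fibonacci scheme, the loop simply runs until the requested width `eps` is reached, at the cost of a slightly slower shrink factor per step ($ \tau \approx 0.618 $ instead of the Fibonacci ratios, which approach $ \tau $ from below).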

Modifications of the Fibonacci method have been developed for functions of non-compact support, for shortening the calculations when $ f ( x) $ is equal at two test points, for increasing the stability of the calculation, etc.

The Fibonacci method is significantly more efficient than that of dichotomy (see Dichotomy method). But for optimizing differentiable functions other methods are preferable (see Descent, method of; Maximization and minimization of functions).

#### References

[1] J. Kiefer, "Sequential minimax search for a maximum", *Proc. Amer. Math. Soc.*, **4** (1953) pp. 502–506

[2] D.J. Wilde, "Optimum seeking methods", Prentice-Hall (1964)

[3] F.P. Vasil'ev, "Computational methods for solving extremum problems", Moscow (1980) (In Russian)

#### Comments

#### References

[a1] M. Avriel, "Nonlinear programming", Prentice-Hall (1977)

**How to Cite This Entry:**

Fibonacci method.

*Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Fibonacci_method&oldid=13380