Steepest descent, method of

From Encyclopedia of Mathematics

2020 Mathematics Subject Classification: Primary: 65-XX [MSN][ZBL]


The method of steepest descent is a special instance of the method of descent (cf. Descent, method of), in which the direction of descent $g^k$ is chosen opposite to $\mathrm{grad}\, f(x^k)$. The formulas of the method of steepest descent are

$$x^{k+1} = x^k - \def\a{\alpha} \a_k f'(x^k),\quad k = 0,1,\dots ,$$ where the parameters $\{\a_k\}$ are determined by the condition that the function $f$ has maximum decrease at each step. If $f$ is twice continuously-differentiable and its matrix of second derivatives $f''$ satisfies the inequality

$$m\|y\|^2 \le (f''(x)y,y)\le M \|y\|^2$$ for any $x,y$, with constants $M\ge m > 0$, then (see [KaAk], [PsDa]) the sequence $\{x^k\}$ converges to a solution $x^*$ of the problem of minimizing $f$, the convergence rate being that of a geometric progression with quotient $q<1$.
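For illustration, here is a minimal numerical sketch of the iteration above with an exact line search along $-\mathrm{grad}\,f(x^k)$. The function names (`steepest_descent`, `grad_f`) and the quadratic test problem are illustrative choices, not part of the article, and `scipy.optimize.minimize_scalar` is used only as a convenient one-dimensional minimizer.

```python
# Minimal sketch: steepest descent with an exact line search (illustrative names).
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad_f, x0, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        # alpha_k is chosen so that f decreases as much as possible along -g
        alpha = minimize_scalar(lambda a: f(x - a * g)).x
        x = x - alpha * g
    return x

# Test problem: f(x) = (1/2)(Ax, x) - (x, b) with A symmetric positive definite,
# so m||y||^2 <= (f''(x)y, y) <= M||y||^2 holds with m, M the extreme eigenvalues of A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ (A @ x) - x @ b
grad_f = lambda x: A @ x - b
print(steepest_descent(f, grad_f, np.zeros(2)))   # approaches the minimizer A^{-1} b
```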

The method of steepest descent has been widely applied to the solution of systems of linear algebraic equations $Ax = f$ with a Hermitian positive-definite matrix $A$. In the real symmetric case, the problem of solving this system is equivalent to finding the vector $x^*$ in an $n$-dimensional vector space that minimizes the functional

$$F(x)=\frac{1}{2} (Ax,x) -(x,f).\tag{$*$}$$ Applied to (*), the method of steepest descent takes the form

$$x^{k+1} = x^k - \a_k(Ax^k -f), \quad k = 0,1,\dots ,$$ where the value of $\a_k$ is determined by minimization of the functional (*), according to the formula

$$\a_k = \frac{(\xi^k,\xi^k)}{(A\xi^k,\xi^k)},\quad \xi^k = Ax^k - f.$$ If the spectrum of $A$ lies in an interval $[m,M]$, $M\ge m > 0$, on the real axis, then the sequence $\{x^k\}$ converges to the solution $x^*$ at the rate of a geometric progression with quotient $q=(M-m)/(M+m)$.
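A minimal numpy sketch of this iteration for a symmetric positive-definite system follows; the matrix, right-hand side and function name are illustrative, not taken from the article.

```python
# Steepest descent for A x = f with the exact step alpha_k = (xi,xi)/(A xi, xi).
import numpy as np

def steepest_descent_linear(A, f, x0=None, tol=1e-10, max_iter=10000):
    x = np.zeros(len(f)) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        xi = A @ x - f                      # residual xi^k = A x^k - f
        denom = xi @ (A @ xi)
        if np.linalg.norm(xi) < tol or denom == 0.0:
            break
        alpha = (xi @ xi) / denom           # alpha_k = (xi^k, xi^k) / (A xi^k, xi^k)
        x = x - alpha * xi                  # x^{k+1} = x^k - alpha_k xi^k
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])      # symmetric positive definite
f = np.array([1.0, 2.0])
print(steepest_descent_linear(A, f))        # close to np.linalg.solve(A, f)
```

For this example the eigenvalues of $A$ play the role of $m$ and $M$, so the quotient $q=(M-m)/(M+m)$ above governs the contraction per step.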

The method of steepest descent can be applied to solve an operator equation $Au=f$ with a self-adjoint positive-definite bounded operator $A$. If $A$ does not satisfy these conditions, the problem can be symmetrized, i.e. reduced to the problem

$$A^* Au = A^*f,$$ and then one can apply the method of steepest descent (see also Minimal discrepancy method).
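A short self-contained sketch of this symmetrization for a real non-symmetric matrix, applying the same iteration to the normal equations $A^{\mathrm T} A u = A^{\mathrm T} f$ (the example data are illustrative):

```python
# Symmetrization: solve A^T A u = A^T f by steepest descent when A itself
# is not self-adjoint positive definite (illustrative 2x2 example).
import numpy as np

A = np.array([[2.0, -1.0], [0.5, 1.5]])    # not symmetric
f = np.array([1.0, 1.0])
B, g = A.T @ A, A.T @ f                    # B = A^T A is symmetric positive definite

u = np.zeros(2)
for _ in range(500):
    xi = B @ u - g                         # residual of the symmetrized system
    if np.linalg.norm(xi) < 1e-12:
        break
    u = u - (xi @ xi) / (xi @ (B @ xi)) * xi
print(np.allclose(A @ u, f, atol=1e-8))    # True: u also solves the original system
```

Note that the spectral condition number of $A^* A$ is the square of that of $A$, so the quotient $q$ deteriorates accordingly for the symmetrized problem.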

References

[Ba] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian) MR0362811 Zbl 0524.65001
[FaFa] D.K. Faddeev, V.N. Faddeeva, "Computational methods of linear algebra", Freeman (1963) (Translated from Russian) MR0161454 MR0158519 Zbl 0451.65015
[GoLo] G.H. Golub, C.F. van Loan, "Matrix computations", Johns Hopkins Univ. Press (1989) MR1002570 Zbl 0733.65016
[Ka] L.V. Kantorovich, "On the method of steepest descent" Dokl. Akad. Nauk SSSR, 56 : 3 (1947) pp. 233–236 (In Russian)
[KaAk] L.V. Kantorovich, G.P. Akilov, "Functional analysis", Pergamon (1982) (Translated from Russian) MR0664597 Zbl 0484.46003
[PsDa] B.N. Pshenichnyi, Yu.M. Danilin, "Numerical methods in extremal problems", MIR (1978) (Translated from Russian) MR0502886
How to Cite This Entry:
Steepest descent, method of. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Steepest_descent,_method_of&oldid=24253
This article was adapted from an original article by Yu.A. Kuznetsov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.