Bellman equation

From Encyclopedia of Mathematics

A partial differential equation of a special type, used in solving problems of optimal control. If the solution of the Cauchy problem for the Bellman equation can be found, the optimal solution of the original problem is readily obtained.
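In one standard formulation (the notation below is assumed, not given in the article), for a control system $\dot{x} = f(x,u)$ with running cost $g(x,u)$ and terminal cost $h(x)$ at time $T$, the Bellman equation, also called the Hamilton–Jacobi–Bellman equation, for the value function $V(t,x)$ reads:

```latex
% Hamilton–Jacobi–Bellman equation for the value function V(t,x);
% the system, cost, and boundary data here are illustrative assumptions.
\[
-\frac{\partial V}{\partial t}(t,x)
  = \min_{u}\Bigl[\, g(x,u) + \nabla_x V(t,x)\cdot f(x,u) \,\Bigr],
\qquad V(T,x) = h(x).
\]
```

The terminal condition $V(T,x) = h(x)$ is the Cauchy datum referred to above: solving this backward-in-time Cauchy problem yields $V$, and a control achieving the minimum at each $(t,x)$ is optimal.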

A recurrence relation for the solution of a discrete problem of optimal control. The method of obtaining the optimal solution with the aid of the Bellman equation is known as dynamic programming.
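The discrete recurrence can be sketched in code. The backward-induction routine below solves $V_t(x) = \min_u [\, g(t,x,u) + V_{t+1}(f(x,u)) \,]$ for a finite-horizon problem; the particular states, dynamics, and costs in the example are illustrative assumptions, not taken from the article.

```python
# Backward induction (dynamic programming) for a small finite-horizon
# deterministic optimal control problem. All problem data below are
# illustrative assumptions for the sketch.

def backward_induction(states, controls, f, cost, terminal, horizon):
    """Solve V_t(x) = min_u [ cost(t, x, u) + V_{t+1}(f(x, u)) ]
    backward from the terminal condition V_T(x) = terminal(x)."""
    V = {x: terminal(x) for x in states}          # V_T
    policy = []                                   # policy[t][x] = optimal u
    for t in reversed(range(horizon)):
        V_new, pi = {}, {}
        for x in states:
            best_u, best_v = None, float("inf")
            for u in controls:
                nx = f(x, u)
                if nx not in V:                   # transition leaves the grid
                    continue
                v = cost(t, x, u) + V[nx]
                if v < best_v:
                    best_u, best_v = u, v
            V_new[x], pi[x] = best_v, best_u
        V, policy = V_new, [pi] + policy
    return V, policy                              # V is V_0

# Toy example: steer an integer state toward 0 in 3 steps,
# paying |x| + u^2 at each step and |x| at the end.
states = range(-3, 4)
controls = (-1, 0, 1)
V, policy = backward_induction(
    states, controls,
    f=lambda x, u: x + u,
    cost=lambda t, x, u: abs(x) + u * u,
    terminal=lambda x: abs(x),
    horizon=3,
)
```

Solving the recurrence backward gives both the optimal value `V[x]` from every start state and an optimal feedback policy, which is the practical payoff of the dynamic programming method.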


[1] R. Bellman, "Dynamic programming" , Princeton Univ. Press (1957)


The Bellman equation for continuous-time optimal control problems is also often called the dynamic programming equation. Cf. the article Optimality, sufficient conditions for, for examples and more details. There is also a variant for stochastic optimal control problems.
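In the stochastic variant, in one common formulation (the notation is assumed, not from the article), the state follows a controlled diffusion $dX = f(X,u)\,dt + \sigma(X,u)\,dW$, and the equation for the value function acquires a second-order term:

```latex
% Stochastic Bellman (dynamic programming) equation for a controlled
% diffusion with running cost g; notation is an illustrative assumption.
\[
-\frac{\partial V}{\partial t}
  = \min_{u}\Bigl[\, g(x,u) + \nabla_x V \cdot f(x,u)
    + \tfrac{1}{2}\operatorname{tr}\bigl(\sigma\sigma^{\top}(x,u)\,\nabla_x^2 V\bigr) \Bigr].
\]
```

Setting $\sigma = 0$ recovers the deterministic equation; see [a1] for a full treatment.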


[a1] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic optimal control" , Springer (1975)
How to Cite This Entry:
Bellman equation. V.G. Karmanov (originator), Encyclopedia of Mathematics. URL:
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098