Bellman equation
1) A partial differential equation of a special type, used to solve problems of optimal control. If the solution of the Cauchy problem for the Bellman equation can be found, the optimal solution of the original problem is readily obtained.
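For orientation, a minimal sketch of the standard form of the equation, in assumed notation not taken from [1]: let the state evolve as $\dot x = f(x,u)$ on $[t_0,T]$, let $L(x,u)$ be a running cost, $\Phi(x)$ a terminal cost, and $V(t,x)$ the optimal cost-to-go. The Bellman (Hamilton–Jacobi–Bellman) equation then reads

$$-\frac{\partial V}{\partial t}(t,x)=\min_{u}\Big[L(x,u)+\frac{\partial V}{\partial x}(t,x)\cdot f(x,u)\Big],\qquad V(T,x)=\Phi(x).$$

The terminal condition is the datum of the Cauchy problem mentioned above; a control achieving the minimum at each $(t,x)$ yields the optimal solution in feedback form.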
2) A recurrence relation for the solution of a discrete problem of optimal control. The method of obtaining the optimal solution with the aid of the Bellman equation is known as dynamic programming.
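As an illustration of the recurrence, the following sketch computes the cost-to-go functions $V_k$ by backward induction for a finite-horizon problem with finitely many states and controls; the state space, costs and transition function are invented for the example and do not come from [1].

# Backward dynamic programming for a finite-horizon, finite-state
# optimal control problem (illustrative data, not from the article).
#
# Bellman recurrence: V_k(x) = min_u [ c(x, u) + V_{k+1}(f(x, u)) ],
# with V_N(x) given by a terminal cost.

STATES = [0, 1, 2]          # hypothetical state space
CONTROLS = [-1, 0, 1]       # hypothetical control set
N = 4                       # horizon length

def f(x, u):
    """Hypothetical dynamics: move by u, clipped to the state space."""
    return max(0, min(2, x + u))

def cost(x, u):
    """Hypothetical running cost: distance from state 1 plus control effort."""
    return (x - 1) ** 2 + abs(u)

def terminal(x):
    """Hypothetical terminal cost."""
    return (x - 1) ** 2

# V[k][x] is the optimal cost-to-go from state x at stage k.
V = [{x: 0.0 for x in STATES} for _ in range(N + 1)]
policy = [{x: None for x in STATES} for _ in range(N)]

for x in STATES:
    V[N][x] = terminal(x)

for k in range(N - 1, -1, -1):          # backward induction
    for x in STATES:
        best_u, best_val = None, float("inf")
        for u in CONTROLS:
            val = cost(x, u) + V[k + 1][f(x, u)]
            if val < best_val:
                best_u, best_val = u, val
        V[k][x] = best_val
        policy[k][x] = best_u

print(V[0])       # optimal cost from each initial state
print(policy[0])  # optimal first control for each initial state

Each pass of the outer loop realizes one application of the recurrence, so after the loop V[0] contains the optimal values of the whole discrete problem and policy recovers an optimal control sequence.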
References
[1] R. Bellman, "Dynamic programming", Princeton Univ. Press (1957)
Comments
The Bellman equation for continuous-time optimal control problems is also often called the dynamic programming equation. Cf. the article Optimality, sufficient conditions for, for examples and more details. There is also a variant for stochastic optimal control problems.
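As a hedged sketch of the stochastic variant (notation again assumed for illustration; cf. [a1] for a precise statement): for controlled diffusion dynamics $dX=f(X,u)\,dt+\sigma(X,u)\,dW$ with the same cost structure as above, the dynamic programming equation acquires a second-order term,

$$-\frac{\partial V}{\partial t}=\min_{u}\Big[L(x,u)+\frac{\partial V}{\partial x}\cdot f(x,u)+\tfrac12\operatorname{tr}\Big(\sigma(x,u)\sigma(x,u)^{\mathsf T}\,\frac{\partial^2 V}{\partial x^2}\Big)\Big],$$

reflecting the expectation over the noise in the cost-to-go.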
References
[a1] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic optimal control", Springer (1975)