# Hamilton function

Also called the *Hamiltonian*.

A function introduced by W.R. Hamilton (1834) to describe the motion of mechanical systems. Beginning with the work of C.G.J. Jacobi (1837), it has been used in the classical calculus of variations to represent the Euler equation in canonical form. Let $L ( t, x, \dot{x} )$ be the Lagrange function of a mechanical system or the integrand in the problem of minimizing the functional

$$J ( x) = \int\limits L ( t, x , \dot{x} ) dt$$

of the classical calculus of variations, where $x = ( x _ {1}, \dots, x _ {n} )$ and $\mathop{\rm det} \| L _ {\dot{x} \dot{x} } \| \neq 0$. The Hamilton function is the Legendre transform of $L$ with respect to the variables $\dot{x}$; in other words,

$$H ( t, x, p) = ( p \mid \dot{x} ) - L ( t, x, \dot{x} ),$$

where $\dot{x}$ is expressed in terms of $p$ by the relation $p = L _ {\dot{x} }$ and $( p \mid \dot{x} )$ is the scalar product of the vectors $p = ( p _ {1}, \dots, p _ {n} )$ and $\dot{x}$. If the Hamilton function is used, the Euler equation

$$- \frac{d L _ {\dot{x} } }{dt } + L _ {x} = 0$$

(also known as the Lagrange equation (cf. Lagrange equations (in mechanics)) in problems of classical mechanics) is written in the form of a system of first-order equations:

$$- \dot{p} = \ \frac{\partial H }{\partial x } ,\ \ \dot{x} = \ \frac{\partial H }{\partial p } .$$

These equations are called the Hamilton equations, the Hamiltonian system and also the canonical system. The Hamilton–Jacobi equations for the action function (cf. Hamilton–Jacobi theory) can be written in terms of a Hamilton function.
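As a concrete illustration of the Legendre transform and the resulting canonical system, one can work through a one-dimensional harmonic oscillator symbolically (the Lagrangian $L = m\dot{x}^2/2 - kx^2/2$ is chosen here purely as an example; it is not taken from the article):

```python
import sympy as sp

# Example: 1-D harmonic oscillator with L(t, x, xdot) = m*xdot**2/2 - k*x**2/2.
m, k = sp.symbols('m k', positive=True)
x, xdot, p = sp.symbols('x xdot p')

L = m*xdot**2/2 - k*x**2/2

# Momentum p = L_xdot; invert to express xdot through p
# (here det || L_{xdot xdot} || = m != 0, as required).
xdot_of_p = sp.solve(sp.Eq(p, sp.diff(L, xdot)), xdot)[0]   # xdot = p/m

# Legendre transform: H = (p | xdot) - L, with xdot eliminated.
H = sp.expand(p*xdot_of_p - L.subs(xdot, xdot_of_p))
# H = p**2/(2*m) + k*x**2/2

# Hamilton's equations  -pdot = dH/dx,  xdot = dH/dp:
print(-sp.diff(H, x))   # -k*x  (so pdot = -k*x, i.e. Newton's law)
print(sp.diff(H, p))    # p/m   (so xdot = p/m, recovering p = m*xdot)
```

Eliminating $p$ from the two printed equations gives back $m\ddot{x} = -kx$, i.e. the Euler (Lagrange) equation for this $L$.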

In problems of optimal control a Hamilton function is determined as follows. One has to find a minimum of the functional

$$J = \int\limits _ {t _ {0} } ^ {t _ {1} } f ^ { 0 } ( t, x, u) dt$$

under the differential constraints

$$\dot{x} ^ {i} = \ f ^ { i } ( t, x, u),$$

for given boundary conditions and with constraints $u \in U$ on the control. Here $x = ( x ^ {1}, \dots, x ^ {n} )$ is an $n$-dimensional vector of phase coordinates, $u = ( u ^ {1}, \dots, u ^ {m} )$ is an $m$-dimensional control vector, and $U$ is a closed set of admissible values of $u$. The Hamilton function in this problem has the form

$$H ( t, x, \psi , u ) = \ \psi _ {0} f ^ { 0 } ( t, x, u ) + \sum _ {i = 1 } ^ { n } \psi _ {i} f ^ { i } ( t, x, u ),$$

where $\psi _ {0} = \textrm{ const } \leq 0$ and $\psi _ {1}, \dots, \psi _ {n}$ are conjugate variables (Lagrange multipliers, momenta) analogous to the canonical variables $p _ {i}$ mentioned above. If $( x _ {0} , u _ {0} )$ is a minimum in the above problem and $\psi _ {0} \neq 0$ ($\psi _ {0}$ may then be taken equal to $- 1$), then

$$H ( t, x _ {0} ( t), p ( t), u _ {0} ( t)) = \left . ( p \mid f ) \right | _ {x _ {0} , u _ {0} } - \left . f ^ { 0 } \right | _ {x _ {0} , u _ {0} } ,$$

where

$$- \dot{p} = \ \left . \frac{\partial H }{\partial x } \right | _ {x _ {0} , u _ {0} } .$$

The expression obtained for the Hamilton function has the same structure as in the classical calculus of variations. According to the Pontryagin maximum principle, the Euler equations for the optimal control problem may be written using a Hamilton function as follows:

$$\dot{x} ^ {i} = \ \frac{\partial H }{\partial \psi _ {i} } ,\ \ \dot \psi _ {i} = \ - \frac{\partial H }{\partial x ^ {i} } ,\ \ i = 1, \dots, n.$$
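These canonical equations can be checked on a minimal scalar problem (hypothetical, not from the article): with the normalization $\psi_0 = -1$, minimize $J = \int u^2/2 \, dt$ subject to $\dot{x} = u$, so that $f^0 = u^2/2$, $f^1 = u$ and $H = -u^2/2 + \psi u$:

```python
import sympy as sp

# Hypothetical scalar problem with psi0 = -1:
#   minimize J = integral of u**2/2 dt  subject to  xdot = u,
# so f0 = u**2/2, f1 = u and H = -u**2/2 + psi*u.
x, psi, u = sp.symbols('x psi u')
H = -u**2/2 + psi*u

xdot = sp.diff(H, psi)    # canonical equation: xdot = dH/dpsi = u
psidot = -sp.diff(H, x)   # canonical equation: psidot = -dH/dx = 0,
                          # so psi is constant along the extremal

# H is strictly concave in u, so the maximizing control solves dH/du = 0:
u_star = sp.solve(sp.diff(H, u), u)[0]
print(xdot, psidot, u_star)   # u 0 psi
```

Since $\psi$ is constant, the maximizing control $u = \psi$ is constant as well, which matches the extremals ($x$ linear in $t$) of this problem in the classical calculus of variations.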

For each $t$, the optimal control $u$ must yield the maximum of the Hamiltonian:

$$H ( t, x, \psi , u) = \ \max _ {v \in U } H ( t, x, \psi , v).$$
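When $H$ is linear in the control, this maximum is attained on the boundary of $U$. A sketch (hypothetical time-optimal double integrator, not from the article): with $f^0 = 1$, $\dot{x}^1 = x^2$, $\dot{x}^2 = u$ and $U = [-1, 1]$, one has $H = \psi_0 + \psi_1 x^2 + \psi_2 u$, and the maximizing ("bang-bang") control is $u = \operatorname{sign} \psi_2$:

```python
# Hypothetical example: H = psi0 + psi1*x2 + psi2*u is linear in u, so on
# U = [-1, 1] the maximum over u sits at an endpoint of the interval,
# selected by the sign of the coefficient psi2 (bang-bang control).
def u_max(psi2, U=(-1.0, 1.0)):
    # maximize psi2*u over the closed interval U
    return U[1] if psi2 >= 0 else U[0]

print(u_max(0.3))    # 1.0
print(u_max(-2.0))   # -1.0
```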
