Hamilton-Jacobi theory
A branch of classical variational calculus and analytical mechanics in which the task of finding extremals (or the task of integrating a Hamiltonian system of equations) is reduced to the integration of a first-order partial differential equation — the so-called Hamilton–Jacobi equation. The fundamentals of the Hamilton–Jacobi theory were developed by W. Hamilton in the 1820s for problems in wave optics and geometrical optics. In 1834 Hamilton extended his ideas to problems in dynamics, and C.G.J. Jacobi (1837) applied the method to the general problems of classical variational calculus.
The starting points of the Hamilton–Jacobi theory were established in the 17th century by P. Fermat and Chr. Huygens, who worked in the setting of geometrical optics (cf. Fermat principle; Huygens principle). Below, Hamilton's footsteps are followed: one considers the propagation of light through an inhomogeneous (but, for the sake of simplicity, isotropic) medium, where $ v( x) $ is the local velocity of light at a point $ x $. According to Fermat's principle, light propagates in an inhomogeneous medium from point to point in the shortest possible time. Let $ x _ {0} \in E $ be the starting point, and let $ W( x) $ be the shortest time needed for light to travel from $ x _ {0} $ to $ x $. The function $ W( x) $ is known as the eikonal, or the optical length of the path. Suppose that during a short time $ dt $ the light travels from the point $ x $ to the point $ x + dx $. According to Huygens' principle, the light travels, up to small quantities of higher order, along the normal to the level surface of the function $ W ( x) $. Thus the equation
$$ W \left ( x + \frac{W ^ \prime ( x) }{| W ^ \prime ( x) | } v ( x) dt \right ) = \ W ( x) + dt + o ( dt) $$
is satisfied, and the Hamilton–Jacobi equation for problems in geometrical optics follows:
$$ | W ^ \prime ( x) | ^ {2} = \ { \frac{1}{v ^ {2} ( x) } } \ \iff \ \ \sum _ {i = 1 } ^ { 3 } \left ( \frac{\partial W ( x) }{\partial x _ {i} } \right ) ^ {2} = \ { \frac{1}{v ^ {2} ( x) } } . $$
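For instance, in a homogeneous medium, where $ v ( x) \equiv c $, the eikonal of a point source at $ x _ {0} $ is

$$ W ( x) = \ { \frac{| x - x _ {0} | }{c} } , $$

so that $ | W ^ \prime ( x) | = 1/c $ for $ x \neq x _ {0} $, and the level surfaces of $ W $ are the spheres $ | x - x _ {0} | = ct $, i.e. the spherical wave fronts of the Huygens construction.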
In analytical mechanics the role of Fermat's principle is played by the variational Hamilton–Ostrogradski principle, while the role of the eikonal is played by the action functional, i.e. by the integral
$$ \tag{1 } S ( t, x) = \ \int\limits _ \gamma L dt,\ \ x = ( x _ {1} \dots x _ {n} ), $$
along a trajectory $ \gamma $ connecting a given point $ ( t _ {0} , x _ {0} ) $ with the point $ ( t, x) $, where $ L $ is the Lagrange function of the mechanical system.
It was suggested by Jacobi that a function resembling the action functional (1) should be used in solving all problems of classical variational calculus. The extremals of the problem $ \int L d t \rightarrow \inf $ issuing from the point $ ( t _ {0} , x _ {0} ) $ intersect the level surfaces of the function (1) (Hamilton's principal function) transversally (cf. Transversality condition); the form of the differential of the action functional
$$ dS = ( p \mid dx) - H dt $$
is deduced from this condition. Here $ p = L _ {\dot{x} } $, and $ H = p \dot{x} - L $ is the Hamilton function (see also Legendre transform).
The last-mentioned relation yields the following equation for the function $ S $:
$$ \tag{2 } \frac{\partial S }{\partial t } + H \left ( t, x,\ \frac{\partial S }{\partial x } \right ) = 0. $$
This is the Hamilton–Jacobi equation.
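For instance, for a particle of mass $ m $ in a potential field $ U ( x) $, i.e. for $ L = m | \dot{x} | ^ {2} /2 - U ( x) $, one has $ p = m \dot{x} $, $ H = | p | ^ {2} /2m + U ( x) $, and equation (2) takes the form

$$ \frac{\partial S }{\partial t } + { \frac{1}{2m} } \left | \frac{\partial S }{\partial x } \right | ^ {2} + U ( x) = 0 . $$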
The most important result of the Hamilton–Jacobi theory is Jacobi's theorem, which states that a complete integral of equation (2), i.e. a solution $ S ( t, x, \alpha ) $ of this equation depending on $ n $ parameters $ \alpha = ( \alpha _ {1} \dots \alpha _ {n} ) $ (provided that $ \mathop{\rm det} | \partial ^ {2} S / \partial x \partial \alpha | \neq 0 $), makes it possible to obtain the general integral of the Euler equation for the functional (1) or, which is the same thing, of the Hamiltonian system connected with this functional, by the formulas $ \partial S / \partial x = p $, $ \partial S / \partial \alpha = \beta $. The application of Jacobi's theorem to the integration of Hamiltonian systems is usually based on the method of separation of variables in special coordinates.
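For instance, for the linear oscillator, with $ H = p ^ {2} /2m + m \omega ^ {2} x ^ {2} /2 $, the substitution $ S = - \alpha t + W ( x, \alpha ) $ separates the variables in (2) and gives the complete integral

$$ S ( t, x, \alpha ) = \ - \alpha t + \int\limits ^ { x } \sqrt {2m \alpha - m ^ {2} \omega ^ {2} \xi ^ {2} } \ d \xi , $$

after which the formula $ \partial S / \partial \alpha = \beta $ yields

$$ \beta = - t + { \frac{1}{\omega } } \mathop{\rm arcsin} \left ( \omega x \sqrt { \frac{m}{2 \alpha } } \right ) ,\ \ \textrm{ i.e. } \ \ x ( t) = { \frac{1}{\omega } } \sqrt { \frac{2 \alpha }{m} } \sin \omega ( t + \beta ) , $$

with $ \alpha $ playing the role of the energy and $ \beta $ that of a phase constant.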
Despite the fact that integrating a first-order partial differential equation is usually more difficult than solving ordinary differential equations, the Hamilton–Jacobi theory proved to be a powerful tool in the study of problems of optics, mechanics and geometry. The essence of Huygens' principle was used by R. Bellman in solving problems of optimal control.
See also Hilbert invariant integral.
References
[1] Variational principles of mechanics, Moscow (1959) (In Russian)
[2] L.A. Pars, "A treatise on analytical dynamics", Heinemann, London (1965)
[3] V.I. Arnol'd, "Mathematical methods of classical mechanics", Springer (1978) (Translated from Russian)
[4] N.I. Akhiezer, "The calculus of variations", Blaisdell (1962) (Translated from Russian)
Comments
In optimal control the Hamilton–Jacobi equation takes, for instance, the form
$$ \frac{\partial V }{\partial t } + H \left ( t , x , \frac{\partial V }{\partial x } \right ) = 0 ,\ \ V ( t _ {1} , x ) = \phi ( t _ {1} , x ) ,\ \ t _ {0} \leq t \leq t _ {1} , $$
where
$$ H \left ( t , x , \frac{\partial V }{\partial x } \right ) = \min \left \{ {\left ( \frac{\partial V }{\partial x } , f ( t , x , u ) \right ) + f ^ { 0 } ( t , x , u ) } : {u \in U } \right \} . $$
Cf., for instance, Optimal synthesis control. In this setting it is often referred to as the Bellman equation (especially in the engineering literature) or the Hamilton–Jacobi–Bellman equation. There is also a version for optimal stochastic control, cf. Controlled stochastic process. Because classical (everywhere $ C ^ {1} $) solutions of the Hamilton–Jacobi equation often do not exist, it becomes necessary to consider various kinds of generalized solutions, such as viscosity solutions.
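As a rough numerical illustration of the dynamic-programming content of this equation (a minimal sketch, not taken from the references below; the horizon, the grids and the bounded control set are arbitrary choices made for the example), the backward recursion $ V ( t , x ) = \min _ {u \in U } \{ f ^ { 0 } ( t , x , u ) \Delta t + V ( t + \Delta t , x + f ( t , x , u ) \Delta t ) \} $ can be carried out on a grid for the scalar problem $ \dot{x} = u $, $ f ^ { 0 } = x ^ {2} + u ^ {2} $, $ \phi \equiv 0 $:

```python
# A minimal sketch (not from the article): backward dynamic programming for the
# Bellman equation of the scalar problem  dx/dt = u,  f0 = x^2 + u^2,  phi = 0.
# The horizon, grid sizes and control set below are arbitrary example choices.
import numpy as np

t0, t1, nt = 0.0, 1.0, 100                 # time horizon and number of time steps
dt = (t1 - t0) / nt
xs = np.linspace(-2.0, 2.0, 201)           # state grid
us = np.linspace(-3.0, 3.0, 61)            # discretized control set U

V = np.zeros_like(xs)                      # terminal condition V(t1, x) = phi = 0

# V(t, x) = min_u [ (x^2 + u^2) dt + V(t + dt, x + u dt) ], with linear
# interpolation of V between grid points and clipping at the grid boundary.
for _ in range(nt):
    x_next = np.clip(xs[:, None] + us[None, :] * dt, xs[0], xs[-1])
    V_next = np.interp(x_next.ravel(), xs, V).reshape(x_next.shape)
    V = ((xs[:, None] ** 2 + us[None, :] ** 2) * dt + V_next).min(axis=1)

# For this linear-quadratic problem the exact value function is
# V(t, x) = tanh(t1 - t) x^2, so the printed value should be near tanh(1) ~ 0.76.
print(V[np.argmin(np.abs(xs - 1.0))])
```

The recursion is simply the discrete-time form of the dynamic-programming principle from which the equation above is derived.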
References
[a1] H. Goldstein, "Classical mechanics", Addison-Wesley (1950)
[a2] P.L. Lions, "Generalized solutions of Hamilton–Jacobi equations", Pitman (1982)
[a3] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic optimal control", Springer (1975)
[a4] P.L. Lions, "On the Hamilton–Jacobi–Bellman equations", Acta Appl. Math., 1 (1983) pp. 17–41
[a5] S.H. Benton jr., "The Hamilton–Jacobi equation: a global approach", Acad. Press (1977)