Variational calculus
{{TEX|done}}
 
''calculus of variations''
  
 
The following scheme describes a rather wide range of problems of classical variational calculus. It is required to minimize the functional
  
$$ \tag{1 }
J ( x)  = \int\limits _ { T } f ( t, x ( t), \dot{x} ( t))  dt,
$$
  
where $  T \subset  \mathbf R  ^ {m} $, $  t = ( t _ {1} \dots t _ {m} ) $, $  x = ( x  ^ {1} \dots x  ^ {n} ) $,
  
$$
\dot{x}  = \left (
\frac{\partial  x  ^ {i} }{\partial  t _ {j} }
\right ) ,\ \
f: \mathbf R  ^ {m} \times \mathbf R  ^ {n} \times \mathbf R  ^ {mn}  \rightarrow  \mathbf R ,
$$
  
 
subject to the constraints described by equations of the type
  
$$ \tag{2 }
\phi ( t, x ( t), \dot{x} ( t))  = 0 ,\ \  t \in T,
$$
  
and by certain boundary conditions $  x\mid  _ {\partial  T }  \in \Gamma $. Problems of this type are known as Lagrange problems (cf. [[Lagrange problem|Lagrange problem]]). Other types of problems considered are the [[Mayer problem|Mayer problem]], the [[Bolza problem|Bolza problem]], etc.
  
The most elementary question in classical variational calculus is the simplest problem in variational calculus, in which $  t $ and $  x $ in (1) are one-dimensional, the constraints (2) are absent and the boundary conditions are fixed:
  
$$ \tag{3 }
J ( x)  = \int\limits _ { t _ {0} } ^ { t _ {1} }
L ( t, x, \dot{x} )  dt  \rightarrow  \inf ; \ \
x ( t _ {0} )  = x _ {0} ,\ \
x ( t _ {1} )  = x _ {1} .
$$
  
 
This type includes the [[Brachistochrone|brachistochrone]] problem, or the problem of curves of minimum time of descent. This problem is usually considered to be the starting point in the history of the calculus of variations.
 
Euler (1768) proposed a method for the approximate (numerical) solution of problems in variational calculus, which received the name of Euler's method of polygonal lines. This marked the beginning of the study of the numerical solution of extremum problems. Euler's method was the first representative of a large class of methods known as direct methods of variational calculus. These methods are based on reducing the problem of finding the extremum of a functional to that of finding the extremum of a function of several variables.
  
Problem (3) may be solved by Euler's method of polygonal lines as follows. The interval $  [ t _ {0} , t _ {1} ] $ is subdivided into $  N $ equal parts of length $  \tau = ( t _ {1} - t _ {0} ) / N $ by means of the points $  \tau _ {0} = t _ {0} ,\  \tau _ {1} = t _ {0} + \tau \dots \tau _ {N} = t _ {0} + N \tau = t _ {1} $. Let the values of the function at these points be $  x _ {0} , x _ {1} \dots x _ {N} $, respectively. Each set of points $  ( \tau _ {0} , x _ {0} ) \dots ( \tau _ {N} , x _ {N} ) $ defines some polygonal line. The problem may now be formulated as follows: Out of all possible polygonal lines connecting the points $  ( \tau _ {0} , x _ {0} ) $ and $  ( \tau _ {N} , x _ {N} ) $, to find the line for which the functional (3) assumes an extremal value. The value of the derivative $  \dot{x} $ on the interval $  [ \tau _ {i} , \tau _ {i+ 1 }  ] $ will be $  \dot{x} _ {i} = ( x _ {i+ 1 }  - x _ {i} )/ \tau $. The functional $  J( x) $ becomes a function of a finite number of variables $  x _ {i} $:
  
$$
J ( x)  \sim  J ( x _ {0} \dots x _ {N} ),
$$
  
and problem (3) is reduced to the problem of finding the extremum of the function $  J( x _ {0} \dots x _ {N} ) $. In order that Euler's line realizing the extremum of this function approximate the solution of problem (3) with high accuracy, the number $  N $ should, as a rule, be sufficiently large. The labour involved in the computations which must be performed to find the extremum of this function is so large that  "manual"  computations are very difficult. For this reason, direct methods were ruled out in basic studies of variational calculus for a long time.
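
The reduction can be made concrete by a minimal numerical sketch of the method; the integrand $  L $, the end data and the mesh are assumptions chosen for the example.

<pre>
import numpy as np
from scipy.optimize import minimize

# Euler's method of polygonal lines for problem (3), sketched for the
# assumed integrand L(t, x, xdot) = xdot**2 + x**2 and assumed end data.
t0, t1, x0, x1, N = 0.0, 1.0, 1.0, 0.0, 50
tau = (t1 - t0) / N
ts = t0 + tau * np.arange(N + 1)

def L(t, x, xdot):
    return xdot**2 + x**2

def J(x_inner):
    # polygonal line through the fixed ends and the free interior nodes
    x = np.concatenate(([x0], x_inner, [x1]))
    xdot = np.diff(x) / tau                       # slope on [tau_i, tau_{i+1}]
    return np.sum(L(ts[:-1], x[:-1], xdot)) * tau # rectangle-rule quadrature

res = minimize(J, np.zeros(N - 1))                # extremum over the nodes x_i
print(res.fun)                                    # approximate minimal value of J
</pre>
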
  
 
Direct methods began to be much more extensively studied in the 20th century. At first, new methods were proposed to reduce the problem to finding the extremum of a function in a finite number of variables. These ideas may be clarified by taking, as an example, the minimization of the functional (3) subject to the condition
  
$$
x ( t _ {0} )  = x ( t _ {1} )  = 0.
$$
  
 
Consider the solution of this problem in the form
  
$$
x ( t)  = \sum _ {n = 1 } ^ { N }  a _ {n} \phi _ {n} ( t),
$$
  
where $  \{ \phi _ {n} ( t) \} $ is some system of functions satisfying the conditions $  \phi _ {i} ( t _ {0} ) = \phi _ {i} ( t _ {1} ) = 0 $, $  i = 1 \dots N $. The functional $  J( x) $ becomes a function of the coefficients, $  J( x) \sim J( a _ {1} \dots a _ {N} ) $, and the problem is reduced to finding the extremum of this function of $  N $ variables. Under certain conditions imposed on the system of functions $  \{ \phi _ {n} \} $, the solution of the problem tends to that of problem (3) as $  N \rightarrow \infty $ (cf. [[Galerkin method|Galerkin method]]).
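
A minimal sketch of this reduction follows; the basis $  \phi _ {n} ( t) = \sin  n \pi t $ on $  [ 0, 1 ] $ and the integrand are assumptions chosen for the example.

<pre>
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import trapezoid

# Ritz-type sketch: minimize (3) with zero boundary values over the span of
# the assumed basis phi_n(t) = sin(n*pi*t) on [0, 1].
t = np.linspace(0.0, 1.0, 201)

def L(t, x, xdot):                  # example integrand (an assumption)
    return xdot**2 + x**2 - 2.0*t*x

def J(a):
    n = np.arange(1, len(a) + 1)[:, None]
    x = (a[:, None] * np.sin(n*np.pi*t)).sum(axis=0)
    xdot = (a[:, None] * n*np.pi*np.cos(n*np.pi*t)).sum(axis=0)
    return trapezoid(L(t, x, xdot), t)

res = minimize(J, np.zeros(5))      # N = 5 coefficients a_1, ..., a_5
print(res.x)                        # coefficients of the approximate minimizer
</pre>
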
  
 
==The method of variations.==
A second direction of study is the investigation of necessary and sufficient conditions to be satisfied by the function $  x( t) $ realizing the extremum of the functional $  J( x) $. The principal method for finding necessary conditions is the method of variations. Construct some function $  x( t) $ in one way or another. How can one test whether or not this function is a solution of the variational problem (3)? An answer to this question was first given by Euler in 1744. The answer as formulated below involves the concept, introduced by Lagrange in 1762, of the variation $  \delta J $ of the functional $  J $ (hence the name  "variational calculus" ; cf. [[Variation|Variation]]; [[Variation of a functional|Variation of a functional]]).
  
 
For the simplest problem of variational calculus this variation is defined as:
  
$$
\delta J ( x, h)  = \
\int\limits _ { t _ {0} } ^ { t _ {1} }
\left . \left (
\frac{\partial  L }{\partial  x }
 -
\frac{d}{dt}
\frac{\partial  L }{\partial  \dot{x} }
\right ) \right | _ {x ( t) }  h ( t)  dt,
$$
  
where $  h( t) $ is an arbitrary smooth function satisfying the conditions $  h( t _ {0} ) = h( t _ {1} ) = 0 $. The condition $  \delta J = 0 $ is necessary for the function $  x( t) $ to realize an extremum of the functional (3). Hence — and also from the expression for the variation $  \delta J $ — one may conclude that, for the function $  x( t) $ to constitute an extremum of (3), it must satisfy the following second-order differential equation:
  
$$ \tag{4 }
\frac{\partial  L }{\partial  x }
 -
\frac{d}{dt}
\frac{\partial  L }{\partial  \dot{x} }
 =  0.
$$
The above equation is known as the [[Euler equation|Euler equation]], while its integral curves are said to be the extremals of the variational problem under consideration. A function $  x( t) $ for which $  J( x) $ attains an extremum necessarily represents a solution of the boundary value problem $  x( t _ {0} ) = x _ {0} $, $  x( t _ {1} ) = x _ {1} $ for equation (4). One has thus obtained a second method for solving the extremal problem: the boundary value problem for the Euler equation is solved (in regular cases the number of such solutions is finite), after which each solution obtained is tested against the supplementary restrictions in order to find out whether it solves the initial problem. However, a significant drawback of this method is that there are no universal methods for solving boundary value problems for ordinary (non-linear) differential equations.
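
A symbolic sketch of this test is given below; the integrand $  L = \dot{x}  ^ {2} - x  ^ {2} $ is an assumption chosen so that the extremals come out explicitly.

<pre>
import sympy as sp
from sympy.calculus.euler import euler_equations

# Form Euler's equation (4) for the assumed integrand L = xdot**2 - x**2
# and solve the resulting second-order ODE for the extremals.
t = sp.Symbol('t')
x = sp.Function('x')
L = sp.Derivative(x(t), t)**2 - x(t)**2

eq = euler_equations(L, [x(t)], [t])[0]   # Eq(-2*x(t) - 2*x''(t), 0)
print(eq)
print(sp.dsolve(eq, x(t)))                # x(t) = C1*sin(t) + C2*cos(t)
</pre>
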
Variational problems with mobile ends are very often encountered. For instance, in the simplest problems, the points $  x( t _ {0} ) $ and $  x( t _ {1} ) $ may move along given curves. In problems with mobile ends the condition $  \delta J = 0 $ implies supplementary conditions to be satisfied by the mobile ends — the so-called [[Transversality condition|transversality condition]] which, in conjunction with the boundary conditions, yields a closed system of conditions for the boundary value problem.
  
 
The principal results concerning the simplest problem of variational calculus are applied to the general case of functionals of the type
  
$$
J ( x)  = \
\int\limits _ { t _ {0} } ^ { t _ {1} }
F \left ( x ( t),
\frac{dx}{dt} \dots
\frac{d  ^ {s} x }{dt  ^ {s} }
\right )  dt,
$$
where $  x( t) $ is a vector function of arbitrary dimension [[#References|[3]]].
  
 
==Lagrange's problem.==
Euler and Lagrange also studied problems on a conditional extremum. The simplest class of problems of this type is the class of so-called isoperimetric problems (cf. [[Isoperimetric problem|Isoperimetric problem]]). For the case of one-dimensional $  t $, Lagrange stated the class of problems (1) and (2), and obtained an analogue of Euler's equations, involving the so-called Lagrange multipliers for these problems. Such an analogue may also be obtained for the most general case of problems (1) and (2). The Lagrange problem assumed special importance in the mid-20th century in connection with the creation of the mathematical theory of optimal control (cf. [[Optimal control, mathematical theory of|Optimal control, mathematical theory of]]). Below the main results concerning the Lagrange problem are given, in terms of this theory. These were obtained by L.S. Pontryagin and his school.
  
Consider the following case: In problems (1) and (2), $  t $ is one-dimensional and the system $  \phi ( t, x, \dot{x} ) = 0 $ may be solved for some of the components of $  \dot{x} $. The resulting problem is that of minimizing the functional
  
$$ \tag{5 }
J ( x, u)  = \int\limits _ { t _ {0} } ^ { t _ {1} }  F ( t, x, u)  dt
$$
  
 
under the differential constraint
  
$$ \tag{6 }
\dot{x}  = f ( t, x, u)
$$
  
 
and the boundary conditions
  
$$ \tag{7 }
( x ( t _ {0} ), x ( t _ {1} ))  \in  E.
$$
  
In equations (5)–(7), $  x = ( x  ^ {1} \dots x  ^ {n} ) $ is a vector function known as the phase vector, $  u = ( u  ^ {1} \dots u  ^ {m} ) $ is a vector function known as the control, $  F: \mathbf R \times \mathbf R  ^ {n} \times \mathbf R  ^ {m} \rightarrow \mathbf R $, $  f: \mathbf R \times \mathbf R  ^ {n} \times \mathbf R  ^ {m} \rightarrow \mathbf R  ^ {n} $, $  E \subset  \mathbf R  ^ {2n} $.
  
 
The fixed conditions of problem (3) may serve as an example of boundary conditions of the type (7). In optimal control problems certain  "non-classical"  conditions such as
  
$$ \tag{8 }
u ( t)  \in  U  \subset  \mathbf R  ^ {m}
$$
  
 
are imposed in addition to conditions (6) and (7).
  
 
==Weak and strong extrema.==
Two topologies are usually distinguished in variational calculus — a strong and a weak topology — and, correspondingly, one defines strong and weak extrema. For instance, as applied to problem (3) one says that the curve $  x _ {0} ( t) $ realizes a weak minimum if it is possible to find an $  \epsilon > 0 $ such that $  J( x) \geq  J ( x _ {0} ) $ for all continuously-differentiable functions $  x( t) $ satisfying the conditions $  x( t _ {0} ) = x _ {0} ( t _ {0} ) $, $  x( t _ {1} ) = x _ {0} ( t _ {1} ) $ and
  
$$
\max _ {t \in [ t _ {0} , t _ {1} ] }  | x ( t) - x _ {0} ( t) | +
\max _ {t \in [ t _ {0} , t _ {1} ] }  | \dot{x} ( t) - \dot{x} _ {0} ( t) |  < \epsilon .
$$
  
In other words, this fixes the proximity not only of the phase variables, but also of the speeds (controls). One says that a function gives a strong extremum if it is possible to find an $  \epsilon > 0 $ such that $  J( x) \geq  J( x _ {0} ) $ for all permissible absolutely-continuous functions $  x( t) $ (for which $  J( x) $ exists) satisfying the conditions $  x( t _ {0} ) = x _ {0} ( t _ {0} ) $, $  x( t _ {1} ) = x _ {0} ( t _ {1} ) $ and
  
$$
\max _ {t \in [ t _ {0} , t _ {1} ] }  | x ( t) - x _ {0} ( t) |  \leq  \epsilon .
$$
  
 
This equation merely represents proximity of the phase variables.
  
If $  x _ {0} ( t) $ realizes a strong extremum, it realizes a fortiori a weak extremum as well; accordingly, conditions sufficient for a strong extremum are also sufficient for a weak one. Conversely, if a weak extremum is absent, so is a strong one, i.e. necessary conditions for a weak extremum are also necessary for a strong extremum.
  
 
==Necessary and sufficient conditions for an extremum.==
Euler's equation, which was discussed above, is a necessary condition for a weak extremum. In the late 1950s, Pontryagin postulated a maximum principle for the problem (5)–(8) which is a necessary condition for a strong extremum (cf. [[Pontryagin maximum principle|Pontryagin maximum principle]]). This maximum principle states that if a pair $  ( x, u) $ supplies a strong extremum in the problem (5)–(8), there exist a vector function $  \psi $ and a number $  \lambda _ {0} $ such that the relations
  
$$ \tag{9 }
\left .
\begin{array}{c}
\dot{x}  =  \frac{\partial  H }{\partial  \psi } ,\ \
\dot \psi   =  - \frac{\partial  H }{\partial  x } , \\
H ( t, x ( t), \psi ( t), u ( t), \lambda _ {0} )  = \
\max _ {v \in U }  H ( t, x ( t), \psi ( t), v, \lambda _ {0} )
\end{array}
\right \}
$$

are satisfied for the Hamilton function $  H( t, x, \psi , u , \lambda _ {0} ) = ( \psi , f  ) - \lambda _ {0} F $.
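
A minimal symbolic sketch of the relations (9) follows; the scalar dynamics $  \dot{x} = u $ and integrand $  F = u  ^ {2} $ are assumptions chosen for the example.

<pre>
import sympy as sp

# Hamilton function H = psi*f - lambda0*F for the assumed data f = u, F = u**2.
x, u, psi, lam0 = sp.symbols('x u psi lambda_0')
f = u
F = u**2
H = psi*f - lam0*F

print(sp.diff(H, psi))             # dH/dpsi = u, recovering xdot = f
print(-sp.diff(H, x))              # psidot = -dH/dx = 0: psi is constant
print(sp.solve(sp.diff(H, u), u))  # interior maximum of H: u = psi/(2*lambda_0)
</pre>
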
  
If Pontryagin's maximum principle is applied to the problem (3), it follows that a necessary condition for a curve $  x( t) $ to yield a strong minimum in the problem (3) is that it be an extremal (i.e. satisfy Euler's equation (4)) and satisfy the necessary Weierstrass condition (cf. [[Weierstrass conditions (for a variational extremum)|Weierstrass conditions (for a variational extremum)]])
  
$$ \tag{10 }
{\mathcal E} ( t, x ( t), \dot{x} ( t), \xi )  \geq  0 \ \
\textrm{ for }  \textrm{ all }  t \in [ t _ {0} , t _ {1} ],\
\xi \in \mathbf R ,
$$
  
 
where
  
$$
{\mathcal E} ( t, x, \dot{x} , \xi )  = \
L ( t, x, \xi ) -
L ( t, x, \dot{x} ) - ( \xi - \dot{x} )
L _ {\dot{x} }  ( t, x, \dot{x} )
$$
  
 
is the so-called Weierstrass function.
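
A numerical sketch of condition (10) follows; the integrand $  L = \dot{x}  ^ {2} $ is an assumption, for which $  {\mathcal E} = ( \xi - \dot{x} )  ^ {2} \geq  0 $.

<pre>
import numpy as np

# Weierstrass function E for the assumed integrand L(t, x, xdot) = xdot**2.
def E(t, x, xdot, xi):
    L = lambda v: v**2
    L_xdot = 2.0 * xdot                  # dL/d(xdot) at the extremal slope
    return L(xi) - L(xdot) - (xi - xdot) * L_xdot

xi = np.linspace(-5.0, 5.0, 11)
print(E(0.0, 0.0, 1.0, xi))              # (xi - 1)**2: nonnegative, so (10) holds
</pre>
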
  
In addition to conditions of the type (4) and (10), which are local (i.e. which can be verified at each point of the extremal), there is also a global necessary condition, related to the behaviour of the set of extremals in a neighbourhood of the given extremal (cf. [[Jacobi condition|Jacobi condition]]). For problem (3) Jacobi's condition may be formulated as follows. For the extremal $  x( t) $ to supply a minimum in the problem (3), it is necessary that the solution of the [[Jacobi equation|Jacobi equation]]
  
$$ \tag{11 }
-
\frac{d}{dt}
\left ( \left .
\frac{\partial  ^ {2} L }{\partial  {\dot{x} }  ^ {2} }
\right | _ {x ( t) }
\frac{d}{dt}
h ( t) \right ) +
\left . \left (
\frac{\partial  ^ {2} L }{\partial  x  ^ {2} }
 -
\frac{d}{dt}
\frac{\partial  ^ {2} L }{\partial  x \partial  \dot{x} }
\right ) \right | _ {x ( t) }  h ( t)  = 0
$$
  
with the boundary conditions $  h( t _ {0} ) = 0 $, $  \dot{h} ( t _ {0} ) \neq 0 $, does not have zeros in the interval $  ( t _ {0} , t _ {1} ) $. The zeros of the solution $  h( t) $ of equation (11) are said to be points conjugate with the point $  t _ {0} $. Thus, Jacobi's condition means that the interval $  ( t _ {0} , t _ {1} ) $ does not contain points which are conjugate with $  t _ {0} $.
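
A numerical sketch of this test follows; the integrand $  L = \dot{x}  ^ {2} - x  ^ {2} $ along the extremal $  x \equiv 0 $ is an assumption, for which (11) reduces to $  \ddot{h} + h = 0 $ and the conjugate points of $  t _ {0} = 0 $ are the multiples of $  \pi $.

<pre>
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the Jacobi equation h'' + h = 0 with h(0) = 0, h'(0) = 1 and
# look for the first zero after t0 = 0: the first conjugate point.
sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 4.0), [0.0, 1.0],
                dense_output=True, max_step=0.01)
ts = np.linspace(1e-3, 4.0, 4000)
h = sol.sol(ts)[0]
cross = ts[1:][np.sign(h[:-1]) != np.sign(h[1:])]
print(cross[0])   # approx. pi: Jacobi's condition fails on intervals beyond pi
</pre>
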
Necessary conditions for a weak minimum, $  \delta J = 0 $, $  \delta  ^ {2} J \geq  0 $, are strict analogues of the minimum conditions $  f ^ { \prime } ( x) = 0 $, $  f ^ { \prime\prime } ( x) \geq  0 $ for functions of one variable. If the (strong) [[Legendre condition|Legendre condition]] is met, the Jacobi condition is a necessary condition for the second variation to be non-negative. This leads to the following result: For a function $  x( t) $ to realize a weak minimum of the functional (3) it is necessary: a) that $  x( t) $ satisfies Euler's equation; b) that the Legendre condition $  ( \partial  ^ {2} L / \partial  \dot{x}  ^ {2} ) \mid  _ {x( t) }  \geq  0 $ is satisfied; and c) that the interval $  ( t _ {0} , t _ {1} ) $ does not contain points conjugate with $  t _ {0} $ (if the strong Legendre condition is satisfied).
Sufficient conditions for a weak minimum are as follows: The function $  x( t) $ must be an extremal on which the strong Legendre condition is met, and the semi-interval $  ( t _ {0} , t _ {1} ] $ must not contain points conjugate with $  t _ {0} $. For a curve $  x( t) $ to yield a strong minimum it is sufficient that the sufficient Weierstrass condition, as well as the sufficient conditions for a weak minimum formulated above, be satisfied.
  
 
==Problems in optimal control.==
One of the principal directions in the development of the calculus of variations is that of non-classical problems much like the problem (5)–(8) formulated above. Problems of this kind have a major practical significance. For instance, let (6) describe the motion of some dynamic object, say a space ship. The control — the vector $  u $ — is the thrust of its motor. The initial location of the space ship is some orbit, while its final position is an orbit of different radius. The functional $  J $ describes the fuel consumption involved in the performance of such a maneuver. The problem (5)–(7) may then be applied to this situation as follows: Determine the law governing the variation of the thrust exerted by the motor of the space ship required to perform the transition from one orbit to the other within a given period of time so as to minimize the fuel consumption. This must be done subject to the control constraints: the thrust of the motor must not exceed a certain given value; the turning angle is also bounded. Thus, the components of the thrust, $  u  ^ {i} $, $  i = 1, 2, 3 $, are in this case subject to the constraints
  
$$
a _ {i}  ^ {-}  \leq  u  ^ {i}  \leq  a _ {i}  ^ {+} ,
$$
  
where $  a _ {i}  ^ {-} $ and $  a _ {i}  ^ {+} $ are given numbers.
  
 
A large number of problems can be reduced to the Lagrange problem, subject to a supplementary restriction of the type (8). Such problems are known as problems of optimal control. It would be desirable to develop a special apparatus for the theory of optimal control. Pontryagin's maximum principle may be said to be such an apparatus.
  
Another approach to these problems in optimal control theory is also possible. Let $  S( t, x) $ be the value of the functional (5) along an optimal solution from a point $  ( t _ {0} , x _ {0} ) $ to a point $  ( t, x) $. For the function $  u( t) $ to be an optimal control in such a case it is necessary (and also sufficient in certain cases) that the partial differential equation

$$
\frac{\partial  S }{\partial  t } +
\max _ {u \in U } \left ( \left (
\frac{\partial  S }{\partial  x }
\right )  ^  \prime
f ( t, x, u) -
F ( t, x, u) \right )  =  0 ,
$$
  
known as Bellman's equation (cf. [[Dynamic programming|Dynamic programming]]), holds. In problems in classical variational calculus the function $  S( t, x) $ (the action integral) must satisfy the Hamilton–Jacobi equation
$$
\frac{\partial  S }{\partial  t } +
H \left ( t, x,
\frac{\partial  S }{\partial  x }
\right )  = 0,
$$
  
where $  H $ is the [[Hamilton function|Hamilton function]]. In problem (3) the function $  H $ is the [[Legendre transform|Legendre transform]] with respect to $  \dot{x} $ of the integrand $  L( t, x, \dot{x} ) $. The Hamilton–Jacobi theory is a powerful tool in the study of numerous variational problems connected with classical mechanics.
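
A minimal dynamic-programming sketch of Bellman's equation follows; the scalar dynamics, running cost and grids are assumptions, and for simplicity the cost-to-go variant of $  S $ is computed backwards in time.

<pre>
import numpy as np

# Backward dynamic programming for the assumed problem: xdot = u,
# F = u**2 + x**2, |u| <= 1, t in [0, 1]; V approximates the optimal cost-to-go.
T, N, M = 1.0, 50, 101
dt = T / N
xs = np.linspace(-2.0, 2.0, M)
us = np.linspace(-1.0, 1.0, 21)        # admissible control set U
V = np.zeros(M)                        # terminal condition V(T, x) = 0

for _ in range(N):                     # march backwards in time
    x_next = xs[None, :] + us[:, None] * dt          # Euler step of (6)
    cost = (us[:, None]**2 + xs[None, :]**2) * dt    # running cost F*dt
    V = np.min(cost + np.interp(x_next, xs, V), axis=0)   # Bellman minimum

print(V[M // 2])                       # approximate optimal cost from x = 0
</pre>
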
  
 
The connection between variational calculus and the theory of partial differential equations was discovered as early as the 19th century. It was shown by P.G.L. Dirichlet that solving boundary value problems for the Laplace equation is equivalent to solving some variational problem. Consider, for example, a given linear operator equation
  
$$ \tag{12 }
A x  = f,
$$
  
where $  x( \xi , \eta ) $ is some function of two independent variables which vanishes on a closed curve $  \Gamma $. Subject to assumptions which are natural in a certain class of physical problems, the problem of finding the solution of equation (12) is equivalent to finding the minimum of the functional
  
$$ \tag{13 }
J ( x)  = {\int\limits \int\limits } _  \Omega  ( Ax) x  d \xi  d \eta -
2 {\int\limits \int\limits } _  \Omega  fx  d \xi  d \eta ,
$$
  
where $  \Omega $ is the domain bounded by the curve $  \Gamma $. Equation (12) is in this case the Euler equation for the functional (13).
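
For instance (an illustrative special case, assuming $  A = - \Delta $ with zero boundary values on $  \Gamma $, so that integration by parts gives $  {\int\limits \int\limits } _  \Omega  ( Ax) x  d \xi  d \eta = {\int\limits \int\limits } _  \Omega  ( x _  \xi  ^ {2} + x _  \eta  ^ {2} )  d \xi  d \eta $), the functional (13) becomes

$$
J ( x)  = {\int\limits \int\limits } _  \Omega
( x _  \xi  ^ {2} + x _  \eta  ^ {2} )  d \xi  d \eta -
2 {\int\limits \int\limits } _  \Omega  fx  d \xi  d \eta ,
$$

and its Euler equation is $  - \Delta x = f $, i.e. equation (12) for this choice of $  A $.
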
  
The reduction of problem (12) to (13) is possible if, for example, $  A $ is a positive-definite self-adjoint operator. The connection between problems involving partial differential equations and variational problems makes it possible, in particular, to establish the truth of various existence and uniqueness theorems; it played an important part in the crystallization of the concept of a generalized solution. Such a reduction is very important in numerical mathematics as well, since direct methods of variational calculus can be employed to solve boundary value problems in the theory of partial differential equations.
  
 
==Qualitative methods.==
 
==Qualitative methods.==
These methods make it possible to solve problems on the existence and uniqueness of solutions, as well as on the qualitative features of (families of) extremals. It was established in the 20th century that the number of solutions of a variational problem depends on the properties of the space on which the functional has been defined. For instance, if the functional <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190140.png" /> is defined on all possible smooth curves on a torus which connect two given points, or on all possible closed curves in a surface which is topologically equivalent to a torus, the number of critical elements — curves on which the variation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190141.png" /> — is infinite in both cases. L.A. Lyusternik and L.G. Shnirel'man [[#References|[7]]] showed that on every surface which is topologically equivalent to a sphere there exist at least three closed self-intersecting geodesics of different lengths; if the lengths of only two of these geodesics are equal, there exists an infinite number of closed geodesics of equal length. Such problems indicate a close connection between variational calculus and the qualitative theory of differential equations and topology. The development of functional analysis made a substantial contribution to the study of qualitative methods. See also [[Variational calculus in the large|Variational calculus in the large]].
+
These methods make it possible to solve problems on the existence and uniqueness of solutions, as well as on the qualitative features of (families of) extremals. It was established in the 20th century that the number of solutions of a variational problem depends on the properties of the space on which the functional has been defined. For instance, if the functional $  J $
 +
is defined on all possible smooth curves on a torus which connect two given points, or on all possible closed curves in a surface which is topologically equivalent to a torus, the number of critical elements — curves on which the variation $  \delta J = 0 $—  
 +
is infinite in both cases. L.A. Lyusternik and L.G. Shnirel'man [[#References|[7]]] showed that on every surface which is topologically equivalent to a sphere there exist at least three closed self-intersecting geodesics of different lengths; if the lengths of only two of these geodesics are equal, there exists an infinite number of closed geodesics of equal length. Such problems indicate a close connection between variational calculus and the qualitative theory of differential equations and topology. The development of functional analysis made a substantial contribution to the study of qualitative methods. See also [[Variational calculus in the large|Variational calculus in the large]].
  
 
==Connection between variational calculus and the theory of cones.==
 
==Connection between variational calculus and the theory of cones.==
The scope of problems studied in variational calculus keeps increasing. In particular, there is much interest in functionals <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190142.png" /> of a very general type defined on sets <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190143.png" /> of elements of normed spaces. The concept of variation is difficult to introduce into problems of this kind, and another kind of apparatus has to be utilized. This proved to be the theory of cones in Banach spaces. Consider, for example, the problem of minimizing <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190144.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190145.png" /> is an element of a closed set <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190146.png" />. The cone <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190147.png" /> is the set of non-zero vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190148.png" /> that can be put into correspondence with a positive number <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190149.png" /> so that the vector <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190150.png" /> for all <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190151.png" />. The cone <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190152.png" /> is the set of non-zero vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190153.png" /> that can be put into correspondence with a positive <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190154.png" /> so that
+
The scope of problems studied in variational calculus keeps increasing. In particular, there is much interest in functionals $  J( x) $
 +
of a very general type defined on sets $  G _ {k} $
 +
of elements of normed spaces. The concept of variation is difficult to introduce into problems of this kind, and another kind of apparatus has to be utilized. This proved to be the theory of cones in Banach spaces. Consider, for example, the problem of minimizing $  f( x) $,  
 +
where $  x $
 +
is an element of a closed set $  G $.  
 +
The cone $  {\Gamma _ {G} } ( x _ {0} ) $
 +
is the set of non-zero vectors $  e $
 +
that can be put into correspondence with a positive number $  \lambda _ {e}  ^ {*} $
 +
so that the vector $  x = {x _ {0} } + {\lambda e } \in G $
 +
for all $  \lambda \in ( 0, {\lambda _ {e}  ^ {*} } ) $.  
 +
The cone $  {\Gamma _ {f} } ( x _ {0} ) $
 +
is the set of non-zero vectors $  e $
 +
that can be put into correspondence with a positive $  \lambda _ {e}  ^ {*} $
 +
so that
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190155.png" /></td> </tr></table>
+
$$
 +
f ( x _ {0} + \lambda e)  \geq  f ( x _ {0} )
 +
$$
  
for all <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190156.png" />. For <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190157.png" /> to realize the minimum of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190158.png" />, the intersection of the cones <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190159.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096190/v096190160.png" /> must be empty. This condition is just as elementary as that of vanishing of the variation, but not all the results which follow from it can be obtained by classical methods of variational calculus. It makes it possible to tackle much more complicated problems, such as in studies on extremal values of non-differentiable functionals [[#References|[6]]].
+
for all $  \lambda \in [ 0, {\lambda _ {e}  ^ {*} } ] $.  
 +
For $  x _ {0} $
 +
to realize the minimum of $  f( x) $,  
 +
the intersection of the cones $  {\Gamma _ {G} } ( x _ {0} ) $
 +
and $  {\Gamma _ {f} } ( x _ {0} ) $
 +
must be empty. This condition is just as elementary as that of vanishing of the variation, but not all the results which follow from it can be obtained by classical methods of variational calculus. It makes it possible to tackle much more complicated problems, such as in studies on extremal values of non-differentiable functionals [[#References|[6]]].
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  V.I. Smirnov,  "A course of higher mathematics" , '''4''' , Addison-Wesley  (1964)  (Translated from Russian)  {{MR|0182690}} {{MR|0182688}} {{MR|0182687}} {{MR|0177069}} {{MR|0168707}} {{ZBL|0122.29703}} {{ZBL|0121.25904}} {{ZBL|0118.28402}} {{ZBL|0117.03404}} </TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  M.A. Lavrent'ev,  L.A. Lyusternik,  "A course in variational calculus" , Moscow-Leningrad  (1950)  (In Russian)  {{MR|}} {{ZBL|}} </TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  G.A. Bliss,  "Lectures on the calculus of variations" , Chicago Univ. Press  (1947)  {{MR|0017881}} {{ZBL|0036.34401}} </TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  S.G. [S.G. Mikhlin] Michlin,  "Variationsmethoden der mathematischen Physik" , Akademie Verlag  (1962)  (Translated from Russian)  {{MR|0141248}} {{ZBL|0098.36909}} </TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  L.S. Pontryagin,  V.G. Boltayanskii,  R.V. Gamkrelidze,  E.F. Mishchenko,  "The mathematical theory of optimal processes" , Wiley  (1962)  (Translated from Russian)  {{MR|0166036}} {{MR|0166037}} {{MR|0166038}} {{ZBL|0102.32001}} </TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  B.N. Pshenichnyi,  "Necessary conditions of an extremum" , Interscience  (1962)  (Translated from Russian)  {{MR|}} {{ZBL|}} </TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top">  L.A. Lyusternik,  L.G. [L.G. Shnirel'man] Schnirelmann,  "Méthode topologiques dans les problèmes variationelles" , Hermann  (1934)  (Translated from Russian)  {{MR|}} {{ZBL|}} </TD></TR></table>
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  V.I. Smirnov,  "A course of higher mathematics" , '''4''' , Addison-Wesley  (1964)  (Translated from Russian)  {{MR|0182690}} {{MR|0182688}} {{MR|0182687}} {{MR|0177069}} {{MR|0168707}} {{ZBL|0122.29703}} {{ZBL|0121.25904}} {{ZBL|0118.28402}} {{ZBL|0117.03404}} </TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  M.A. Lavrent'ev,  L.A. Lyusternik,  "A course in variational calculus" , Moscow-Leningrad  (1950)  (In Russian)  {{MR|}} {{ZBL|}} </TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  G.A. Bliss,  "Lectures on the calculus of variations" , Chicago Univ. Press  (1947)  {{MR|0017881}} {{ZBL|0036.34401}} </TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  S.G. [S.G. Mikhlin] Michlin,  "Variationsmethoden der mathematischen Physik" , Akademie Verlag  (1962)  (Translated from Russian)  {{MR|0141248}} {{ZBL|0098.36909}} </TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  L.S. Pontryagin,  V.G. Boltayanskii,  R.V. Gamkrelidze,  E.F. Mishchenko,  "The mathematical theory of optimal processes" , Wiley  (1962)  (Translated from Russian)  {{MR|0166036}} {{MR|0166037}} {{MR|0166038}} {{ZBL|0102.32001}} </TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  B.N. Pshenichnyi,  "Necessary conditions of an extremum" , Interscience  (1962)  (Translated from Russian)  {{MR|}} {{ZBL|}} </TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top">  L.A. Lyusternik,  L.G. [L.G. Shnirel'man] Schnirelmann,  "Méthode topologiques dans les problèmes variationelles" , Hermann  (1934)  (Translated from Russian)  {{MR|}} {{ZBL|}} </TD></TR></table>
 
 
  
 
====Comments====
 
====Comments====
 
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  A.R.M. Noton,  "Introduction to variational methods in control engineering" , Pergamon  (1965)  {{MR|}} {{ZBL|0145.34101}} </TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  W.H. Fleming,  R.W. Rishel,  "Deterministic and stochastic optimal control" , Springer  (1975)  {{MR|0454768}} {{ZBL|0323.49001}} </TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  L.E. [L.E. El'sgol'ts] Elsgolc,  "Calculus of variations" , Pergamon  (1961)  (Translated from Russian)  {{MR|0344552}} {{MR|0279361}} {{MR|0209534}} {{MR|1532560}} {{MR|0133032}} {{MR|0098996}} {{MR|0051448}} {{ZBL|0101.32001}} </TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  R.T. Rockafellar,  "The theory of subgradients and its applications to problems of optimization. Convex and nonconvex functions" , Heldermann  (1981)  {{MR|0623763}} {{ZBL|0462.90052}} </TD></TR></table>
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  A.R.M. Noton,  "Introduction to variational methods in control engineering" , Pergamon  (1965)  {{MR|}} {{ZBL|0145.34101}} </TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  W.H. Fleming,  R.W. Rishel,  "Deterministic and stochastic optimal control" , Springer  (1975)  {{MR|0454768}} {{ZBL|0323.49001}} </TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  L.E. [L.E. El'sgol'ts] Elsgolc,  "Calculus of variations" , Pergamon  (1961)  (Translated from Russian)  {{MR|0344552}} {{MR|0279361}} {{MR|0209534}} {{MR|1532560}} {{MR|0133032}} {{MR|0098996}} {{MR|0051448}} {{ZBL|0101.32001}} </TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  R.T. Rockafellar,  "The theory of subgradients and its applications to problems of optimization. Convex and nonconvex functions" , Heldermann  (1981)  {{MR|0623763}} {{ZBL|0462.90052}} </TD></TR></table>

Revision as of 08:27, 6 June 2020


calculus of variations

The branch of mathematics in which one studies methods for obtaining extrema of functionals which depend on the choice of one or several functions subject to constraints of various kinds (phase, differential, integral, etc.) imposed on these functions. This is the framework of the problems which are still known as problems of classical variational calculus. The term "variational calculus" has a broader sense also, viz., a branch of the theory of extremal problems in which the extrema are studied by the "method of variations" (cf. Variation), i.e. by the method of small perturbations of the arguments and functionals; such problems, in the wider sense, are opposite to discrete optimization problems.

The following scheme describes a rather wide range of problems of classical variational calculus. It is required to minimize the functional

$$ \tag{1 } J ( x) = \int\limits _ { T } f ( t, x ( t), \dot{x} ( t)) dt, $$

where $ T \subset \mathbf R ^ {m} $, $ t = ( t _ {1} \dots t _ {m} ) $, $ x = ( x ^ {1} \dots x ^ {n} ) $,

$$ \dot{x} = \left ( \frac{\partial x ^ {i} }{\partial t _ {0} } \right ) ,\ \ f: \mathbf R ^ {m} \times \mathbf R ^ {n} \times \mathbf R ^ {mn} \rightarrow \mathbf R , $$

subject to the constraints described by equations of the type

$$ \tag{3 }
J ( x)  = \int\limits _ { t _ {0} } ^ { t _ {1} } L ( t, x, \dot{x} )  dt  \rightarrow  \inf ; \ \
x ( t _ {0} )  = x _ {0} ,\ \
x ( t _ {1} )  = x _ {1} .
$$

This type includes the [[Brachistochrone|brachistochrone]] problem, i.e. the problem of the curve of fastest descent, which is usually considered to be the starting point of the history of the calculus of variations. The theoretical foundations of classical variational calculus were laid in the 18th century by L. Euler and J.L. Lagrange, who also discovered the important connections of this discipline with mechanics and physics. Many specific problems (on geodesics, surfaces of revolution, isoperimetric problems, etc.) were solved during the first stage of the development of the theory, mainly owing to the work of G. Leibniz, Jacob and Johann Bernoulli, Euler and Lagrange.

Variational calculus deals with algorithmic methods for finding extrema, methods of arriving at necessary and sufficient conditions, conditions which ensure the existence of an extremum, qualitative problems, etc. Direct methods occupy an important place among the algorithmic methods for finding extrema.

==Direct methods.==
Euler (1768) proposed a method for the approximate (numerical) solution of problems in variational calculus, which received the name of Euler's method of polygonal lines. It marked the beginning of the numerical solution of extremum problems and was the first representative of a large class of methods known as direct methods of variational calculus. These methods are based on reducing the problem of finding the extremum of a functional to that of finding the extremum of a function of several variables.

Problem (3) may be solved by Euler's method of polygonal lines as follows. The interval $ [ t _ {0} , t _ {1} ] $ is subdivided into $ N $ equal parts by the points $ \tau _ {0} = t _ {0} $, $ \tau _ {1} = t _ {0} + \tau , \dots, \tau _ {N} = t _ {0} + N \tau = t _ {1} $. Let the values of the function at these points be $ x _ {0} , x _ {1} , \dots, x _ {N} $, respectively. Each set of points $ ( \tau _ {0} , x _ {0} ), \dots, ( \tau _ {N} , x _ {N} ) $ defines a polygonal line. The problem may now be formulated as follows: out of all polygonal lines connecting the points $ ( \tau _ {0} , x _ {0} ) $ and $ ( \tau _ {N} , x _ {N} ) $, find the line for which the functional (3) assumes an extremal value. The value of the derivative $ \dot{x} $ on the interval $ [ \tau _ {i} , \tau _ {i + 1} ] $ is $ \dot{x} _ {i} = ( x _ {i + 1} - x _ {i} )/ \tau $. The functional $ J ( x) $ thus becomes a function of a finite number of variables $ x _ {i} $,

$$ J ( x)  \sim  J ( x _ {0} , \dots, x _ {N} ), $$

and problem (3) is reduced to the problem of finding the extremum of the function $ J ( x _ {0} , \dots, x _ {N} ) $.
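As an illustration, here is a minimal sketch of Euler's method of polygonal lines in Python; the particular functional $ J ( x) = \int _ {0} ^ {1} \dot{x} {} ^ {2} dt $ with $ x ( 0) = 0 $, $ x ( 1) = 1 $ (whose exact minimizer is the straight line $ x ( t) = t $), the mesh size and the use of scipy.optimize.minimize are all illustrative choices, not part of the classical formulation.

<pre>
import numpy as np
from scipy.optimize import minimize

# Simplest problem: minimize J(x) = ∫₀¹ ẋ² dt with x(0) = 0, x(1) = 1.
# The Euler equation gives ẍ = 0, so the exact minimizer is x(t) = t.

N = 20                                   # number of subintervals
tau = 1.0 / N                            # mesh width

def J(interior):
    # Polygonal line through the points (τ_i, x_i); the end points are fixed.
    x = np.concatenate(([0.0], interior, [1.0]))
    xdot = np.diff(x) / tau              # slope ẋ_i on each [τ_i, τ_{i+1}]
    return np.sum(xdot**2) * tau         # rectangle rule for the integral

res = minimize(J, np.zeros(N - 1))       # extremum of a function of N - 1 variables
print("J at the optimum:", res.fun)      # ≈ 1, the exact minimal value
print("deviation from x(t) = t:",
      np.max(np.abs(res.x - np.linspace(tau, 1 - tau, N - 1))))  # ≈ 0
</pre>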
In order that the Euler polygonal line realizing the extremum of this function approximate the solution of problem (3) with high accuracy, the number $ N $ should, as a rule, be sufficiently large. The labour involved in the computations needed to find the extremum of the resulting function is so large that "manual" computation is very difficult. For this reason direct methods were ruled out in basic studies of variational calculus for a long time; they began to be much more extensively studied in the 20th century. At first, new methods were proposed to reduce the problem to finding the extremum of a function of a finite number of variables. These ideas may be clarified by taking, as an example, the minimization of the functional (3) subject to the condition

$$ x ( t _ {0} )  = x ( t _ {1} )  = 0. $$

One seeks the solution of this problem in the form

$$ x ( t)  = \sum _ {n = 1 } ^ { N }  a _ {n} \phi _ {n} ( t), $$

where $ \{ \phi _ {n} ( t) \} $ is some system of functions satisfying the conditions $ \phi _ {i} ( t _ {0} ) = \phi _ {i} ( t _ {1} ) = 0 $, $ i = 1, \dots, N $. The functional $ J ( x) $ becomes a function of the coefficients, $ J ( x) \sim J ( a _ {1} , \dots, a _ {N} ) $, and the problem is reduced to finding the extremum of this function of $ N $ variables. Under certain conditions imposed on the system of functions $ \{ \phi _ {n} \} $, the solution of the problem tends to that of problem (3) as $ N \rightarrow \infty $ (cf. [[Galerkin method|Galerkin method]]).
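The following sketch illustrates this reduction numerically; the concrete functional $ J ( x) = \int _ {0} ^ {1} ( \dot{x} {} ^ {2} + x ^ {2} - 2 t x )  dt $, the sine basis $ \phi _ {n} ( t) = \sin  n \pi t $ and the quadrature rule are illustrative assumptions. The Euler equation here is $ \ddot{x} = x - t $, with exact solution $ x ( t) = t - \sinh  t / \sinh  1 $.

<pre>
import numpy as np
from scipy.optimize import minimize

# Ritz method for J(x) = ∫₀¹ (ẋ² + x² − 2tx) dt, x(0) = x(1) = 0.
# Exact minimizer: x(t) = t − sinh(t)/sinh(1).

N = 5                                    # number of basis functions
t = np.linspace(0.0, 1.0, 2001)          # quadrature grid

def J(a):
    # x(t) = Σ aₙ sin(nπt): each basis function vanishes at t = 0 and t = 1.
    n = np.arange(1, N + 1)[:, None]
    x = (a[:, None] * np.sin(n * np.pi * t)).sum(axis=0)
    xdot = (a[:, None] * n * np.pi * np.cos(n * np.pi * t)).sum(axis=0)
    return np.trapz(xdot**2 + x**2 - 2 * t * x, t)

res = minimize(J, np.zeros(N))           # extremum over the N coefficients
n = np.arange(1, N + 1)[:, None]
x_ritz = (res.x[:, None] * np.sin(n * np.pi * t)).sum(axis=0)
x_exact = t - np.sinh(t) / np.sinh(1.0)
print("max error:", np.max(np.abs(x_ritz - x_exact)))   # small; decreases as N grows
</pre>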
==The method of variations.==
A second direction of study is the derivation of necessary and sufficient conditions to be satisfied by a function $ x ( t) $ realizing the extremum of the functional $ J ( x) $. The principal method for finding necessary conditions is the method of variations. Suppose that some function $ x ( t) $ has been constructed in one way or another. How can one test whether or not this function is a solution of the variational problem (3)? An answer to this question was first given by Euler in 1744. The answer as formulated below involves the concept, introduced by Lagrange in 1762, of the variation $ \delta J $ of the functional $ J $ (hence the name "variational calculus"; cf. [[Variation|Variation]]; [[Variation of a functional|Variation of a functional]]). For the simplest problem of variational calculus this variation is defined as

$$
\delta J ( x, h)  = \int\limits _ { t _ {0} } ^ { t _ {1} } \left . \left (
\frac{\partial L }{\partial x }
- {\frac{d}{dt} }
\frac{\partial L }{\partial \dot{x} }
\right ) \right | _ {x ( t) } h ( t)  dt,
$$

where $ h ( t) $ is an arbitrary smooth function satisfying the conditions $ h ( t _ {0} ) = h ( t _ {1} ) = 0 $. The condition $ \delta J = 0 $ is necessary for the function $ x ( t) $ to realize an extremum of the functional (3). Hence, and also from the expression for the variation $ \delta J $, one may conclude that for the function $ x ( t) $ to yield an extremum of (3) it must satisfy the following second-order differential equation:

$$ \tag{4 }
\frac{\partial L }{\partial x }
- {\frac{d}{dt} }
\frac{\partial L }{\partial \dot{x} }
 =  0.
$$

This equation is known as the [[Euler equation|Euler equation]], and its integral curves are called the extremals of the variational problem under consideration. A function $ x ( t) $ at which $ J ( x) $ attains an extremum necessarily represents a solution of the boundary value problem $ x ( t _ {0} ) = x _ {0} $, $ x ( t _ {1} ) = x _ {1} $ for equation (4). One thus obtains a second method for solving the extremal problem: the boundary value problem for the Euler equation is solved (in regular cases the number of such solutions is finite), after which each of the solutions obtained is tested against the supplementary conditions, in order to find out which of these curves solve the initial problem. A significant drawback of this method, however, is that there are no universal methods for solving boundary value problems for (non-linear) ordinary differential equations.
Variational problems with movable ends are encountered very often. For instance, in the simplest problem the end points $ ( t _ {0} , x ( t _ {0} )) $ and $ ( t _ {1} , x ( t _ {1} )) $ may move along given curves. In problems with movable ends the condition $ \delta J = 0 $ implies supplementary conditions on the movable ends, the so-called transversality conditions (cf. [[Transversality condition|Transversality condition]]), which, in conjunction with the boundary conditions, yield a closed system of conditions for the boundary value problem.

The principal results concerning the simplest problem of variational calculus carry over to the general case of functionals of the type

$$ J ( x)  = \int\limits _ { t _ {0} } ^ { t _ {1} } F \left ( x ( t), {\frac{dx}{dt} } , \dots,
\frac{d ^ {s} x }{dt ^ {s} }
\right )  dt, $$

where $ x ( t) $ is a vector function of arbitrary dimension [[#References|[3]]].

==Lagrange's problem.==
Euler and Lagrange also studied problems on a conditional extremum. The simplest class of problems of this type is the class of so-called isoperimetric problems (cf. [[Isoperimetric problem|Isoperimetric problem]]). For the case of one-dimensional $ t $, Lagrange stated the class of problems (1), (2) and obtained an analogue of Euler's equations involving the so-called Lagrange multipliers; such an analogue may also be obtained in the most general case of problems (1), (2).

The Lagrange problem assumed special importance in the mid-20th century in connection with the creation of the mathematical theory of optimal control (cf. [[Optimal control, mathematical theory of|Optimal control, mathematical theory of]]). Below, the main results concerning the Lagrange problem are given in terms of this theory; they were obtained by L.S. Pontryagin and his school. Consider the following case: in problems (1), (2), $ t $ is one-dimensional and the system $ \phi ( t, x, \dot{x} ) = 0 $ can be solved for some of the components of $ \dot{x} $. The resulting problem is that of minimizing the functional

$$ \tag{5 } J ( x, u)  = \int\limits _ { t _ {0} } ^ { t _ {1} } F ( t, x, u)  dt $$

under the differential constraint

$$ \tag{6 } \dot{x}  = f ( t, x, u) $$

and the boundary conditions

$$ \tag{7 } ( x ( t _ {0} ), x ( t _ {1} ))  \in  E. $$

In equations (5)–(7), $ x = ( x ^ {1} , \dots, x ^ {n} ) $ is a vector function known as the phase vector, $ u = ( u ^ {1} , \dots, u ^ {m} ) $ is a vector function known as the control, $ F: \mathbf R \times \mathbf R ^ {n} \times \mathbf R ^ {m} \rightarrow \mathbf R $, $ f: \mathbf R \times \mathbf R ^ {n} \times \mathbf R ^ {m} \rightarrow \mathbf R ^ {n} $, and $ E \subset \mathbf R ^ {2n} $. The fixed conditions of problem (3) may serve as an example of boundary conditions of the type (7). In optimal control problems, certain "non-classical" conditions such as

$$ \tag{8 } u ( t)  \in  U  \subset  \mathbf R ^ {m} $$

are imposed in addition to conditions (6) and (7).

==Weak and strong extrema.==
Two topologies are usually distinguished in variational calculus, a strong and a weak one, and correspondingly one defines strong and weak extrema. For instance, as applied to problem (3), one says that the curve $ x _ {0} ( t) $ realizes a weak minimum if there exists an $ \epsilon > 0 $ such that $ J ( x) \geq J ( x _ {0} ) $ for all continuously-differentiable functions $ x ( t) $ satisfying the conditions $ x ( t _ {0} ) = x _ {0} ( t _ {0} ) $, $ x ( t _ {1} ) = x _ {0} ( t _ {1} ) $ and

$$ \max _ {t \in [ t _ {0} , t _ {1} ] }  | x ( t) - x _ {0} ( t) | + \max _ {t \in [ t _ {0} , t _ {1} ] }  | \dot{x} ( t) - \dot{x} _ {0} ( t) |  < \epsilon . $$

In other words, this fixes the proximity not only of the phase variables, but also of the speeds (controls). One says that a function $ x _ {0} ( t) $ gives a strong minimum if there exists an $ \epsilon > 0 $ such that $ J ( x) \geq J ( x _ {0} ) $ for all admissible absolutely-continuous functions $ x ( t) $ (for which $ J ( x) $ exists) satisfying the conditions $ x ( t _ {0} ) = x _ {0} ( t _ {0} ) $, $ x ( t _ {1} ) = x _ {0} ( t _ {1} ) $ and

$$ \max _ {t \in [ t _ {0} , t _ {1} ] }  | x ( t) - x _ {0} ( t) |  \leq  \epsilon . $$

This condition merely expresses proximity of the phase variables.
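That the distinction is essential is shown by the following classical example (added here for illustration): for

$$ J ( x)  = \int\limits _ { 0 } ^ { 1 } \dot{x} {} ^ {3}  dt ,\ \ x ( 0)  = 0 ,\ \ x ( 1)  = 1 , $$

the extremal is $ x _ {0} ( t) = t $. For $ x = x _ {0} + h $ with $ h ( 0) = h ( 1) = 0 $ one has

$$ J ( x _ {0} + h ) - J ( x _ {0} )  = \int\limits _ { 0 } ^ { 1 } ( 3 \dot{h} {} ^ {2} + \dot{h} {} ^ {3} )  dt  \geq  0 \ \ \textrm{ if }  \max  | \dot{h} | \leq 1 , $$

so $ x _ {0} $ realizes a weak minimum; but along sawtooth curves which are uniformly close to $ x _ {0} $ and whose slopes alternate between a large negative value and a compensating positive one, $ J $ can be made arbitrarily negative, so $ x _ {0} $ does not realize a strong minimum.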
If $ x _ {0} ( t) $ realizes a strong extremum, it realizes a fortiori a weak extremum as well; accordingly, conditions sufficient for a strong extremum are also sufficient for a weak one. Conversely, if a weak extremum is absent, so is a strong one, i.e. necessary conditions for a weak extremum are also necessary for a strong extremum.

==Necessary and sufficient conditions for an extremum.==
Euler's equation, which was discussed above, is a necessary condition for a weak extremum. In the late 1950s, Pontryagin postulated a maximum principle for the problem (5)–(8) which is a necessary condition for a strong extremum (cf. [[Pontryagin maximum principle|Pontryagin maximum principle]]). This maximum principle states that if a pair $ ( x, u) $ supplies a strong extremum in the problem (5)–(8), then there exist a vector function $ \psi $ and a number $ \lambda _ {0} $ such that the relations

$$ \tag{9 }
\left .
\begin{array}{c}
\dot \psi   = -
\frac{\partial H }{\partial x } ,  \\
H ( t, x ( t), \psi ( t), u ( t), \lambda _ {0} )  = \max _ {v \in U }  H ( t, x ( t), \psi ( t), v, \lambda _ {0} )
\end{array}
\right \}
$$

are satisfied for the Hamilton function $ H ( t, x, \psi , u , \lambda _ {0} ) = ( \psi , f ) - \lambda _ {0} F $.
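As a minimal numerical sketch of how the maximum principle is used (the specific problem, minimizing $ J = \int _ {0} ^ {1} u ^ {2} /2  dt $ subject to $ \dot{x} = u $, $ x ( 0) = 0 $, $ x ( 1) = 1 $, and the use of scipy are illustrative assumptions): here the maximum condition gives $ u = \psi $, the adjoint equation gives $ \dot \psi = 0 $, and the unknown initial value $ \psi ( 0) $ is found by shooting.

<pre>
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Minimize J = ∫₀¹ u²/2 dt subject to ẋ = u, x(0) = 0, x(1) = 1 (λ₀ = 1).
# Hamilton function H = ψu − u²/2; the maximum over u gives u = ψ,
# and the adjoint equation is ψ̇ = −∂H/∂x = 0.

def terminal_mismatch(psi0):
    # Integrate the canonical system ẋ = ψ, ψ̇ = 0 from the guess ψ(0) = psi0.
    sol = solve_ivp(lambda t, y: [y[1], 0.0], (0.0, 1.0), [0.0, psi0],
                    rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - 1.0            # defect in the end condition x(1) = 1

psi0 = brentq(terminal_mismatch, -10.0, 10.0)   # shooting on ψ(0)
print("psi(0) =", psi0)                  # → 1.0, i.e. u(t) ≡ 1 and x(t) = t
print("J =", 0.5 * psi0**2)              # → 0.5, the optimal value
</pre>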

If Pontryagin's maximum principle is applied to problem (3), it follows that a necessary condition for a curve $ x ( t) $ to yield a strong minimum is that it be an extremal (i.e. satisfy Euler's equation (4)) and satisfy the necessary Weierstrass condition (cf. [[Weierstrass conditions (for a variational extremum)|Weierstrass conditions (for a variational extremum)]])

$$ \tag{10 } {\mathcal E} ( t, x ( t), \dot{x} ( t), \xi ) \geq 0 \ \ \textrm{ for } \textrm{ all } t \in [ t _ {0} , t _ {1} ],\ \xi \in \mathbf R , $$

where

$$ {\mathcal E} ( t, x, \dot{x} , \xi )  = L ( t, x, \xi ) - L ( t, x, \dot{x} ) - ( \xi - \dot{x} ) L _ {\dot{x} }  ( t, x, \dot{x} ) $$

is the so-called Weierstrass function.
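For instance (an illustrative computation, added here), for $ L = \dot{x} {} ^ {2} $ one finds

$$ {\mathcal E} ( t, x, \dot{x} , \xi )  = \xi ^ {2} - \dot{x} {} ^ {2} - 2 \dot{x} ( \xi - \dot{x} )  = ( \xi - \dot{x} ) ^ {2}  \geq  0 , $$

so that every extremal satisfies (10); for $ L = \dot{x} {} ^ {3} $, on the other hand, $ {\mathcal E} = ( \xi - \dot{x} ) ^ {2} ( \xi + 2 \dot{x} ) $ changes sign, and condition (10) fails for $ \xi < - 2 \dot{x} $, in agreement with the absence of a strong minimum in the example above.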

In addition to conditions of the type (4) and (10), which are local (i.e. can be verified at each point of the extremal), there is also a global necessary condition, related to the behaviour of the family of extremals in a neighbourhood of the given extremal (cf. [[Jacobi condition|Jacobi condition]]). For problem (3), Jacobi's condition may be formulated as follows: for the extremal $ x ( t) $ to supply a minimum, it is necessary that the solution of the Jacobi equation

$$ \tag{11 }
- {\frac{d}{dt} } \left ( \left .
\frac{\partial ^ {2} L }{\partial \dot{x} {} ^ {2} }
\right | _ {x ( t) } {\frac{d}{dt} } h ( t) \right ) + \left . \left (
\frac{\partial ^ {2} L }{\partial x ^ {2} }
- {\frac{d}{dt} }
\frac{\partial ^ {2} L }{\partial x \partial \dot{x} }
\right ) \right | _ {x ( t) } h ( t)  = 0
$$

with the boundary conditions $ h ( t _ {0} ) = 0 $, $ \dot{h} ( t _ {0} ) \neq 0 $ has no zeros in the interval $ ( t _ {0} , t _ {1} ) $. The zeros of the solution $ h ( t) $ of equation (11) are said to be points conjugate to the point $ t _ {0} $. Thus, Jacobi's condition means that the interval $ ( t _ {0} , t _ {1} ) $ contains no points conjugate to $ t _ {0} $.
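A worked illustration (added here): for $ L = \dot{x} {} ^ {2} - x ^ {2} $ one has $ \partial ^ {2} L / \partial \dot{x} {} ^ {2} = 2 $, $ \partial ^ {2} L / \partial x ^ {2} = - 2 $, $ \partial ^ {2} L / \partial x \partial \dot{x} = 0 $, so that (11) becomes

$$ \ddot{h} + h  = 0 ,\ \ h ( t _ {0} )  = 0 ,\ \ \dot{h} ( t _ {0} )  \neq  0 , $$

with solution $ h ( t) = c \sin ( t - t _ {0} ) $. The first conjugate point lies at $ t _ {0} + \pi $: extremals on intervals with $ t _ {1} - t _ {0} < \pi $ satisfy Jacobi's condition, while for $ t _ {1} - t _ {0} > \pi $ the interval contains a conjugate point and a minimum is excluded.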

Necessary conditions for a weak minimum, $ \delta J = 0 $, $ \delta ^ {2} J \geq 0 $, are strict analogues of the minimum conditions $ f ^ { \prime } ( x) = 0 $, $ f ^ { \prime\prime } ( x) \geq 0 $ for functions of one variable. If the strong Legendre condition is met, the Jacobi condition is a necessary condition for the second variation to be non-negative. This leads to the following result: for a function $ x ( t) $ to realize a weak minimum of the functional (3) it is necessary: a) that $ x ( t) $ satisfies Euler's equation; b) that the Legendre condition $ ( \partial ^ {2} L / \partial \dot{x} {} ^ {2} ) \mid _ {x ( t) } \geq 0 $ is satisfied; and c) that the interval $ ( t _ {0} , t _ {1} ) $ contains no points conjugate to $ t _ {0} $ (if the strong Legendre condition $ ( \partial ^ {2} L / \partial \dot{x} {} ^ {2} ) \mid _ {x ( t) } > 0 $ is satisfied).

Sufficient conditions for a weak minimum are as follows: the function $ x ( t) $ must be an extremal along which the strong Legendre condition holds, and the semi-interval $ ( t _ {0} , t _ {1} ] $ must contain no points conjugate to $ t _ {0} $. For a curve $ x ( t) $ to yield a strong minimum it is sufficient that the sufficient Weierstrass condition, as well as the sufficient conditions for a weak minimum just formulated, be satisfied.

==Problems in optimal control.==

One of the principal directions in the development of the calculus of variations is that of non-classical problems much like the problem (5)–(8) formulated above. Problems of this kind are of major practical significance. For instance, let (6) describe the motion of some dynamic object, say a spacecraft. The control, the vector $ u $, is the thrust of its motor. The initial location of the spacecraft is some orbit, while its final position is an orbit of a different radius. The functional $ J $ describes the fuel consumption involved in the performance of such a maneuver. The problem (5)–(7) may then be stated as follows: determine the law governing the variation of the thrust of the motor required to perform the transition from one orbit to the other within a given period of time so as to minimize the fuel consumption. This must be done subject to the control constraints: the thrust of the motor must not exceed a certain given value, and the turning angle is also bounded. Thus, the components of the thrust, $ u ^ {i} $, $ i = 1, 2, 3 $, are in this case subject to the constraints

$$ a _ {i} ^ {-} \leq u ^ {i} \leq a _ {i} ^ {+} , $$

where $ a _ {i} ^ {-} $ and $ a _ {i} ^ {+} $ are given numbers.

A large number of problems can be reduced to the Lagrange problem subject to a supplementary restriction of the type (8). Such problems are known as problems of optimal control. The theory of optimal control required a special apparatus of its own, and Pontryagin's maximum principle may be said to be such an apparatus.

Another approach to these problems in optimal control theory is also possible. Let $ S( t, x) $ be the value of the functional (5) along an optimal solution from a point $ ( t _ {0} , x _ {0} ) $ to a point $ ( t, x) $. For the function $ u( t) $ to be an optimal control in such a case it is necessary (and also sufficient in certain cases) that the partial differential equation

$$
\frac{\partial S }{\partial t } + \min _ {u \in U } \left ( \left (
\frac{\partial S }{\partial x } \right ) ^ \prime  f ( t, x, u) - F ( t, x, u ) \right )  = 0 ,
$$

known as Bellman's equation (cf. [[Dynamic programming|Dynamic programming]]), holds. In problems of classical variational calculus the function $ S ( t, x) $ (the action integral) must satisfy the Hamilton–Jacobi equation

$$ \frac{\partial S }{\partial t } + H \left ( t, x, \frac{\partial S }{\partial x } \right ) = 0, $$

where $ H $ is the Hamilton function; in problem (3) this function is the Legendre transform with respect to $ \dot{x} $ of the integrand $ L ( t, x, \dot{x} ) $. The Hamilton–Jacobi theory is a powerful tool in the study of numerous variational problems connected with classical mechanics.
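A worked illustration (added here): for $ L = \dot{x} {} ^ {2} /2 $ the Legendre transform is

$$ H ( t, x, p)  = \max _ {\dot{x} }  ( p \dot{x} - \dot{x} {} ^ {2} /2 )  = p ^ {2} /2 , $$

so the Hamilton–Jacobi equation reads

$$
\frac{\partial S }{\partial t } +
\frac{1}{2} \left (
\frac{\partial S }{\partial x } \right ) ^ {2}  = 0 .
$$

The action along the extremal (a straight line) from $ ( t _ {0} , x _ {0} ) $ to $ ( t, x) $,

$$ S ( t, x)  =
\frac{( x - x _ {0} ) ^ {2} }{2 ( t - t _ {0} ) } ,
$$

satisfies this equation, since $ \partial S / \partial t = - ( x - x _ {0} ) ^ {2} /2 ( t - t _ {0} ) ^ {2} $ and $ \partial S / \partial x = ( x - x _ {0} )/( t - t _ {0} ) $.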

The connection between variational calculus and the theory of partial differential equations was discovered as early as the 19th century. It was shown by P.G.L. Dirichlet that solving boundary value problems for the Laplace equation is equivalent to solving some variational problem. Consider, for example, a given linear operator equation

$$ \tag{12 } A x = f, $$

where $ x( \xi , \eta ) $ is some function of two independent variables which vanishes on a closed curve $ \Gamma $. Subject to assumptions which are natural in a certain class of physical problems, the problem of finding the solution of equation (12) is equivalent to finding the minimum of the functional

$$ \tag{13 } J ( x)  = {\int\limits \int\limits } _ \Omega  ( A x) x  d \xi  d \eta - 2 {\int\limits \int\limits } _ \Omega  f x  d \xi  d \eta , $$

where $ \Omega $ is the domain bounded by the curve $ \Gamma $. Equation (12) is in this case the Euler equation for the functional (13).

The reduction of problem (12) to (13) is possible if, for example, $ A $ is a positive-definite self-adjoint operator. The connection between problems involving partial differential equations and variational problems makes it possible, in particular, to establish the truth of various existence and uniqueness theorems; it played an important part in the crystallization of the concept of a generalized solution. Such a reduction is very important in numerical mathematics as well, since direct methods of variational calculus can be employed to solve boundary value problems in the theory of partial differential equations.
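A minimal sketch of this reduction (the one-dimensional analogue, the choice $ A = - d ^ {2} /d \xi ^ {2} $ with zero boundary values, $ f \equiv 1 $, and the finite-difference discretization are all illustrative assumptions): minimizing the discrete version of (13) yields the same vector as solving the linear system (12).

<pre>
import numpy as np
from scipy.optimize import minimize

# 1D analogue of (12)-(13): A = −d²/dξ² on [0, 1] with zero boundary values, f ≡ 1.
# The exact solution of Ax = f is x(ξ) = ξ(1 − ξ)/2.

N = 20
h = 1.0 / N
xi = np.linspace(h, 1 - h, N - 1)        # interior grid points

# Three-point finite-difference matrix for A: symmetric and positive definite.
A = (2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2
f = np.ones(N - 1)

def J(x):
    # Discrete version of the functional (13): (Ax, x) − 2(f, x).
    return x @ A @ x - 2 * f @ x

x_min = minimize(J, np.zeros(N - 1)).x   # direct method: minimize the functional
x_lin = np.linalg.solve(A, f)            # solve equation (12) directly
print(np.max(np.abs(x_min - x_lin)))     # ≈ 0 (up to optimizer tolerance)
print(np.max(np.abs(x_lin - xi * (1 - xi) / 2)))   # ≈ 0: the FD scheme is exact here
</pre>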

==Qualitative methods.==

These methods make it possible to solve problems on the existence and uniqueness of solutions, as well as on the qualitative features of (families of) extremals. It was established in the 20th century that the number of solutions of a variational problem depends on the properties of the space on which the functional is defined. For instance, if the functional $ J $ is defined on all smooth curves on a torus which connect two given points, or on all closed curves on a surface which is topologically equivalent to a torus, then the number of critical elements (curves on which the variation $ \delta J = 0 $) is infinite in both cases. L.A. Lyusternik and L.G. Shnirel'man [[#References|[7]]] showed that on every surface which is topologically equivalent to a sphere there exist at least three closed non-self-intersecting geodesics of different lengths; if the lengths of only two of these geodesics are equal, there exists an infinite number of closed geodesics of equal length. Such problems indicate a close connection between variational calculus and the qualitative theory of differential equations and topology. The development of functional analysis made a substantial contribution to the study of qualitative methods. See also [[Variational calculus in the large|Variational calculus in the large]].

==Connection between variational calculus and the theory of cones.==

The scope of problems studied in variational calculus keeps increasing. In particular, there is much interest in functionals $ J ( x) $ of a very general type defined on sets $ G _ {k} $ of elements of normed spaces. The concept of a variation is difficult to introduce for problems of this kind, and a different apparatus has to be used; this proved to be the theory of cones in Banach spaces. Consider, for example, the problem of minimizing $ f ( x) $, where $ x $ is an element of a closed set $ G $. The cone $ \Gamma _ {G} ( x _ {0} ) $ is the set of non-zero vectors $ e $ to each of which corresponds a positive number $ \lambda _ {e} ^ {*} $ such that $ x = x _ {0} + \lambda e \in G $ for all $ \lambda \in ( 0, \lambda _ {e} ^ {*} ) $. The cone $ \Gamma _ {f} ( x _ {0} ) $ is the set of non-zero vectors $ e $ (the directions of decrease) to each of which corresponds a positive $ \lambda _ {e} ^ {*} $ such that

$$ f ( x _ {0} + \lambda e)  < f ( x _ {0} ) $$

for all $ \lambda \in ( 0, \lambda _ {e} ^ {*} ) $. For $ x _ {0} $ to realize the minimum of $ f ( x) $, the intersection of the cones $ \Gamma _ {G} ( x _ {0} ) $ and $ \Gamma _ {f} ( x _ {0} ) $ must be empty. This condition is just as elementary as the vanishing of the variation, but not all the results which follow from it can be obtained by the classical methods of variational calculus. It makes it possible to tackle much more complicated problems, such as the study of extrema of non-differentiable functionals [[#References|[6]]].
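A finite-dimensional illustration (added here): let $ f ( x) = x _ {1} $ on the half-plane $ G = \{ x \in \mathbf R ^ {2} : x _ {1} \geq 0 \} $, with $ x _ {0} = 0 $. Then

$$ \Gamma _ {G} ( x _ {0} )  = \{ e \neq 0 : e _ {1} \geq 0 \} ,\ \ \Gamma _ {f} ( x _ {0} )  = \{ e \neq 0 : e _ {1} < 0 \} , $$

since $ x _ {0} + \lambda e \in G $ for small $ \lambda > 0 $ exactly when $ e _ {1} \geq 0 $, while $ f ( x _ {0} + \lambda e ) = \lambda e _ {1} < 0 = f ( x _ {0} ) $ exactly when $ e _ {1} < 0 $. The two cones are disjoint, in agreement with the necessary condition; at a non-minimal interior point, on the other hand, $ \Gamma _ {G} $ consists of all non-zero vectors, $ \Gamma _ {f} $ is non-empty, and the intersection is therefore non-empty.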

====References====

[1] V.I. Smirnov, "A course of higher mathematics" , 4 , Addison-Wesley (1964) (Translated from Russian) MR0182690 MR0182688 MR0182687 MR0177069 MR0168707 Zbl 0122.29703 Zbl 0121.25904 Zbl 0118.28402 Zbl 0117.03404
[2] M.A. Lavrent'ev, L.A. Lyusternik, "A course in variational calculus" , Moscow-Leningrad (1950) (In Russian)
[3] G.A. Bliss, "Lectures on the calculus of variations" , Chicago Univ. Press (1947) MR0017881 Zbl 0036.34401
[4] S.G. [S.G. Mikhlin] Michlin, "Variationsmethoden der mathematischen Physik" , Akademie Verlag (1962) (Translated from Russian) MR0141248 Zbl 0098.36909
[5] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, "The mathematical theory of optimal processes" , Wiley (1962) (Translated from Russian) MR0166036 MR0166037 MR0166038 Zbl 0102.32001
[6] B.N. Pshenichnyi, "Necessary conditions of an extremum" , Interscience (1962) (Translated from Russian)
[7] L.A. Lyusternik, L.G. [L.G. Shnirel'man] Schnirelmann, "Méthodes topologiques dans les problèmes variationnels" , Hermann (1934) (Translated from Russian)

This article was adapted from an original article by N.N. Moiseev (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.