Optimality, sufficient conditions for
Conditions which ensure the optimality of a given solution of a problem of the variational calculus in a chosen class of comparison curves.

Sufficient conditions for optimality of a weak minimum (see [1]): For a curve $\overline{y}(x)$ to provide a weak minimum for the functional

$$ \tag{1} J(y) = \int\limits_{x_0}^{x_1} F(x, y, y') \, dx, $$

given the boundary conditions

$$ y(x_0) = y_0, \qquad y(x_1) = y_1, $$

it is sufficient that the following conditions are fulfilled.

1) The curve $\overline{y}(x)$ must be an extremal, i.e. it must satisfy the Euler equation

$$ F_y - \frac{d}{dx} F_{y'} = 0. $$

2) Along the curve $\overline{y}(x)$, including its ends, the strong Legendre condition

$$ F_{y'y'}(x, y, y') > 0 $$

must be fulfilled.

3) The curve $\overline{y}(x)$ must satisfy the strong Jacobi condition, which requires that the solution of the Jacobi equation

$$ \tag{2} \left( F_{yy} - \frac{d}{dx} F_{yy'} \right) \eta - \frac{d}{dx} \left( F_{y'y'} \eta' \right) = 0 $$

with initial conditions

$$ \eta(x_0) = 0, \qquad \eta'(x_0) = 1, $$

must not vanish at any point of the half-open interval $x_0 < x \leq x_1$.

The coefficients of the Jacobi equation (2), which is a second-order linear differential equation, are computed along the extremal $\overline{y}(x)$ and are therefore known functions of $x$.
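
As a simple illustration, consider the integrand $F = (y')^2$. All three conditions can be checked directly: the Euler equation reduces to $y'' = 0$, so the extremals are the straight lines $\overline{y}(x) = c_1 x + c_2$; the strong Legendre condition holds, since $F_{y'y'} = 2 > 0$; and the Jacobi equation (2) becomes

$$ - \frac{d}{dx} \left( 2 \eta' \right) = 0, \qquad \eta(x_0) = 0, \quad \eta'(x_0) = 1, $$

whose solution $\eta(x) = x - x_0$ does not vanish for $x_0 < x \leq x_1$. Hence every extremal of this functional provides a weak minimum.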

For a strong minimum it is sufficient that the following extra condition be fulfilled, next to those mentioned above.

4) There exists a neighbourhood of the curve $\overline{y}(x)$ such that at each point $(x, y)$ of it, and for any $y'$, the inequality

$$ \tag{3} {\mathcal E}(x, y, u(x, y), y') \geq 0 $$

holds, where

$$ {\mathcal E}(x, y, u, y') = F(x, y, y') - F(x, y, u) - (y' - u) F_{y'}(x, y, u) $$

is the Weierstrass function, while $u(x, y)$ is the slope of the field of extremals surrounding $\overline{y}(x)$.

At the extremal $\overline{y}(x)$, condition (3) takes the form

$$ \tag{4} {\mathcal E}(x, \overline{y}, \overline{y}{}', y') = F(x, \overline{y}, y') - F(x, \overline{y}, \overline{y}{}') - (y' - \overline{y}{}') F_{y'}(x, \overline{y}, \overline{y}{}') \geq 0. $$
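
For the example $F = (y')^2$ above, the Weierstrass function can be computed in closed form:

$$ {\mathcal E}(x, y, u, y') = (y')^2 - u^2 - (y' - u) \cdot 2u = (y' - u)^2 \geq 0, $$

so condition 4) holds everywhere and the straight-line extremals in fact provide a strong minimum.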

Condition (4) is necessary for a strong minimum; it is called the Weierstrass necessary condition (cf. Weierstrass conditions (for a variational extremum)). Thus, unlike the sufficient conditions for a weak minimum, which require certain strengthened necessary conditions to be fulfilled at the points of the extremal itself, the sufficient conditions for a strong minimum require that the Weierstrass necessary condition be fulfilled in a whole neighbourhood of the extremal. In general, the sufficient conditions for a strong minimum cannot be weakened by replacing the requirement that the Weierstrass condition hold in a neighbourhood of the extremal with the strong Weierstrass condition (condition (4) with strict inequality) at the points of the extremal alone (see [1]).

For non-classical variational problems, examined in the mathematical theory of optimal control (cf. Optimal control, mathematical theory of), there are several approaches to establishing sufficient conditions for optimality of an absolute extremum.

Let a problem of optimal control be posed in which the minimum of the functional

$$ \tag{5} J = \int\limits_0^{t_1} f^0(x, u) \, dt, $$

$$ f^0 : \mathbf R^n \times \mathbf R^p \rightarrow \mathbf R, $$

has to be determined, given the conditions

$$ \tag{6} \dot{x} = f(x, u), \qquad f : \mathbf R^n \times \mathbf R^p \rightarrow \mathbf R^n, $$

$$ \tag{7} x(0) = x_0, \qquad x(t_1) = x_1, $$

$$ \tag{8} u \in U, $$

where $U$ is a given closed set in $p$-dimensional space.

Using the method of dynamic programming [3], sufficient conditions for optimality are formulated in the following way. For a control $u(t)$ to be optimal in the problem (5)–(8), it is sufficient that:

a) there exists a continuous function $S(x)$ which has continuous partial derivatives for all $x$, with the possible exception of a piecewise-smooth set of dimension less than $n$, which vanishes at the end-point $x_1$, $S(x_1) = 0$, and which satisfies the Bellman equation

$$ \tag{9} \max_{u \in U} \left( \frac{\partial S}{\partial x} f(x, u) - f^0(x, u) \right) = 0; $$

b) $u(t) = v(x(t))$ for $0 \leq t \leq t_1$, where $v(x)$ is a synthesizing function (cf. also Optimal synthesis control), which can be defined using the Bellman equation:

$$ \frac{\partial S}{\partial x} f(x, v(x)) - f^0(x, v(x)) = \max_{u \in U} \left( \frac{\partial S}{\partial x} f(x, u) - f^0(x, u) \right) = 0. $$
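
As a simple illustration, consider the time-optimal problem $\dot{x} = u$, $|u| \leq 1$, $f^0 \equiv 1$, $x_1 = 0$, with $n = p = 1$. The function $S(x) = -|x|$ is continuous, vanishes at $x_1 = 0$, and has a continuous derivative everywhere except at the single point $x = 0$ (a set of dimension $0 < n$). For $x \neq 0$ it satisfies the Bellman equation (9):

$$ \max_{|u| \leq 1} \left( - \mathop{\mathrm{sgn}}(x) \, u - 1 \right) = 1 - 1 = 0, $$

the maximum being attained at $u = v(x) = -\mathop{\mathrm{sgn}}(x)$. This synthesizing function steers any initial state $x_0$ to the origin in the minimal time $|x_0| = -S(x_0)$.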

In fact, the method of dynamic programming yields a stronger result: sufficient conditions for optimality for a whole set of controls which transfer a phase point from an arbitrary initial state to the given final state $x_1$.

In the more general case of a non-autonomous system, i.e. when the integrand and the right-hand sides also depend on the time $t$, the function $S$ depends on $t$ as well, and a term $\partial S / \partial t$ must be added to the left-hand side of equation (9). The proof (see [4]) assumes continuous differentiability of the function $S(x)$ for all $x$; this condition is very restrictive and is not fulfilled in most problems, although it is usually supposed.
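
Written out, the non-autonomous form of the Bellman equation reads

$$ \frac{\partial S}{\partial t} + \max_{u \in U} \left( \frac{\partial S}{\partial x} f(x, u, t) - f^0(x, u, t) \right) = 0. $$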

Sufficient conditions for optimality can also be constructed on the basis of the Pontryagin maximum principle. If in a region $G$ of the phase space a regular synthesis is realized (cf. Optimal synthesis control), then all trajectories obtained by the maximum principle in the course of constructing the regular synthesis are optimal in $G$.

The definition of a regular synthesis, although rather cumbersome, does not essentially impose any particular restrictions on the problem (5)–(8).

There is another approach to the construction of sufficient conditions for optimality (see [5a], [5b]). Let $\phi(x)$ be a continuous function with continuous partial derivatives for all admissible $x$ belonging to a given region $G$, and let

$$ \tag{10} R(x, u) = \phi_x f(x, u) - f^0(x, u). $$

For the pair $\overline{u}(t), \overline{x}(t)$ to provide an absolute minimum in the problem (5)–(8), it is sufficient that there exists a function $\phi(x)$ such that

$$ \tag{11} R(\overline{x}, \overline{u}) = \max_{\substack{x \in G \\ u \in U}} R(x, u). $$
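
In the time-optimal example above, taking $\phi(x) = -|x|$ on a region $G$ not containing the origin (so that $\phi$ is continuously differentiable on $G$) gives

$$ R(x, u) = -\mathop{\mathrm{sgn}}(x) \, u - 1 \leq 0 \quad \text{for } |u| \leq 1, $$

and along the trajectory of $\overline{u}(t) = -\mathop{\mathrm{sgn}}(\overline{x}(t))$ one has $R(\overline{x}, \overline{u}) = 0$, so that (11) is satisfied.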

The above formulation of the sufficient conditions for optimality carries over, with corresponding modifications, to the more general cases of non-autonomous systems, problems with Mayer- and Bolza-type functionals (see Bolza problem), as well as to optimal sliding regimes (see Optimal sliding regime).

Research has been carried out into variational problems with functionals in the form of multiple integrals and with differential constraints in the form of partial differential equations, in which functions of several variables occur (see [6]).

References

[1] M.A. Lavrent'ev, L.A. Lyusternik, "A course in variational calculus", Moscow-Leningrad (1950) (In Russian)
[2] G.A. Bliss, "Lectures on the calculus of variations", Chicago Univ. Press (1947)
[3] R. Bellman, "Dynamic programming", Princeton Univ. Press (1957)
[4] V.G. Boltyanskii, "Mathematical methods of optimal control", Holt, Rinehart & Winston (1971) (Translated from Russian)
[5a] V.F. Krotov, "Methods for solving variational problems on the basis of sufficient conditions for an absolute minimum I", Automat. Remote Control, 23 (1963) pp. 1473–1484; Avtomat. i Telemekh., 23:12 (1962) pp. 1571–1583
[5b] V.F. Krotov, "Methods for solving variational problems on the basis of sufficient conditions for an absolute minimum II", Automat. Remote Control, 24 (1963) pp. 539–553; Avtomat. i Telemekh., 24:5 (1963) pp. 581–598
[6] A.G. Butkovskii, "Theory of optimal control of systems with distributed parameters", Moscow (1965) (In Russian)

Comments

One distinguishes fixed end-time and free end-time problems. The problem expressed by the equations (5)–(8) is a free end-time one; the end-time is determined by the moment at which the state $x(t)$ reaches the final state $x_1$.

For additional references see also Optimal control, mathematical theory of.

References

[a1] L. Markus, "Foundations of optimal control theory", Wiley (1967)
This article was adapted from an original article by I.B. Vapnyarskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.