Optimal programming control

A solution of a problem in the mathematical theory of optimal control (cf. Optimal control, mathematical theory of) in which the control $ u = u( t) $ is formed as a function of time; it is thereby assumed that, during the process, no information is obtained beyond that given at the very beginning. An optimal programming control is thus constructed from a priori information on the system and cannot be corrected in the course of the motion, unlike an optimal synthesis control.

The problem of existence of solutions to a problem of optimal programming control splits into two questions: whether the aim of the control can be realized under the given constraints (the existence of an admissible control realizing the aim of the control), and whether the extremum problem is solvable, that is, whether the (as a rule, relative) extremum is attained in the indicated class of admissible controls (the existence of an optimal control).

It is particularly important, in relation to the first question, to study the property of controllability of a system. For a system

$$ \frac{dx}{dt} = f( t, x, u) $$

it signifies the existence in a given class $ U = \{ u( \cdot ) \} $ of admissible control functions $ u( t) $ which transfer a phase point (see Pontryagin maximum principle) from any given starting position $ x( t _ {0} ) = x ^ {0} \in \mathbf R ^ {n} $ to any given final position $ x( t _ {1} ) = x ^ {1} \in \mathbf R ^ {n} $ (for a fixed or free time $ T = t _ {1} - t _ {0} $, depending on the formulation of the problem). Necessary and sufficient conditions for controllability (or for complete controllability) are known in computable form for linear systems

$$ \tag{1 } \dot{x} = A( t) x + B( t) u,\ \ x \in \mathbf R ^ {n} , u \in \mathbf R ^ {p} , $$

with analytic or periodic coefficients (these are simplest when $ A \equiv \textrm{ const } $, $ B \equiv \textrm{ const } $). For general linear systems the question of solvability of the problem of transfer from one convex set to another (with convex constraints on $ u , x $) has also been completely solved. In non-linear systems, only local conditions of controllability (in a small neighbourhood of the given trajectory) or conditions for particular classes of systems (see [2], [4], [5]) are known. The property of controllability has also been studied in numerous generalizations relating, in particular, to special classes $ U $ (for example, the set $ U $ of all bounded, piecewise-continuous controls $ u( t) $), to controllability of some of the coordinates, or to more general classes of systems, including infinite-dimensional ones.
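
For orientation, a sketch of these criteria in two standard cases. In the time-invariant case $ A( t) \equiv A $, $ B( t) \equiv B $, complete controllability of (1) (with unconstrained $ u $) is equivalent to the Kalman rank condition (see [4], [5]):

$$ \mathop{\rm rank} [ B, AB, \dots, A ^ {n - 1} B ] = n ; $$

for time-dependent coefficients one may use instead the non-degeneracy of the controllability Gramian

$$ W( t _ {0} , t _ {1} ) = \int\limits _ { t _ {0} } ^ { t _ {1} } X( t _ {1} , \tau ) B( \tau ) B ^ {T} ( \tau ) X ^ {T} ( t _ {1} , \tau ) d \tau , $$

where $ X( t, \tau ) $ is the transition matrix of $ \dot{x} = A( t) x $: the system is completely controllable on $ [ t _ {0} , t _ {1} ] $ if and only if $ \det W( t _ {0} , t _ {1} ) \neq 0 $.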

The question of the existence of an optimal control is in general related to a compactness property, in some topology, of minimizing sequences of controls or trajectories, and to the property of semi-continuity in the corresponding variables of the minimizing functionals. For a system

$$ \tag{2 } \dot{x} = f( t, x, u),\ \ t _ {0} \leq t \leq t _ {1} ,\ \ x \in \mathbf R ^ {n} ,\ u \in \mathbf R ^ {p} , $$

under the constraints

$$ \tag{3 } u \in U \subseteq \mathbf R ^ {p} $$

the first of these properties is associated with convexity of the set

$$ f( t, x, U) = \{ {f( t, x, u) } : {u \in U } \} , $$

while the second (for integral functionals) is linked with convexity in the corresponding values of $ J( x( \cdot ), u( \cdot )) $. The lack of these properties is compensated for by broadening the initial variational problems. Thus, the non-convexity of the set $ f( t, x, U) $ can be compensated for by the introduction of sliding regimes: generalized solutions of ordinary differential equations, generated by control-measures given on $ U $ and creating an effect of "convexification" (see [6], [7]; cf. also Optimal sliding regime). The absence of convexity in integral functionals $ J( x( \cdot ), u( \cdot )) $ is compensated for by imbedding the problem in a more general one, with a new functional which is a convex minorant of the previous one, and by seeking the solution of the new problem in a broader class of controls (see [9]). In the cases indicated, the existence of an optimal control often follows from the existence of an admissible control.
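
Schematically, the passage to sliding regimes replaces (2) by the relaxed equation (see [6], [7])

$$ \dot{x} = \int\limits _ { U } f( t, x, u) d \mu _ {t} ( u), $$

in which $ \mu _ {t} $ is, for each $ t $, a probability measure on $ U $; the set of attainable velocities thereby becomes the convex hull of $ f( t, x, U) $. For example, for $ \dot{x} = u $, $ u \in \{ - 1, 1 \} $, rapid switching between $ u = - 1 $ and $ u = 1 $ in the time proportion $ ( 1 - \alpha ) : \alpha $ realizes, in the limit, any mean velocity $ 2 \alpha - 1 \in [ - 1, 1] $, i.e. the whole segment $ \mathop{\rm conv} \{ - 1, 1 \} $.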

The theory of necessary conditions for an extremum is most developed for problems of optimal programming control. The basic result here is the Pontryagin maximum principle, which provides necessary conditions for a strong extremum in a problem of optimal control.
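
In the simplest setting, the system (2) under the constraint (3) with an integral functional $ J = \int _ {t _ {0} } ^ {t _ {1} } f _ {0} ( t, x, u) dt $ to be minimized, the principle may be sketched as follows (see [1] for the precise statement). With the Hamilton function (written here in the normal case)

$$ H( t, \psi , x, u) = \psi \cdot f( t, x, u) - f _ {0} ( t, x, u), $$

the principle asserts that an optimal pair $ ( x ^ {0} ( \cdot ), u ^ {0} ( \cdot )) $ is accompanied by a non-trivial solution $ \psi ( t) $ of the adjoint system $ \dot \psi = - \partial H / \partial x $ such that, for almost all $ t $,

$$ H( t, \psi ( t), x ^ {0} ( t), u ^ {0} ( t)) = \max _ {u \in U } H( t, \psi ( t), x ^ {0} ( t), u). $$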

General methods for obtaining necessary conditions in extremum problems have been created and are used effectively for problems of optimal programming control with more complex constraints (phase, functional, minimax, mixed, etc.). They are based, in one way or another, on theorems on the separation of convex cones (see [9], [10]). For example, let $ E $ be a vector space, let $ f( x) $, $ x \in E $, be a given functional, let $ Q _ {i} $, $ i = 1, \dots, n $, be given sets in $ E $, let

$$ Q = \bigcap _ {i = 1 } ^ { n } Q _ {i} , $$

let $ x ^ {0} \in Q $ be a point at which $ f( x) $ reaches its minimum on $ Q $, and let

$$ Q _ {0} = \{ {x } : {f( x) < f( x ^ {0} ) } \} . $$

The essence of one widespread general method is that each of the sets $ Q _ {i} $, $ i = 0, \dots, n $, is approximated in a neighbourhood of the point $ x ^ {0} $ by a convex cone $ K _ {i} $ with vertex at $ x ^ {0} $ (the "descent cone" for $ Q _ {0} $; the cone of "admissible directions" for constraints of inequality type; the cone of "tangent directions" for equality-type constraints, including differential links, etc.). A necessary condition for a minimum is then that $ x ^ {0} $ is the only point common to all the $ K _ {i} $, $ i = 0, \dots, n $, and, consequently, that the cones are "separable" (see [8]). This "geometric" condition is given an analytic form and, where possible, is put into a convenient shape, for example, using a Hamilton function. Depending on the initial constraints, as well as on the class of admissible variations, the necessary conditions can take either a form analogous to the Pontryagin maximum principle or the form of a local (linearized) maximum principle (a condition of a weak extremum with respect to $ u $). The realization of this method thus depends on the possibility of describing the cones $ K _ {i} $ analytically. Their effective description has been achieved for sets $ Q _ {i} $ defined by smooth functions which satisfy certain supplementary regularity conditions at the examined point, or by convex functions (see [9], [10]). In principle, this method also permits generalizations to the case of non-smooth constraints, including differential ones. Here, for example, the concept of the subdifferential of a convex function, and its generalization when convexity is lacking, can be used (see [11], [12]).
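
One convenient analytic expression of this separability is the Dubovitskii–Milyutin theorem (see [9]), stated here in a typical form: if $ K _ {0} , \dots, K _ {n - 1} $ are open convex cones and $ K _ {n} $ is a convex cone, all with vertex at $ x ^ {0} $, then their intersection is empty if and only if there exist continuous linear functionals $ \lambda _ {i} \in K _ {i} ^ {*} $ (the dual cones), not all zero, such that

$$ \lambda _ {0} + \lambda _ {1} + \dots + \lambda _ {n} = 0. $$

Writing out these functionals for the concrete cones of a control problem produces the adjoint variables and the maximum condition.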

Conditions of the first order, analogous to the Pontryagin maximum principle, are known for solutions in the class of generalized function-measures (the so-called integral maximum principle), for controllable systems which can be described by differential equations with a perturbed argument, partial differential equations, evolution equations in a Banach space, differential equations on manifolds, recurrence difference equations, etc. (see [1], [6], [7], [13]–[16]).
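
For relaxed (measure-valued) controls, for instance, the maximum condition takes an integral form, whence the name (see [6], [7]): along an optimal control-measure $ \mu _ {t} ^ {0} $, for almost all $ t $,

$$ \int\limits _ { U } H( t, \psi ( t), x ^ {0} ( t), u) d \mu _ {t} ^ {0} ( u) = \max _ {u \in U } H( t, \psi ( t), x ^ {0} ( t), u), $$

i.e. the measure $ \mu _ {t} ^ {0} $ is concentrated on the set of maximum points of $ H $.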

The well-known necessary first-order conditions of the classical variational calculus follow from the necessary conditions for an extremum in a problem of optimal control. In particular, in a two-point boundary value problem for the system (2) under the constraint (3), where $ U $ is an open set and $ J( x( \cdot ), u( \cdot )) $ is a standard integral functional, the Pontryagin principle implies the Weierstrass necessary condition for an extremum of the classical variational calculus.
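
Indeed, in the simplest such problem, $ \dot{x} = u $, $ U = \mathbf R ^ {n} $, $ J = \int _ {t _ {0} } ^ {t _ {1} } F( t, x, \dot{x} ) dt $, the maximum condition applied to $ H = \psi \cdot u - F( t, x, u) $ gives $ \psi = F _ {\dot{x} } ( t, x, \dot{x} ) $ along an extremal, and the inequality $ H( \dot{x} ) \geq H( u) $ becomes

$$ F( t, x, u) - F( t, x, \dot{x} ) - ( u - \dot{x} ) \cdot F _ {\dot{x} } ( t, x, \dot{x} ) \geq 0 $$

for all $ u $, which is precisely the non-negativity of the Weierstrass $ \mathcal E $-function.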

Methods are being developed in the theory of optimal control to obtain necessary higher-order conditions (especially, of the second order) for non-classical variational problems (see [19]). Interest in higher-order conditions has largely been related to the study of degenerate problems of optimal control, which lead to so-called special (singular) controls having no adequate analogues in the classical theory. For example, in the Pontryagin principle, the function $ H( t, \psi , x, u) $ may either determine a whole family of controls, each of which satisfies the maximum principle, or may not depend on $ u $ at all (in which case any admissible value of $ u $ satisfies the Pontryagin principle). This situation is quite characteristic of a whole series of applied problems of space-flight control. In such cases, isolating an optimal control requires first finding the first-order extremals (the so-called Pontryagin extremals) and then applying to them necessary optimality conditions of the second (or, in general, higher) order. Various forms of necessary conditions have been obtained here through the use of special classes of "non-classical" variations (for example, a "bundle" of needle variations, etc.). The realization of special controls is also often connected with the use of sliding regimes (see [17], [18]).
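
A schematic illustration: if $ H $ depends linearly on a scalar control,

$$ H = H _ {0} ( t, \psi , x) + \sigma ( t, \psi , x) u,\ \ | u | \leq 1, $$

then the maximum condition gives $ u ^ {0} = \mathop{\rm sign} \sigma $ wherever $ \sigma \neq 0 $, while on an interval where the switching function $ \sigma $ vanishes identically it leaves $ u $ undetermined (a singular arc), and higher-order conditions (for example, the generalized Legendre–Clebsch, or Kelley, condition) must be invoked (see [17]).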

The theory of sufficient conditions of optimality has been developed in less detail. Results are known which relate to conditions of local optimality and contain, among other simple requirements, conditions of non-degeneracy of the variational system and constraints on the properties of the Hessian of the right-hand sides, calculated along an admissible trajectory of the corresponding ordinary differential equation. Another group of sufficient conditions is based on the method of dynamic programming and its relation to the theory of the maximum principle (see [8]). There are also formalisms which lead to sufficient conditions for an absolute minimum, based on the idea of broadening variational problems. Their domain of practical applicability embraces special classes of problems with convexity criteria and degenerate problems of optimal control (see [18]).
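
A typical statement of the dynamic-programming kind, sketched here under smoothness assumptions (cf. Dynamic programming), is as follows: if a continuously differentiable function $ V( t, x) $ satisfies the Bellman equation

$$ - \frac{\partial V }{\partial t } = \min _ {u \in U } \left [ f _ {0} ( t, x, u) + \frac{\partial V }{\partial x } \cdot f( t, x, u) \right ] $$

with the appropriate boundary condition at $ t = t _ {1} $, and an admissible control $ u ^ {0} ( t) $ realizes the minimum on the right-hand side along its own trajectory, then $ u ^ {0} $ is optimal; setting $ \psi ( t) = - \partial V / \partial x $ along this trajectory recovers the maximum condition.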

The complete solution of a problem of optimal programming control (necessary and sufficient conditions of optimality) is known for linear systems (1) when both the functionals and the constraints on $ u , x $ are convex (in a number of cases certain extra conditions have to be fulfilled here). The concept of duality, as used in convex analysis, has revealed a particular extremal property of the trajectories of the system described by the conjugate variables in the Pontryagin maximum principle. This has made it possible to reduce the boundary value problem arising from the application of necessary conditions of general type to the solution of a simpler dual extremal problem. Within the framework of this approach the theory of linear systems with impulse controls has been developed, which models objects subject to instantaneous influences (shocks, explosions, impulses) and which is formalized by means of differential equations in generalized functions with the corresponding orders of singularity. The method of attainability domains (see [2], [3]) has been put to effective use, especially in the theory of game systems.

In the absence of complete a priori information on the system (including a statistical description of the unknown quantities) the problem of optimal programming control is studied under conditions of uncertainty. In the system ($ t _ {0} \leq t \leq t _ {1} $)

$$ \tag{4 } \dot{x} = f( t , x , u , w),\ \ x( t _ {0} ) = x ^ {0} \in X ^ {0} ,\ \ w \in W , $$

let the parameter $ w \in \mathbf R ^ {q} $, realized in the form of time functions $ w = w( t) $, and the vector $ x ^ {0} $ be unknown, and let only the sets $ X ^ {0} \subseteq \mathbf R ^ {n} $, $ W \subseteq \mathbf R ^ {q} $ be given. Then, assuming the existence and extendability to $ [ t _ {0} , t _ {1} ] $ of the solutions

$$ x( t \mid x ^ {0} , u( \cdot ), w( \cdot )),\ \ x( t _ {0} \mid x ^ {0} , u( \cdot ), w( \cdot )) = x ^ {0} , $$

of equation (4) (for given $ x ^ {0} , u( \tau ), w( \tau ) $, $ t _ {0} \leq \tau \leq t _ {1} $), a bundle (ensemble) of trajectories can be formed:

$$ X( t \mid u( \cdot )) = \cup \{ {x( t \mid x ^ {0} , u( \cdot ), w( \cdot )) } : {x ^ {0} \in X ^ {0} ,\ w( \tau ) \in W,\ t _ {0} \leq \tau \leq t } \} . $$

By selecting a programming control $ u( t) $ (the same one for all trajectories of the bundle), it is possible to control the position of $ X( t \mid u( \cdot )) $ in the phase space. A typical problem of optimal programming control under conditions of uncertainty consists in optimizing $ u( t) $ with respect to a functional $ \Phi $ of maximum type:

$$ \tag{5 } \Phi ( X( t _ {1} \mid u( \cdot ))) = \ \max \{ {\phi ( x) } : {x \in X ( t _ {1} \mid u( \cdot )) } \} $$

(then the solution $ u ^ {0} ( t) $ of the problem ensures a guaranteed result), or with respect to an integral functional

$$ \tag{6 } \Phi ( X( t _ {1} \mid u( \cdot ))) = \ \int\limits _ {X( t _ {1} \mid u( \cdot )) } f _ {0} ( x) dx. $$

The techniques for deriving necessary conditions of optimality, and their modifications, have made it possible to formulate requirements ensuring that analogues of the Pontryagin maximum principle hold for the problems (5), (6) (for the former it takes the form of a minimax condition). For linear systems, these problems admit just as detailed a solution as systems with complete information (see [3], [20], [21]).
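
Thus, in the setting (5), the solution $ u ^ {0} ( \cdot ) $ delivers the guaranteed result

$$ \Phi ( X( t _ {1} \mid u ^ {0} ( \cdot ))) = \min _ {u( \cdot ) } \max \{ {\phi ( x) } : {x \in X( t _ {1} \mid u( \cdot )) } \} , $$

and the corresponding analogue of the maximum principle characterizes precisely the programming controls realizing this min-max.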

References

[1] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, "The mathematical theory of optimal processes" , Wiley (1967) (Translated from Russian)
[2] N.N. Krasovskii, "Theory of control of motion" , Moscow (1968) (In Russian)
[3] N.N. Krasovskii, A.I. Subbotin, "Game-theoretical control problems" , Springer (1988) (Translated from Russian)
[4] R.E. Kalman, "On the general theory of control systems" , Proc. 1-st Internat. Congress Internat. Fed. Autom. Control , 2 , Moscow (1961) pp. 521–547 (In Russian)
[5] E.B. Lee, L. Markus, "Foundations of optimal control theory" , Wiley (1967)
[6] R.V. Gamkrelidze, "Principles of optimal control theory" , Plenum (1978) (Translated from Russian)
[7] J. Warga, "Optimal control of differential and functional equations" , Acad. Press (1972)
[8] A.D. Ioffe, V.M. Tikhomirov, "Duality of convex functions and extremal problems" Russian Math. Surveys , 23 : 6 (1968) pp. 53–124 Uspekhi Mat. Nauk. , 23 : 6 (1968) pp. 51–116
[9] A.Ya. Dubovitskii, A.A. Milyutin, "Extremum problems in the presence of restrictions" USSR Comp. Math. Math. Phys. , 5 : 3 (1965) pp. 1–80 Zh. Vychisl. Mat. i Mat. Fiz. , 5 : 3 (1965) pp. 395–453
[10] L.W. Neustadt, "Optimization, a theory of necessary conditions" , Princeton Univ. Press (1976)
[11] B.N. Pshenichnyi, "Necessary conditions for an extremum" , M. Dekker (1971) (Translated from Russian)
[12] F.H. Clarke, "Generalized gradients and applications" Trans. Amer. Math. Soc. , 205 (1975) pp. 247–262
[13] H.J. Sussmann, "Existence and uniqueness of minimal realizations of nonlinear systems" Math. Syst. Theory , 10 : 3 (1977) pp. 263–284
[14] J.-L. Lions, "Optimal control of systems governed by partial differential equations" , Springer (1971) (Translated from French)
[15] V.G. Boltyanskii, "Mathematical methods of optimal control" , Holt, Rinehart & Winston (1971) (Translated from Russian)
[16] V.G. Boltyanskii, "Optimal control of discrete systems" , Wiley (1978) (Translated from Russian)
[17] R. Gabasov, F.M. Kirillova, "Singular optimal control" , Moscow (1973) (In Russian)
[18] V.F. Krotov, V.Z. Bukreev, V.I. Gurman, "New methods of variational calculus in flight dynamics" , Moscow (1969) (In Russian)
[19] E.S. Levitin, A.A. Milyutin, N.P. Osmolovskii, "Conditions of higher order for a local minimum in problems with constraints" Russian Math. Surveys , 33 : 6 (1978) pp. 97–168 Uspekhi Mat. Nauk. , 33 : 6 (1978) pp. 85–148
[20] A.B. Kurzhanskii, "Control and observability under conditions of uncertainty" , Moscow (1977) (In Russian)
[21] V.F. Dem'yanov, V.N. Malozemov, "Introduction to minimax" , Moscow (1972) (In Russian)

Comments

An optimal programming control is usually called an optimal open-loop control in the Western literature, while an optimal synthesis control is better known as an optimal closed-loop control or optimal feedback control. See also Optimal control, mathematical theory of.

References

[a1] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic control" , Springer (1975)
[a2] D.P. Bertsekas, S.E. Shreve, "Stochastic optimal control: the discrete-time case" , Acad. Press (1978)
[a3] D.P. Bertsekas, "Dynamic programming and stochastic control" , Acad. Press (1976)
[a4] M.H.A. Davis, "Martingale methods in stochastic control" , Stochastic Control and Stochastic Differential Systems , Lect. notes in control and inform. sci. , 16 , Springer (1979) pp. 85–117
[a5] L. Cesari, "Optimization - Theory and applications" , Springer (1983)
[a6] V. Barbu, G. Da Prato, "Hamilton–Jacobi equations in Hilbert spaces" , Pitman (1983)
[a7] H.J. Kushner, "Introduction to stochastic control" , Holt (1971)
[a8] P.R. Kumar, P. Varaiya, "Stochastic systems: estimation, identification and adaptive control" , Prentice-Hall (1986)
[a9] L. Ljung, "System identification: theory for the user" , Prentice-Hall (1987)
[a10] A.E. Bryson, Y.-C. Ho, "Applied optimal control" , Ginn (1969)
[a11] H.W. Knobloch, "Higher order necessary conditions in optimal control theory" , Springer (1981)
How to Cite This Entry:
Optimal programming control. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Optimal_programming_control&oldid=13851
This article was adapted from an original article by A.B. Kurzhanskii (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.