Optimal programming control

From Encyclopedia of Mathematics

A solution of a problem in the mathematical theory of optimal control (cf. Optimal control, mathematical theory of) in which the control $ u = u( t) $ is formed as a function of time; it is thereby supposed that during the process no information is obtained beyond that given at the very outset. Thus an optimal programming control is formed from a priori information on the system and, unlike an optimal synthesis control, cannot be corrected in the course of the process.

The problem of existence of solutions to a problem of optimal programming control breaks down into two questions: explaining the possibility of realizing the aim of the control with the given constraints (the existence of an admissible control which realizes the aim of the control) and establishing the solvability of an extremum problem — attainability of the (as a rule, relative) extremum — in the aforementioned class of admissible controls (the existence of an optimal control).

It is particularly important, in relation to the first question, to study the property of controllability of a system. For a system

$$ \frac{dx}{dt} = f( t, x, u) $$

it signifies the existence, in a given class $ U = \{ u( \cdot ) \} $ of admissible control functions, of a control $ u( t) $ which transfers the phase point (see Pontryagin maximum principle) from any given starting position $ x( t _ {0} ) = x ^ {0} \in \mathbf R ^ {n} $ to any given final position $ x( t _ {1} ) = x ^ {1} \in \mathbf R ^ {n} $ (for a fixed or free time $ T = t _ {1} - t _ {0} $, depending on the formulation of the problem). Necessary and sufficient conditions for controllability (or for complete controllability) are known in computable form for linear systems

$$ \tag{1 } \dot{x} = A( t) x + B( t) u,\ \ x \in \mathbf R ^ {n} , u \in \mathbf R ^ {p} , $$

with analytic or periodic coefficients (they are simplest when $ A \equiv \textrm{ const } $, $ B \equiv \textrm{ const } $). For general linear systems the question of solvability of the problem of transfer from one convex set to another (with convex constraints on $ u , x $) has also been completely solved. For non-linear systems, only local conditions of controllability (in a small neighbourhood of a given trajectory) or conditions for particular classes of systems (see [2], [4], [5]) are known. The property of controllability has also been studied in numerous generalizations relating, in particular, to special classes $ U $ (for example, the set $ U $ of all bounded, piecewise-continuous controls $ u( t) $), to controllability with respect to some of the coordinates, or to more general classes of systems, including infinite-dimensional ones.
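For constant $ A $, $ B $ the computable criterion mentioned above is the Kalman rank condition: the pair $ ( A, B) $ is completely controllable exactly when the block matrix $ [ B, AB \dots A ^ {n-1} B ] $ has rank $ n $. A minimal sketch of this test in plain Python follows; the example matrices at the end are illustrative and not taken from the article.

```python
# Kalman rank test for complete controllability of x' = Ax + Bu
# with constant matrices A (n x n) and B (n x p).

def mat_mul(A, B):
    # Product of an (n x k) and a (k x m) matrix given as lists of rows.
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def rank(M, eps=1e-9):
    # Rank via Gaussian elimination with partial pivoting (on a copy).
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < eps:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return r

def controllable(A, B):
    # Assemble [B, AB, ..., A^{n-1}B] block by block and compare rank to n.
    n, m = len(A), len(B[0])
    blocks, Ak_B = [], B
    for _ in range(n):
        blocks.append(Ak_B)
        Ak_B = mat_mul(A, Ak_B)
    C = [[blocks[k][i][j] for k in range(n) for j in range(m)]
         for i in range(n)]
    return rank(C) == n

# Double integrator x1' = x2, x2' = u: controllable from a single input.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
print(controllable(A, B))  # True: the rank condition holds
```

For the double integrator the controllability matrix is $ [ B, AB] = \left[ \begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix} \right] $, which has full rank.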

The question of the existence of an optimal control is in general related to a compactness property, in some topology, of minimizing sequences of controls or trajectories, and to the property of semi-continuity in the corresponding variables of the minimizing functionals. For a system

$$ \tag{2 } \dot{x} = f( t, x, u),\ \ t _ {0} \leq t \leq t _ {1} ,\ \ x \in \mathbf R ^ {n} ,\ u \in \mathbf R ^ {p} , $$

under the constraints

$$ \tag{3 } u \in U \subseteq \mathbf R ^ {p} $$

the first of these properties is associated with convexity of the set

$$ f( t, x, U) = \{ {f( t, x, u) } : {u \in U } \} , $$

while the second (for integral functionals) is linked with convexity of $ J( x( \cdot ), u( \cdot )) $ in the corresponding variables. The lack of these properties is compensated for by an extension of the initial variational problem. Thus, non-convexity of the set $ f( t, x, U) $ can be compensated for by the introduction of sliding regimes: generalized solutions of ordinary differential equations, generated by control-measures given on $ U $, which create an effect of "convexification" (see [6], [7]; cf. also Optimal sliding regime). The absence of convexity of integral functionals $ J( x( \cdot ), u( \cdot )) $ is compensated for by imbedding the problem in a more general one, with a new functional which is a convex minorant of the previous one, and by imbedding the solution of the new problem in a broader class of controls (see [9]). In the cases indicated, the existence of an optimal control often follows from the existence of an admissible control.
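The "convexification" effect can be seen in a simple simulation (the example system is chosen here for illustration and is not taken from the article): for $ \dot{x} = u $ with the non-convex control set $ U = \{ - 1, + 1 \} $, rapid switching between the two admissible values realizes any average velocity in the convex hull $ [- 1, 1] $. Below, a 3:1 duty cycle approximates the generalized (sliding) trajectory $ x( t) = t/2 $, which no ordinary admissible control produces.

```python
# Chattering control for x' = u, u in {-1, +1}: fast switching
# approximates a sliding trajectory with average velocity 0.5.

def chattering_trajectory(t_final, n_steps):
    # Explicit Euler integration; each 4-step cycle spends 3 steps
    # at u = +1 and 1 step at u = -1, so the mean control is 0.5.
    dt = t_final / n_steps
    x, xs = 0.0, [0.0]
    for k in range(n_steps):
        u = 1.0 if (k % 4) < 3 else -1.0
        x += u * dt
        xs.append(x)
    return xs

xs = chattering_trajectory(t_final=1.0, n_steps=4000)
sliding_endpoint = 0.5          # endpoint of the generalized solution
print(abs(xs[-1] - sliding_endpoint))  # negligibly small
```

A control-measure assigning weights $ 3/4 $ and $ 1/4 $ to $ u = + 1 $ and $ u = - 1 $ produces the sliding trajectory exactly; the chattering control is an ordinary admissible approximation to it.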

The theory of necessary conditions for an extremum is most developed in problems of optimal programming control. The Pontryagin maximum principle has served as a basic result in this case, as it includes necessary conditions for a strong extremum in a problem of optimal control.
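For orientation, in the case of system (2) with an integral cost $ J = \int _ {t _ {0} } ^ {t _ {1} } f _ {0} ( t, x, u) dt $, the principle can be stated as follows (a standard formulation, sketched here): along an optimal pair $ ( x( t), u( t)) $ there exists a non-trivial solution $ ( \psi _ {0} , \psi _ {1} ( t) \dots \psi _ {n} ( t)) $, with $ \psi _ {0} \leq 0 $ constant, of the adjoint system associated with the function

$$ H( t, \psi , x, u) = \psi _ {0} f _ {0} ( t, x, u) + \sum _ { i= 1} ^ { n } \psi _ {i} f _ {i} ( t, x, u), $$

such that

$$ \dot \psi _ {i} = - \frac{\partial H }{\partial x _ {i} } ,\ \ H( t, \psi ( t), x( t), u( t)) = \max _ {v \in U } H( t, \psi ( t), x( t), v) $$

almost everywhere on $ [ t _ {0} , t _ {1} ] $.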

General methods for obtaining necessary conditions in extremum problems have been created and are used effectively for problems of optimal programming control with more complex constraints (phase, functional, minimax, mixed, etc.). They are based, in one way or another, on theorems on the separability of convex cones (see [9], [10]). For example, let $ E $ be a vector space, let $ f( x) $, $ x \in E $, be a given functional, let $ Q _ {i} $, $ i = 1 \dots n $, be given subsets of $ E $, let

$$ Q = \bigcap _ {i= 1 } ^ { n } Q _ {i} , $$

let $ x ^ {0} \in Q $ be a point at which $ f( x) $ reaches its minimum on $ Q $, and let

$$ Q _ {0} = \{ {x } : {f( x) < f( x ^ {0} ) } \} . $$

The essence of one wide-spread general method is that each of the sets $ Q _ {i} $, $ i = 0 \dots n $, is approximated in a neighbourhood of the point $ x ^ {0} $ by a convex cone $ K _ {i} $ with vertex at $ x ^ {0} $ (the "descent cone" for $ Q _ {0} $; the cone of "admissible directions" for constraints of inequality type; the cone of "tangent directions" for equality-type constraints, including differential links, etc.). A necessary condition for a minimum is then that $ x ^ {0} $ is the only point common to all the $ K _ {i} $, $ i = 0 \dots n $, and, consequently, that the cones are "separable" (see [8]). The latter "geometric" condition is given an analytic form and, where possible, is reduced to a convenient expression, for example, by means of a Hamilton function. Depending on the initial constraints, as well as on the class of admissible variations, the necessary conditions can take either a form analogous to the Pontryagin maximum principle or the form of a local (linearized) maximum principle (a condition for a weak extremum with respect to $ u $). The realization of this method thus depends on the possibility of describing the cones $ K _ {i} $ analytically. Their effective description has been achieved for sets $ Q _ {i} $ defined by smooth functions which satisfy certain supplementary regularity conditions at the point in question, or by convex functions (see [9], [10]). In principle, this method also permits generalizations to the case of non-smooth constraints, including differential ones. Here, for example, the concept of the subdifferential of a convex function, and its generalization in the absence of convexity, can be used (see [11], [12]).
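In analytic form the separability of the cones is expressed by the Euler equation of the Dubovitskii–Milyutin scheme (stated here for orientation): if $ K _ {0} \dots K _ {n- 1} $ are open convex cones and $ K _ {n} $ is a convex cone, all with vertex at $ x ^ {0} $, then $ \cap _ {i= 0 } ^ {n} K _ {i} = \emptyset $ if and only if there exist linear functionals $ \phi _ {i} \in K _ {i} ^ {*} $ (the dual cones), not all zero, such that

$$ \phi _ {0} + \phi _ {1} + \dots + \phi _ {n} = 0 $$

(see [9]). Concrete necessary conditions, including the maximum principle, are obtained by identifying the functionals $ \phi _ {i} $ for the constraints of the given problem.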

Conditions of the first order, analogous to the Pontryagin maximum principle, are known for solutions in the class of generalized function-measures (the so-called integral maximum principle), for controllable systems which can be described by differential equations with a perturbed argument, partial differential equations, evolution equations in a Banach space, differential equations on manifolds, recurrence difference equations, etc. (see [1], [6], [7], [13]–[16]).

The necessary conditions for an extremum obtained for problems of optimal control imply the well-known necessary first-order conditions of the classical variational calculus. In particular, in the two-point boundary value problem for the system (2), (3), where $ U $ is an open set and $ J( x( \cdot ), u( \cdot )) $ is a standard integral functional, the Pontryagin principle implies the Weierstrass necessary condition for an extremum in the classical variational calculus.

Methods are being developed in the theory of optimal control for obtaining necessary conditions of higher order (especially of the second order) for non-classical variational problems (see [19]). Interest in higher-order conditions has largely been related to the study of degenerate problems of optimal control, which lead to so-called singular controls that have no adequate analogue in the classical theory. For example, in the Pontryagin principle the function $ H( t, \psi , x, u) $ can either lead to a whole family of controls, each of which satisfies the maximum principle, or may not depend on $ u $ at all (in which case any admissible value of $ u $ satisfies the Pontryagin principle). This situation is quite characteristic of a whole series of applied control problems, for example in space flight. In this case the isolation of an optimal control requires a study of the set of first-order extremals (the so-called Pontryagin extremals) and the application to them of necessary optimality conditions of the second (or, in general, of higher) order. Different forms of necessary conditions have been obtained here by the use of special classes of "non-classical" variations (for example, a "bundle" of needle variations, etc.). The realization of singular controls is also often connected with the use of sliding regimes (see [17], [18]).
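A minimal illustration (an example chosen here, not taken from the article): for the scalar system $ \dot{x} = u $, $ | u | \leq 1 $, with cost $ J = \int _ {t _ {0} } ^ {t _ {1} } x ^ {2} dt $, the Pontryagin function is linear in the control,

$$ H = \psi _ {0} x ^ {2} + \psi u ,\ \ \dot \psi = - 2 \psi _ {0} x ,\ \ \psi _ {0} \leq 0 . $$

If $ \psi \equiv 0 $ on an interval, the maximum condition determines no value of $ u $ there; differentiating the identity $ \psi \equiv 0 $ gives $ x \equiv 0 $ and hence the singular control $ u \equiv 0 $, which is recovered from higher-order conditions rather than from the maximum principle itself.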

The theory of sufficient conditions for optimality is less fully developed. Results are known which relate to conditions of local optimality and contain, among other simple requirements, conditions of non-degeneracy of the variational system and constraints on the properties of the Hessian of the right-hand sides of the corresponding ordinary differential equation, calculated along an admissible trajectory. Another group of sufficient conditions is based on the method of dynamic programming and its relation to the theory of the maximum principle (see [8]). There are also formalisms which lead to sufficient conditions for an absolute minimum, based on the idea of extending variational problems. Their domain of practical applicability covers special classes of problems with convexity criteria and degenerate problems of optimal control (see [18]).
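The connection with dynamic programming can be sketched as follows (a standard verification argument, not specific to this article): if a smooth function $ V( t, x) $ satisfies the Bellman equation

$$ \frac{\partial V }{\partial t } + \min _ {u \in U } \left [ \frac{\partial V }{\partial x } f( t, x, u) + f _ {0} ( t, x, u) \right ] = 0 ,\ \ V( t _ {1} , x) = \phi ( x), $$

and an admissible control $ u ^ {0} ( t) $ realizes the minimum along its own trajectory, then $ u ^ {0} ( t) $ minimizes the cost $ \int _ {t _ {0} } ^ {t _ {1} } f _ {0} dt + \phi ( x( t _ {1} )) $: a sufficient condition for an absolute minimum.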

The complete solution of a problem of optimal programming control (necessary and sufficient conditions of optimality) is known for linear systems (1) when both the functionals and the constraints on $ u , x $ are convex (in a number of cases certain extra conditions have to be fulfilled here). The concept of duality, used in convex analysis, has revealed a particular extremal property of the trajectories of the system described by the conjugate variables in the Pontryagin maximum principle. This has made it possible to reduce the boundary value problem arising from the application of necessary conditions of general type to the solution of a simpler dual extremal problem. Within the framework of this approach the theory of linear systems with impulse controls has been developed; it models objects subject to instantaneous influences (impacts, explosions, impulses) and is formalized by the use of differential equations in generalized functions with corresponding orders of singularity. The method of attainability domains (see [2], [3]) has been put to effective use, especially in the theory of game systems.

In the absence of complete a priori information on the system (including a statistical description of the unknown quantities), the problem of optimal programming control is studied under conditions of uncertainty. In the system $ ( t _ {0} \leq t \leq t _ {1} ) $

$$ \tag{4 } \dot{x} = f( t , x , u , w),\ \ x( t _ {0} ) = x ^ {0} \in X ^ {0} ,\ \ w \in W , $$

suppose that the parameter $ w \in \mathbf R ^ {q} $, realized in the form of a time function $ w = w( t) $, and the vector $ x ^ {0} $ are unknown, and that only the sets $ X ^ {0} \subseteq \mathbf R ^ {n} $, $ W \subseteq \mathbf R ^ {q} $ are given. Then, assuming the existence and extendability to $ [ t _ {0} , t _ {1} ] $ of the solutions

$$ x( t \mid x ^ {0} , u( \cdot ), w( \cdot )),\ \ x( t _ {0} \mid x ^ {0} , u( \cdot ), w( \cdot )) = x ^ {0} , $$

of equation (4) (for given $ x ^ {0} , u( \tau ), w( \tau ) $, $ t _ {0} \leq \tau \leq t _ {1} $), a bundle (ensemble) of trajectories can be formed:

$$ X( t \mid u( \cdot )) = \bigcup \{ {x( t \mid x ^ {0} , u( \cdot ), w( \cdot )) } : {x ^ {0} \in X ^ {0} ,\ w( \tau ) \in W ,\ t _ {0} \leq \tau \leq t } \} . $$

By selecting a programming control $ u( t) $ (the same one for all trajectories of the bundle), it is possible to control the position of $ X( t \mid u( \cdot )) $ in the phase space. A typical problem of optimal programming control under conditions of uncertainty consists of the optimization of $ u( t) $ with respect to a functional $ \Phi $ of maximum type:

$$ \tag{5 } \Phi ( X( t _ {1} \mid u( \cdot ))) = \ \max \{ {\phi ( x) } : {x \in X ( t _ {1} \mid u( \cdot )) } \} $$

(then the solution $ u ^ {0} ( t) $ of the problem will ensure a guaranteed result) or of an integral functional

$$ \tag{6 } \Phi ( X( t _ {1} \mid u( \cdot ))) = \ \int\limits _ {X( t _ {1} \mid u( \cdot )) } f _ {0} ( x) dx. $$

The techniques used for deriving necessary conditions of optimality, and modifications of them, make it possible to formulate requirements which ensure the existence of analogues of the Pontryagin principle for the problems (5), (6) (in the first case it takes the form of a minimax condition). For linear systems these problems admit just as detailed a solution as for systems with complete information (see [3], [20], [21]).
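A small computation illustrates the guaranteed-result problem (5) (the scalar system and the numbers below are chosen here for illustration and are not from the article): for $ \dot{x} = u + w $, $ 0 \leq t \leq 1 $, with $ x( 0) \in [ c - r, c + r] $ and $ | w( t) | \leq \mu $, a constant programming control $ u( t) = a $ carries the endpoint bundle onto the interval $ [ c - r + a - \mu , c + r + a + \mu ] $, and the guaranteed cost with $ \phi ( x) = | x | $ is minimized over $ a $.

```python
# Guaranteed (maximum-type) cost over the endpoint bundle for x' = u + w,
# x(0) in [c - r, c + r], |w| <= mu, constant control u = a on [0, 1].

def guaranteed_cost(a, c, r, mu):
    lo = c - r + a - mu          # leftmost reachable endpoint
    hi = c + r + a + mu          # rightmost reachable endpoint
    return max(abs(lo), abs(hi)) # worst-case |x(1)| over the bundle

def best_constant_control(c, r, mu, grid=10001, span=10.0):
    # Crude grid search for the minimax control a in [-span, span].
    candidates = (-span + 2 * span * k / (grid - 1) for k in range(grid))
    return min((guaranteed_cost(a, c, r, mu), a) for a in candidates)

val, a_opt = best_constant_control(c=1.0, r=0.5, mu=0.2)
# Analytically the optimum centres the bundle at the origin: a = -c,
# with guaranteed value r + mu = 0.7.
print(val, a_opt)
```

The uncertainty contributes the irreducible term $ r + \mu $: no programming control can shrink the bundle, only translate it, which is exactly the limitation that motivates closed-loop (synthesis) control.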


[1] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, "The mathematical theory of optimal processes" , Wiley (1967) (Translated from Russian)
[2] N.N. Krasovskii, "Theory of control by motion" , Moscow (1968) (In Russian)
[3] N.N. Krasovskii, A.I. Subbotin, "Game-theoretical control problems" , Springer (1988) (Translated from Russian)
[4] R.E. Kalman, "On the general theory of control systems" , Proc. 1-st Internat. Congress Internat. Fed. Autom. Control , 2 , Moscow (1961) pp. 521–547 (In Russian)
[5] E.B. Lee, L. Markus, "Foundations of optimal control theory" , Wiley (1967)
[6] R.V. Gamkrelidze, "Principles of optimal control theory" , Plenum (1978) (Translated from Russian)
[7] J. Warga, "Optimal control of differential and functional equations" , Acad. Press (1972)
[8] A.D. Ioffe, V.M. Tikhomirov, "Duality of convex functions and extremal problems" Russian Math. Surveys , 23 : 6 (1968) pp. 53–124 Uspekhi Mat. Nauk. , 23 : 6 (1968) pp. 51–116
[9] A.Ya. Dubovitskii, A.A. Milyutin, "Extremum problems in the presence of restrictions" USSR Comp. Math. Math. Phys. , 5 : 3 (1965) pp. 1–80 Zh. Vychisl. Mat. i Mat. Fiz. , 5 : 3 (1965) pp. 395–453
[10] L.W. Neustadt, "Optimization, a theory of necessary conditions" , Princeton Univ. Press (1976)
[11] B.N. Pshenichnyi, "Necessary conditions for an extremum" , M. Dekker (1971) (Translated from Russian)
[12] F.H. Clarke, "Generalized gradients and applications" Trans. Amer. Math. Soc. , 205 (1975) pp. 247–262
[13] H.J. Sussmann, "Existence and uniqueness of minimal realizations of nonlinear systems" Math. Syst. Theory , 10 : 3 (1977) pp. 263–284
[14] J.-L. Lions, "Optimal control of systems governed by partial differential equations" , Springer (1971) (Translated from French)
[15] V.G. Boltyanskii, "Mathematical methods of optimal control" , Holt, Rinehart & Winston (1971) (Translated from Russian)
[16] V.G. Boltyanskii, "Optimal control of discrete systems" , Wiley (1978) (Translated from Russian)
[17] R. Gabasov, F.M. Kirillova, "Singular optimal control" , Moscow (1973) (In Russian)
[18] V.F. Krotov, V.Z. Bukreev, V.I. Gurman, "New methods of variational calculus in flight dynamics" , Moscow (1969) (In Russian)
[19] E.S. Levitin, A.A. Milyutin, N.P. Osmolovskii, "Conditions of higher order for a local minimum in problems with constraints" Russian Math. Surveys , 33 : 6 (1978) pp. 97–168 Uspekhi Mat. Nauk. , 33 : 6 (1978) pp. 85–148
[20] A.B. Kurzhanskii, "Control and observability under conditions of uncertainty" , Moscow (1977) (In Russian)
[21] V.F. Dem'yanov, V.N. Malozemov, "Introduction to minimax" , Moscow (1972) (In Russian)


An optimal programming control is usually called an optimal open-loop control in the Western literature, while an optimal synthesis control is better known as an optimal closed-loop control or optimal feedback control. See also Optimal control, mathematical theory of.


[a1] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic control" , Springer (1975)
[a2] D.P. Bertsekas, S.E. Shreve, "Stochastic optimal control: the discrete-time case" , Acad. Press (1978)
[a3] D.P. Bertsekas, "Dynamic programming and stochastic control" , Acad. Press (1976)
[a4] M.H.A. Davis, "Martingale methods in stochastic control" , Stochastic Control and Stochastic Differential Systems , Lect. notes in control and inform. sci. , 16 , Springer (1979) pp. 85–117
[a5] L. Cesari, "Optimization - Theory and applications" , Springer (1983)
[a6] V. Barbu, G. Da Prato, "Hamilton–Jacobi equations in Hilbert spaces" , Pitman (1983)
[a7] H.J. Kushner, "Introduction to stochastic control" , Holt (1971)
[a8] P.R. Kumar, P. Varaiya, "Stochastic systems: estimation, identification and adaptive control" , Prentice-Hall (1986)
[a9] L. Ljung, "System identification theory for the user" , Prentice-Hall (1987)
[a10] A.E. Bryson, Y.-C. Ho, "Applied optimal control" , Ginn (1969)
[a11] H.W. Knobloch, "Higher order necessary conditions in optimal control theory" , Springer (1981)
This article was adapted from an original article by A.B. Kurzhanskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article