
Optimal singular regime



optimal singular control

An optimal control for which, at a certain period of time, the conditions

$$ \tag{1 } \frac{\partial H }{\partial u } = 0, $$

$$ \tag{2 } \frac{\partial ^ {2} H }{\partial u ^ {2} } = 0 $$

are fulfilled simultaneously, where $ H $ is the Hamilton function. In the vector case, when the optimal singular regime involves $ k $, $ k > 1 $, control components, condition (1) is replaced by the $ k $ conditions

$$ \tag{3 } \frac{\partial H }{\partial u _ {s} } = 0,\ \ s= 1 \dots k, $$

while equality (2) is replaced by the vanishing of the determinant:

$$ \tag{4 } \left | \frac{\partial ^ {2} H }{\partial u _ {s} \partial u _ {p} } \right | = 0,\ \ p , s = 1 \dots k . $$

In an optimal singular regime, the Hamilton function $ H $ is stationary, but its second differential is not negative definite, i.e. the maximum of $ H $ as a function of $ u $ (when $ u $ varies within the admissible domain) is a "mixed maximum".

The most typical problems in which an optimal singular regime may occur are problems of optimal control in which the integrand and the right-hand sides of the differential equations depend linearly on the control.

Problems of this type are studied below, beginning with a scalar singular control.

Suppose that the minimum of the functional

$$ \tag{5 } J = \int\limits _ { 0 } ^ { {t _ 1} } ( F( x) + u \Phi ( x)) dt $$

is to be determined under the constraints

$$ \tag{6 } \dot{x} ^ {i} = f ^ { i } ( x) + u \phi ^ {i} ( x),\ \ i= 1 \dots n, $$

the boundary conditions

$$ \tag{7 } x( 0) = x _ {0} ,\ \ x( t _ {1} ) = x _ {1} , $$

and the constraints on the control

$$ \tag{8 } | u | \leq 1. $$
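As a simple model instance of (5)–(8) (introduced here only for illustration, and used in the sketches below) one may take

$$ J = \int\limits _ { 0 } ^ { {t _ 1} } x ^ {2} dt \rightarrow \min ,\ \ \dot{x} = u ,\ \ x( 0) = x _ {0} ,\ x( t _ {1} ) = 0,\ \ | u | \leq 1, $$

i.e. $ F( x) = x ^ {2} $, $ \Phi \equiv 0 $, $ f \equiv 0 $, $ \phi \equiv 1 $. As verified below, the singular manifold in this example is the point $ x = 0 $, with singular control $ u = 0 $.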

The necessary conditions which the optimal singular regime must satisfy allow one to examine the problem (5)–(8) in advance and to isolate the manifolds of singular sections on which an optimal control lies within the admissible domain $ | u | < 1 $. By joining to these singular sections non-singular sections which satisfy the boundary conditions (7), with a control $ | u | = 1 $ defined by the Pontryagin maximum principle, it is possible to obtain an optimal solution of the problem (5)–(8).

According to the maximum principle, an optimal control for any $ t \in [ 0, t _ {1} ] $ must provide a maximum of the Hamilton function

$$ \tag{9 } H( \psi ( t), x( t), u( t)) = \ \max _ {| u | \leq 1 } H( \psi ( t), x( t), u), $$

where

$$ H( \psi , x, u) = \ -( F( x) + u \Phi ( x)) + \sum _ {i= 1 } ^ { n } \psi _ {i} ( f ^ { i } ( x) + u \phi ^ {i} ( x)), $$

while $ \psi = ( \psi _ {1} \dots \psi _ {n} ) $ is a conjugate vector function which does not vanish and which satisfies the system of equations

$$ \tag{10 } \dot \psi _ {i} = - \frac{\partial H( \psi , x, u) }{ \partial x ^ {i} } ,\ \ i= 1 \dots n. $$
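For concrete data the function $ H $ and the system (10) can be generated symbolically. The following sketch (an illustration only, using the computer algebra system SymPy with the model data $ F = x ^ {2} $, $ \Phi \equiv 0 $, $ f \equiv 0 $, $ \phi \equiv 1 $ introduced above) forms the Hamilton function and the adjoint equation:

```python
import sympy as sp

# Model data (illustrative choice): n = 1, F(x) = x^2, Phi(x) = 0, f(x) = 0, phi(x) = 1
x, u, psi = sp.symbols('x u psi')
F, Phi = x**2, sp.Integer(0)
f, phi = sp.Integer(0), sp.Integer(1)

# Hamilton function H(psi, x, u) = -(F + u*Phi) + psi*(f + u*phi)
H = -(F + u*Phi) + psi*(f + u*phi)

# Adjoint equation (10): psi_dot = -dH/dx
psi_dot = -sp.diff(H, x)

print(H)        # prints: psi*u - x**2
print(psi_dot)  # prints: 2*x
```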

Since $ H $ is linear in $ u $, condition (2) is fulfilled identically, for any control, not just for an optimal singular regime.

For those periods of time at which $ \partial H/ \partial u \neq 0 $, condition (9) defines a non-singular optimal control which takes the boundary values

$$ u( t) = \mathop{\rm sign} \frac{\partial H }{\partial u } = \pm 1. $$

Thus, sections $ [ \tau _ {0} , \tau _ {1} ] $ with an optimal singular control

$$ | u( t) | < 1 $$

can appear only when condition (1) is fulfilled:

$$ \tag{11 } \frac{\partial H( \psi ( t), x( t), u( t)) }{\partial u } = \ 0,\ \ \tau _ {0} \leq t \leq \tau _ {1} ; $$

i.e. on such a section the Hamilton function does not depend explicitly on the control $ u $. Consequently, for linear control problems condition (9) does not allow a direct determination of an optimal singular control $ u( t) $.

Let the function $ \partial H/ \partial u $ be differentiated with respect to $ t $ by virtue of the systems (6), (10) until the control $ u $ appears with a non-zero coefficient in one of the derivatives. It has been proved (see [1]–[3]) that the control $ u $ can appear with a non-zero coefficient only in a derivative of even order, i.e.

$$ \tag{12 } \left . \begin{array}{c} \frac \partial {\partial u } \left ( \frac{d ^ {s} }{dt ^ {s} } \left ( \frac{\partial H }{\partial u } \right ) \right ) = 0,\ \ s= 1 \dots 2q- 1, \\ \frac{d ^ {2q} }{dt ^ {2q} } \left ( \frac{\partial H }{ \partial u } \right ) = \ a( \psi , x) + u \cdot b( \psi , x), \\ b( \psi ( t), x( t)) \neq 0, \end{array} \right \} $$

and that fulfillment of the inequality

$$ \tag{13 } (- 1) ^ {q} b( \psi , x) = (- 1) ^ {q} \left ( \frac \partial { \partial u } \left ( \frac{d ^ {2q} }{dt ^ {2q} } \left ( \frac{\partial H }{\partial u } \right ) \right ) \right ) \leq 0 $$

is a necessary condition for optimality of the singular control. If $ b( \psi ( t), x( t)) \neq 0 $ on the whole segment, then the optimal singular control is

$$ u( t) = - \frac{a( \psi ( t), x( t)) }{b( \psi ( t), x( t)) } ,\ \ \tau _ {0} \leq t \leq \tau _ {1} . $$
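For the model example above this procedure can be carried out mechanically. The sketch below (again only an illustration in SymPy, continuing the previous one) differentiates $ \partial H / \partial u $ along the systems (6), (10) until the control appears, reads off $ q $, $ a $, $ b $, checks (13) and evaluates the singular control $ u = - a/b $:

```python
import sympy as sp

x, u, psi = sp.symbols('x u psi')

# Illustrative model data (as above): xdot = u, H = psi*u - x**2
H = psi*u - x**2
x_dot = u                  # equation (6)
psi_dot = -sp.diff(H, x)   # equation (10), here 2*x

def d_dt(g):
    """Total derivative of g(psi, x) along the systems (6), (10)."""
    return sp.diff(g, x)*x_dot + sp.diff(g, psi)*psi_dot

# Differentiate dH/du until u enters with a non-zero coefficient, cf. (12)
S = sp.diff(H, u)
order = 0
while sp.diff(S, u) == 0:
    S = sp.expand(d_dt(S))
    order += 1

q = order // 2             # u first appears in a derivative of even order 2q
b = sp.diff(S, u)          # coefficient of u
a = sp.simplify(S - u*b)
print(q, a, b)             # prints: 1 0 2
print((-1)**q * b <= 0)    # condition (13): prints True
print(-a/b)                # singular control u = -a/b: prints 0
```

In this example $ q = 1 $, $ a \equiv 0 $, $ b \equiv 2 $, so (13) holds and the singular control is $ u \equiv 0 $, in agreement with the direct computation.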

Since conditions (12) are obtained by successive differentiation of (11), on a section of the singular regime, and in particular at the joining points $ \tau _ {0} $ and $ \tau _ {1} $ of singular and non-singular sections, the following $ 2q- 1 $ equalities are fulfilled in addition to equality (11):

$$ \tag{14 } \frac{d ^ {s} }{dt ^ {s} } \left ( \frac{\partial H }{ \partial u } \right ) = \ 0,\ \ s= 1 \dots 2q- 1. $$

Analysis of conditions (11), (14) shows that the character of the junction of singular and non-singular sections of the trajectory is different for even and for odd values of $ q $ (see [4]).

When $ q $ is even, the optimal control on a non-singular section cannot be piecewise continuous. The discontinuities of the control (switching points) accumulate towards the point at which the non-singular section joins a singular one. Thus the optimal control is a Lebesgue-measurable function with a countable set of discontinuity points.

When $ q $ is odd, only two piecewise-smooth optimal trajectories can go to a point lying on a singular section (or come out of it). Let the dimension of the manifold of singular sections in the $ n $- dimensional space of phase coordinates equal $ k $. Then if $ q $ is odd, the optimal trajectories with a piecewise-continuous control fill out only a surface of dimension $ k+ 1 $ in the phase space. Therefore, when $ k \leq n- 2 $, almost all remaining trajectories will have a control with an infinite number of switching points.

The hypothesis that $ k = n- q $ has been formulated (see [4]). If it is true, then, when $ q \geq 2 $, accumulation of switching points before entry onto the singular section (or after leaving it) is a typical phenomenon in problems of the type (5)–(8).

An example of the joining of singular and non-singular sections of an optimal control with an infinite number of switchings is given in [5].

When $ q $, $ q \geq 2 $, is even, the optimal control on a non-singular section bordering a singular one cannot be piecewise continuous, but has an infinite number of switching points which accumulate towards the point of entry $ \tau _ {0} $, i.e. no $ \epsilon > 0 $ exists such that on the interval $ [ \tau _ {0} - \epsilon , \tau _ {0} ] $ the optimal control is constant.

In the most commonly found optimal singular regimes, $ q= 1 $. In this case, singular and non-singular sections are joined by a piecewise-continuous optimal control.
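In the model example above one indeed has $ q = 1 $, and (assuming $ x _ {1} = 0 $ and $ t _ {1} \geq | x _ {0} | $) the optimal control consists of a non-singular section joined piecewise-continuously to the singular one:

$$ u( t) = \left \{ \begin{array}{ll} - \mathop{\rm sign} x _ {0} , & 0 \leq t < | x _ {0} | , \\ 0 , & | x _ {0} | \leq t \leq t _ {1} ; \end{array} \right .$$

the trajectory reaches the singular manifold $ x = 0 $ at $ t = | x _ {0} | $ and remains on it.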

Consider now the more general case of an optimal singular regime with $ k $, $ k > 1 $, controls:

$$ \tag{15 } J = \int\limits _ { 0 } ^ { {t _ 1} } \left ( F( x) + \sum _ {s= 1 } ^ { k } u _ {s} \Phi _ {s} ( x) \right ) dt, $$

$$ \tag{16 } \dot{x} ^ {i} = f ^ { i } ( x) + \sum _ {s= 1 } ^ { k } u _ {s} \phi _ {s} ^ {i} ( x),\ \ i = 1 \dots n. $$

Condition (4), as in the scalar case, is fulfilled for any control by virtue of linearity. On a section $ [ \tau _ {0} , \tau _ {1} ] $, an optimal singular regime with $ k $ components, $ | u _ {s} | < 1 $, must fulfill the $ k $ conditions (3):

$$ \tag{17 } M _ {s} ( \psi ( t), x( t)) = \ \frac{\partial H( \psi ( t), x( t), u( t)) }{\partial u _ {s} } = 0, $$

$$ \tau _ {0} \leq t \leq \tau _ {1} ,\ s = 1 \dots k, $$

where

$$ H( \psi , x, u) = \ - \left ( F( x) + \sum _ {s= 1 } ^ { k } u _ {s} \Phi _ {s} ( x) \right ) + \sum _ {i= 1 } ^ { n } \psi _ {i} \left ( f ^ { i } ( x) + \sum _ {s= 1 } ^ { k } u _ {s} \phi _ {s} ^ {i} ( x) \right ) = \ Q( \psi , x) + \sum _ {s= 1 } ^ { k } u _ {s} M _ {s} ( \psi , x), $$

while the $ \psi _ {i} $ are defined by (10).

Further necessary conditions of optimality for optimal singular regimes with several components differ from the one-component case examined above as follows. On a section of an optimal singular regime, necessary conditions of two types must be fulfilled. One of them, of inequality type, is the analogue of condition (13); the others are equality-type conditions and have no one-component analogues (see [6]).

By differentiating (17) totally with respect to $ t $, a system of $ k $ linear equations in the $ k $ unknowns $ u _ {1} \dots u _ {k} $ is obtained:

$$ \tag{18 } \frac{dM _ {s} ( \psi , x) }{dt } = \ \sum _ {p= 1 } ^ { k } u _ {p} \left [ \sum _ {i= 1 } ^ { n } \left ( \frac{\partial M _ {s} }{\partial x ^ {i} } \phi _ {p} ^ {i} - \frac{\partial M _ {p} }{\partial x ^ {i} } \phi _ {s} ^ {i} \right ) \right ] + \sum _ {i= 1 } ^ { n } \left ( \frac{\partial M _ {s} }{\partial x ^ {i} } f ^ { i } - \frac{\partial Q }{\partial x ^ {i} } \phi _ {s} ^ {i} \right ) = 0,\ \ s= 1 \dots k. $$

The coefficient matrix of the system (18),

$$ a _ {sp} = \ \sum _ {i= 1 } ^ { n } \left ( \frac{\partial M _ {s} }{\partial x ^ {i} } \phi _ {p} ^ {i} - \frac{\partial M _ {p} }{\partial x ^ {i} } \phi _ {s} ^ {i} \right ) ,\ \ s, p = 1 \dots k, $$

is skew-symmetric: $ a _ {sp} = - a _ {ps} $. Hence the entries $ a _ {ss} $ on the main diagonal of the matrix $ ( a _ {sp} ) $ are equal to zero. In general, the remaining entries $ a _ {sp} $, $ s \neq p $, differ from zero on an arbitrary trajectory corresponding to a non-optimal control. On an optimal singular regime with $ k $ components, all $ k( k- 1)/2 $ of these coefficients must vanish (see [6]):

$$ \tag{19 } \sum _ {i= 1 } ^ { n } \left ( \frac{\partial M _ {s} }{\partial x ^ {i} } \phi _ {p} ^ {i} - \frac{\partial M _ {p} }{\partial x ^ {i} } \phi _ {s} ^ {i} \right ) = 0,\ \ s, p = 1 \dots k,\ \ s < p. $$

Apart from these conditions, the following inequality-type condition must be fulfilled (which is the analogue of condition (13) for an optimal singular regime with one component when $ q= 1 $):

$$ \tag{20 } \sum _ {s, p = 1 } ^ { k } \frac \partial {\partial u _ {p} } \left ( \frac{d ^ {2} }{dt ^ {2} } \left ( \frac{\partial H }{\partial u _ {s} } \right ) \right ) \delta u _ {s} \delta u _ {p} \geq 0. $$

Conditions (13), (20) can be considered as a generalization of the Legendre condition and the Clebsch condition to the case of an optimal singular regime; the inequalities shown are therefore sometimes called the generalized Legendre–Clebsch conditions.
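The coefficients $ a _ {sp} $ and conditions (19) can also be checked symbolically. The sketch below (a hypothetical two-control example with $ n = k = 2 $, $ \Phi _ {s} \equiv 0 $, $ f ^ {i} \equiv 0 $, $ \phi _ {1} = ( 1, 0) $, $ \phi _ {2} = ( 0, x _ {1} ) $, chosen only for illustration) forms $ M _ {s} $ and the matrix $ ( a _ {sp} ) $ and confirms its skew-symmetry:

```python
import sympy as sp

# Hypothetical data (illustration only): n = 2, k = 2,
# Phi_s = 0, f^i = 0, phi_1 = (1, 0), phi_2 = (0, x1)
x1, x2, p1, p2 = sp.symbols('x1 x2 psi1 psi2')
xs, ps = [x1, x2], [p1, p2]
Phi = [sp.Integer(0), sp.Integer(0)]
phi = [[sp.Integer(1), sp.Integer(0)],
       [sp.Integer(0), x1]]

# M_s = dH/du_s = -Phi_s + sum_i psi_i * phi_s^i, cf. (17)
M = [-Phi[s] + sum(ps[i]*phi[s][i] for i in range(2)) for s in range(2)]

# a_sp = sum_i (dM_s/dx^i * phi_p^i - dM_p/dx^i * phi_s^i), cf. (18)
a = sp.Matrix(2, 2, lambda s, p: sum(
    sp.diff(M[s], xs[i])*phi[p][i] - sp.diff(M[p], xs[i])*phi[s][i]
    for i in range(2)))

print(M)            # prints: [psi1, psi2*x1]
print(a)            # prints: Matrix([[0, -psi2], [psi2, 0]])
print(a + a.T)      # zero matrix: a_sp = -a_ps
```

In this hypothetical example condition (19) reduces to the single equality $ \psi _ {2} = 0 $ along a singular regime involving both control components, which is precisely an equality-type condition with no one-component analogue.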

The importance of optimal singular regimes in optimal control problems is explained by the following property (see [4]): if an optimal trajectory originating at a certain point contains a section of an optimal singular regime, then all optimal trajectories originating at nearby points possess the same property.

The definition of the order of an optimal singular regime for linear and non-linear control problems has also been studied (see [8]).

All the results on optimal singular regimes mentioned above are obtained by examining the second variation of the functional. It is possible to obtain further necessary conditions for optimality of a singular regime by examining third and fourth variations of the functional (see [9]).

References

[1] G. Kelli, Raket. Tekhn. i Kosmonavtik. : 8 (1964) pp. 26–29
[2] G. Robbins, Raket. Tekhn. i Kosmonavtik. : 6 (1965) pp. 139–145
[3] R. Kopp, G. Moier, Raket. Tekhn. i Kosmonavtik. : 8 (1965) pp. 84–90
[4] Ya.M. Bershchanskii, "Fusing of singular and nonsingular parts of optimal control" Automat. Remote Control , 40 : 3 (1979) pp. 325–330 Avtomatik. i Telemekh. : 3 (1979) pp. 5–11
[5] A.T. Fuller, "Theory of discrete, optimal and self-adjusting systems" , Proc. 1st Internat. Congress Internat. Fed. Autom. Control , Moscow (1960) pp. 584–605 (In Russian)
[6] I.B. Vapnyarskii, "An existence theorem for optimal control in the Boltz problem, some of its applications and the necessary conditions for the optimality of moving and singular systems" USSR Comp. Math. Math. Phys. , 7 : 2 (1967) pp. 22–54 Zh. Vychisl. Mat. i Mat. Fiz. , 7 : 2 (1967) pp. 259–283
[7] A. Krener, "The high order maximal principle and its application to singular extremals" SIAM J. Control Optim. , 15 : 2 (1977) pp. 256–293
[8] R.M. Lewis, "Definitions of order and junction conditions in singular optimal control problems" SIAM J. Control Optim. , 18 : 1 (1980) pp. 21–32
[9] I.T. Skorodinskii, "The necessary condition for optimality of singular controls" USSR Comp. Math. Math. Phys. , 19 : 5 (1979) pp. 46–53 Zh. Vychisl. Mat. i Mat. Fiz. , 19 : 5 (1979) pp. 1134–1140
[10] R. Gabasov, F.M. Kirillova, "Singular optimal control" , Moscow (1973) (In Russian)

Comments

In [a1] new necessary conditions for the singular control problem in the calculus of variations that generalize the classical Legendre–Clebsch condition are given. This new condition is sometimes referred to as the Kelley condition.

References

[a1] H.J. Kelley, R.E. Kopp, H.G. Moyer, "Singular extremals" G. Leitmann (ed.) , Topics in Optimization , Acad. Press (1967) Chapt. 3, pp. 63–101
[a2] H.W. Knobloch, "Higher order necessary conditions in optimal control theory" , Springer (1981)
[a3] A.E. Bryson, Y.-C. Ho, "Applied optimal control" , Ginn (1969)
[a4] H. Hermes, J.P. Lasalle, "Functional analysis and time optimal control" , Acad. Press (1969)
[a5] D.J. Bell, D.H. Jacobson, "Singular optimal control problems" , Acad. Press (1975)
[a6] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic control" , Springer (1975)
[a7] D.P. Bertsekas, S.E. Shreve, "Stochastic optimal control: the discrete-time case" , Acad. Press (1978)
[a8] D.P. Bertsekas, "Dynamic programming and stochastic control" , Acad. Press (1976)
[a9] M.H.A. Davis, "Martingale methods in stochastic control" , Stochastic Control and Stochastic Differential Systems , Lect. notes in control and inform. sci. , 16 , Springer (1979) pp. 85–117
[a10] L. Cesari, "Optimization - Theory and applications" , Springer (1983)
[a11] L.W. Neustadt, "Optimization, a theory of necessary conditions" , Princeton Univ. Press (1976)
[a12] V. Barbu, G. Da Prato, "Hamilton–Jacobi equations in Hilbert spaces" , Pitman (1983)
[a13] H.J. Kushner, "Introduction to stochastic control" , Holt (1971)
[a14] P.R. Kumar, P. Varaiya, "Stochastic systems: estimation, identification and adaptive control" , Prentice-Hall (1986)
[a15] L. Ljung, "System identification: theory for the user" , Prentice-Hall (1987)
This article was adapted from an original article by I.B. Vapnyarskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.