
Optimal sliding regime

From Encyclopedia of Mathematics
Revision as of 14:54, 7 June 2020 by Ulf Rehmann (tex encoded by computer)


A term used in the theory of optimal control to describe an optimal method of controlling a system when a minimizing sequence of control functions does not have a limit in the class of Lebesgue-measurable functions.

For example, suppose a minimum of the functional

$$ \tag{1 } J( x, u) = \int\limits _ { 0 } ^ { 3 } ( x ^ {2} - u ^ {2} ) dt $$

has to be found, given the constraints

$$ \tag{2 } \dot{x} = u , $$

$$ \tag{3 } x( 0) = 1,\ x( 3) = 1, $$

$$ \tag{4 } | u | \leq 1. $$

In order to minimize the functional (1), it is desirable, for every $ t $, to make $ | x( t) | $ as small as possible and $ | u( t) | $ as large as possible. Taking into account the differential equation (2), the boundary conditions (3) and the control constraint (4), the first requirement is satisfied by the trajectory

$$ \tag{5 } x( t) = \left \{ \begin{array}{l} 1- t, \\ 0, \\ t- 2, \end{array} \ \begin{array}{l} 0 \leq t \leq 1, \\ 1 < t < 2, \\ 2 \leq t \leq 3. \end{array} \right .$$

If the trajectory (5) could be realized by a control which, for all $ t $, takes the boundary values

$$ \tag{6 } u( t) = + 1 \ \textrm{ or } \ u( t) = - 1, $$

the absolute minimum of the functional (1) would be attained. However, the "ideal" trajectory (5) cannot be realized by any control function $ u( t) $ satisfying (6), since for $ 1 < t < 2 $ it requires $ u( t) \equiv 0 $. Nevertheless, it is possible, using control functions $ u _ {n} ( t) $ which, as $ n \rightarrow \infty $, switch ever more frequently between $ + 1 $ and $ - 1 $ on $ 1 < t < 2 $:

$$ \tag{7 } u _ {n} ( t) = \left \{ \begin{array}{ll} - 1, &0 \leq t \leq 1, \\ + 1, &\frac{k}{n} < t- 1 \leq \frac{2k+ 1 }{2n} ,\ k= 0 \dots n- 1, \\ - 1, &\frac{2k+ 1 }{2n} < t- 1 \leq \frac{k+ 1 }{n} ,\ k = 0 \dots n- 1, \\ + 1, &2 < t \leq 3, \\ \end{array} \right . $$

( $ n= 1, 2 ,\dots $), to create a minimizing sequence of controls $ \{ u _ {n} ( t) \} $ which satisfies (6) and a minimizing sequence of trajectories $ \{ x _ {n} ( t) \} $ converging towards the "ideal" trajectory (5).

Each trajectory $ x _ {n} ( t) $ differs from (5) only on the interval $ ( 1, 2) $, where, instead of proceeding exactly along the $ t $- axis, it follows a "saw-toothed" path with $ n $ identical "teeth" lying above the $ t $- axis. The "teeth of the saw" become ever finer as $ n \rightarrow \infty $, so that $ \lim\limits _ {n \rightarrow \infty } x _ {n} ( t) = 0 $ for $ 1 < t < 2 $. Thus the minimizing sequence of trajectories $ \{ x _ {n} ( t) \} $ converges to (5), but the minimizing sequence of controls $ \{ u _ {n} ( t) \} $, which switches ever more frequently between $ + 1 $ and $ - 1 $ on $ 1 < t < 2 $, has no limit in the class of measurable (let alone piecewise-continuous) functions. This means that on the interval $ ( 1, 2) $ an optimal sliding regime occurs.
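The convergence of this minimizing sequence can be checked numerically. The sketch below is an illustration only, not part of the original article; the step count and tolerances are arbitrary choices. It integrates $ \dot{x} = u _ {n} $ by the Euler method and evaluates the cost (1): since the "teeth" have height $ 1/( 2n) $, one finds $ J( x _ {n} , u _ {n} ) = - 7/3 + 1/( 12 n ^ {2} ) $, decreasing towards the infimum $ - 7/3 $, while $ x _ {n} ( 3) $ stays at the required boundary value $ 1 $.

```python
def u_n(t, n):
    """Control (7): u = -1 on [0,1], chatters between +1 and -1
    with n "teeth" on (1,2], and u = +1 on (2,3]."""
    if t <= 1.0:
        return -1.0
    if t <= 2.0:
        frac = (t - 1.0) * n % 1.0   # position within the current tooth
        return 1.0 if frac < 0.5 else -1.0
    return 1.0

def J(n, steps=300_000):
    """Euler-integrate xdot = u from x(0) = 1 and accumulate
    the cost integral of x^2 - u^2 over [0, 3]."""
    h = 3.0 / steps
    x, cost = 1.0, 0.0
    for i in range(steps):
        u = u_n(i * h, n)
        cost += (x * x - u * u) * h
        x += u * h
    return cost, x

for n in (1, 4, 16):
    cost, x_end = J(n)
    print(f"n={n:2d}  J_n={cost:.5f}  x(3)={x_end:.4f}")
```

The printed values approach $ -7/3 \approx - 2.33333 $ from above as $ n $ grows, while no single measurable control attains this infimum.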

Using heuristic reasoning, the optimal sliding regime obtained can be described as follows: an optimal control at each point of the interval $ ( 1, 2) $ "slides", i.e. skips from the value $ + 1 $ to $ - 1 $ and back, in such a way that on every interval of time, however small, the measure of the set of points $ t $ at which $ u = + 1 $ equals the measure of the set of points $ t $ at which $ u = - 1 $; by virtue of equation (2), this ensures exact motion along the $ t $- axis. This description of the behaviour of an optimal control on a section of a sliding regime is non-rigorous, since such a "control" does not satisfy the ordinary definition of a function.

It is possible to give a rigorous definition of an optimal sliding regime if, along with the initial problem (1)–(4), an auxiliary "split" problem is introduced: To find a minimum of the functional

$$ \tag{8 } I( x, \alpha , u) = \int\limits _ { 0 } ^ { 3 } ( x ^ {2} - \alpha _ {0} u _ {0} ^ {2} - \alpha _ {1} u _ {1} ^ {2} ) dt, $$

given the constraints

$$ \tag{9 } \dot{x} = \alpha _ {0} u _ {0} + \alpha _ {1} u _ {1} , $$

$$ \tag{10 } x( 0) = 1,\ x( 3) = 1, $$

$$ \tag{11 } | u _ {0} | \leq 1,\ | u _ {1} | \leq 1, \ \alpha _ {0} + \alpha _ {1} = 1,\ \alpha _ {0} , \alpha _ {1} \geq 0. $$

The split problem (8)–(11) differs from the initial one in that, instead of the single control function $ u( t) $, two independent control functions $ u _ {0} ( t) $ and $ u _ {1} ( t) $ are introduced; the integrand and the right-hand side of equation (2) of the initial problem are replaced by a convex combination of the corresponding functions, evaluated at the different controls $ u _ {0} ( t) $ and $ u _ {1} ( t) $, with coefficients $ \alpha _ {0} ( t) $, $ \alpha _ {1} ( t) $ which are themselves regarded as control functions.

Thus, in problem (8)–(11) there are four controls $ u _ {0} $, $ u _ {1} $, $ \alpha _ {0} $, $ \alpha _ {1} $. Since $ \alpha _ {0} $ and $ \alpha _ {1} $ are related by the equality $ \alpha _ {0} + \alpha _ {1} = 1 $, one of them could be eliminated by expressing it through the other; however, for the convenience of the subsequent analysis it is advisable to keep both controls explicit.

Unlike the initial problem, an optimal control for the split problem (8)–(11) exists. On a section of the optimal sliding regime of the initial problem, the optimal control for the split problem takes the form

$$ \alpha _ {0} ( t) = \alpha _ {1} ( t) = \frac{1}{2} ,\ \ u _ {0} ( t) = - 1 ,\ \ u _ {1} ( t) = + 1 , $$

$$ 1 < t < 2, $$

while on the sections of entry and exit:

$$ \alpha _ {0} ( t) = 1,\ \ u _ {0} ( t) = - 1, $$

$$ \alpha _ {1} ( t) = 0,\ u _ {1} ( t) \textrm{ may be arbitrary }, $$

$$ 0 \leq t \leq 1 , $$

$$ \alpha _ {1} ( t) = 1,\ u _ {1} ( t) = + 1, $$

$$ \alpha _ {0} ( t) = 0,\ u _ {0} ( t) \textrm{ may be arbitrary }, $$

$$ 2 \leq t \leq 3. $$

On the section of the optimal sliding regime, the controls $ \alpha _ {0} $ and $ \alpha _ {1} $, which enter linearly into the right-hand side and into the integrand, take values in the interior of the admissible domain. This means that the optimal sliding regime of the initial problem (1)–(4) is an optimal singular regime, or optimal singular control, for the auxiliary split problem (8)–(11).
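As a cross-check (again a numerical sketch, not from the article), one can integrate the split dynamics (9) under the relaxed optimal controls written out above and confirm that the split functional (8) attains the value $ - 7/3 $ exactly, with no chattering: on the sliding section $ \dot{x} = 0 $ and the integrand equals $ - 1 $.

```python
def relaxed_controls(t):
    """(alpha0, u0, alpha1, u1) on the three time sections given above.
    Where a weight alpha_s is zero, the corresponding u_s is arbitrary;
    0.0 is used here purely as a placeholder."""
    if t < 1.0:
        return 1.0, -1.0, 0.0, 0.0   # entry section: pure u0 = -1
    if t < 2.0:
        return 0.5, -1.0, 0.5, 1.0   # sliding section: equal weights
    return 0.0, 0.0, 1.0, 1.0        # exit section: pure u1 = +1

def I(steps=300_000):
    """Euler-integrate xdot = a0*u0 + a1*u1 from x(0) = 1 and
    accumulate the split cost (8) over [0, 3]."""
    h = 3.0 / steps
    x, cost = 1.0, 0.0
    for i in range(steps):
        a0, u0, a1, u1 = relaxed_controls(i * h)
        cost += (x * x - a0 * u0 * u0 - a1 * u1 * u1) * h
        x += (a0 * u0 + a1 * u1) * h
    return cost, x

cost, x_end = I()
print(f"I = {cost:.5f} (vs -7/3 = {-7/3:.5f}), x(3) = {x_end:.4f}")
```

Unlike the chattering sequence $ \{ u _ {n} \} $, this relaxed control is an ordinary measurable function and attains the infimum, which is exactly the point of passing to the split problem.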

The same results occur for optimal sliding regimes in general problems of optimal control. Suppose that a minimum of the functional

$$ \tag{12 } J( x, u) = \int\limits _ { t _ {0} } ^ { {t _ 1 } } f ^ { 0 } ( t, x, u) dt, $$

$$ f ^ { 0 } ( t, x, u) : \mathbf R \times \mathbf R ^ {n} \times \mathbf R ^ {m} \rightarrow \mathbf R , $$

has to be found, given the conditions

$$ \tag{13 } \dot{x} = f( t, x, u),\ f( t, x, u): \mathbf R \times \mathbf R ^ {n} \times \mathbf R ^ {m} \rightarrow \mathbf R ^ {n} , $$

$$ \tag{14 } x( t _ {0} ) = x _ {0} ,\ x( t _ {1} ) = x _ {1} , $$

$$ \tag{15 } u \in U. $$

The optimal sliding regime is characterized by the non-uniqueness of the maximizing point, with respect to $ u $, of the Hamilton function

$$ H( t, x, \psi , u) = \ \sum _ {i= 0} ^ { n } \psi _ {i} f ^ { i } ( t, x, u), $$

where the $ \psi _ {i} $ are conjugate variables (see [2]). Under these conditions, on the section $ [ \tau _ {1} , \tau _ {2} ] $ of a $ ( k+ 1) $- fold "slide" ( $ k \geq 1 $) through the maxima $ u _ {0} \dots u _ {k} $, the initial problem splits and takes the form

$$ \tag{16 } I( x, \alpha , u) = \int\limits _ { \tau _ {1} } ^ { {\tau _ 2 } } \sum _ {s= 0} ^ { k } \alpha _ {s} f ^ { 0 } ( t, x, u _ {s} ) dt, $$

$$ \tag{17 } \dot{x} = \sum _ {s= 0} ^ { k } \alpha _ {s} f( t, x, u _ {s} ), $$

$$ \tag{18 } x( t _ {0} ) = x _ {0} ,\ x( t _ {1} ) = x _ {1} , $$

$$ \tag{19 } u _ {s} \in U,\ \sum _ {s= 0} ^ { k } \alpha _ {s} = 1,\ \alpha _ {s} \geq 0,\ s= 0 \dots k. $$

The Hamilton function for the problem (16)–(19),

$$ H( t, x, \psi , \alpha , u) = \ \sum _ {i= 0} ^ { n } \psi _ {i} \left ( \sum _ {s= 0} ^ { k } \alpha _ {s} f ^ { i } ( t, x, u _ {s} ) \right ) , $$

after eliminating $ \alpha _ {0} = 1 - \sum _ {s= 1} ^ {k} \alpha _ {s} $ and regrouping the terms, can be reduced to the form

$$ \tag{20 } H( t, x, \psi , \alpha , u) = \sum _ {i= 0} ^ { n } \psi _ {i} f ^ { i } ( t, x, u _ {0} ) + \sum _ {s= 1} ^ { k } \left ( \sum _ {i= 0} ^ { n } \psi _ {i} f ^ { i } ( t, x, u _ {s} ) - \sum _ {i= 0} ^ { n } \psi _ {i} f ^ { i } ( t, x, u _ {0} ) \right ) \alpha _ {s\ } = $$

$$ = \ H( t, x, \psi , u _ {0} ) + \sum _ {s= 1} ^ { k } ( H( t, x, \psi , u _ {s} ) - H( t, x, \psi , u _ {0} )) \alpha _ {s} . $$

Because the $ H( t, x, \psi , u _ {s} ) $, $ s = 0 \dots k $, are all equal to the maximum of $ H $ with respect to $ u $ on the set $ U $, on the section $ [ \tau _ {1} , \tau _ {2} ] $ of the optimal sliding regime with $ k+ 1 $ maxima the coefficients of the $ k $ independent linearly-entering controls $ \alpha _ {1} \dots \alpha _ {k} $ in the Hamilton function of the split problem (16)–(19) vanish. An optimal sliding regime with a "slide" through $ k+ 1 $ maxima is thus an optimal singular regime with $ k $ components for the split problem (16)–(19). The largest value of $ k $ that it is advisable to take when investigating sliding regimes is determined by the requirement that the set of values of the right-hand side vector be convex, and that the greatest lower bound of the set of values of the integrand of the split system be convex from below, as the control vector $ ( \alpha _ {s} , u _ {s} ) $, $ s = 0 \dots k $, runs through the whole admissible domain of values; this yields the upper bound $ k \leq n $. In the most general case, all optimal sliding regimes of the initial problem can be obtained as optimal singular controls of the split problem written out for $ k= n $. In particular, in the example above the split problem was examined with $ k= 1 $, since the constraints contain only one equation; the investigation of the split problem (8)–(11) proved adequate for the study of the optimal sliding regime of the initial problem (1)–(4).

If controls $ u _ {0} ( t) \dots u _ {k} ( t) $ are known which provide equal absolute maxima of the Hamilton function $ H( t, x, \psi , u) $ on the admissible domain $ U $, then the analysis of the optimal sliding regime reduces to the investigation of an optimal singular regime with $ k $ components. This investigation can be carried out by using the necessary conditions for optimality of a singular control (see Optimal singular regime).

Investigations have been carried out into optimal sliding regimes using sufficient conditions of optimality (see [4]).

References

[1] R.V. Gamkrelidze, "Optimal sliding states" Soviet Math. Dokl. , 3 : 2 (1962) pp. 559–562; Dokl. Akad. Nauk SSSR , 143 : 6 (1962) pp. 1243–1245
[2] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, "The mathematical theory of optimal processes" , Wiley (1967) (Translated from Russian)
[3] I.B. Vapnyarskii, "An existence theorem for optimal control in the Bolza problem, some of its applications and the necessary conditions for the optimality of moving and singular regimes" USSR Comput. Math. Math. Phys. , 7 : 2 (1967) pp. 22–54; Zh. Vychisl. Mat. i Mat. Fiz. , 7 : 2 (1967) pp. 259–283
[4] V.F. Krotov, "Methods for solving variational problems based on sufficient conditions for an absolute minimum II" Automat. Remote Control , 24 (1963) pp. 539–553; Avtomat. i Telemekh. , 24 : 5 (1963) pp. 581–598

Comments

A sliding control is also called a chattering control, see [a1].

For more references see also Optimal singular regime.

References

[a1] L. Markus, "Foundations of optimal control theory" , Wiley (1967)
How to Cite This Entry:
Optimal sliding regime. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Optimal_sliding_regime&oldid=49501
This article was adapted from an original article by I.B. Vapnyarskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article