Time-optimal control problem

One of the problems in the mathematical theory of optimal control (cf. Optimal control, mathematical theory of), consisting in the determination of the minimum time

$$ \tag{1} J(u) = t_{1} $$

in which a controlled object, the movement of which is described by a system of ordinary differential equations

$$ \dot{x} = f(x, u), \qquad u \in U, \qquad f : \mathbf{R}^{n} \times \mathbf{R}^{p} \rightarrow \mathbf{R}^{n}, $$

can be transferred from a given initial position $x(0) = x_{0}$ to a given final position $x(t_{1}) = x_{1}$. Here, $x = x(t)$ is the $n$-dimensional vector of phase coordinates, while $u = u(t)$ is the $p$-dimensional vector of controlling parameters (controls) which, for any $t$, belong to a given closed admissible domain of controls $U$.

The required minimum time $t_{1}$ is the functional (1), depending on the chosen control $u(t)$. As the class of admissible controls in which the time-optimal control is to be sought, it suffices for the majority of applications to take the piecewise-continuous controls $u(t)$, i.e. functions which are continuous for all values of $t$ under consideration except for a finite number of instants, at which they may have discontinuities of the first kind. Strictly speaking, the more general class of Lebesgue-measurable functions $u(t)$, $0 \leq t \leq t_{1}$, should be considered.
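
As a simple illustration of this programming formulation, the following sketch integrates the state equation under a candidate piecewise-constant control and reports the first time the state reaches a small neighbourhood of the target, i.e. the value of the functional (1) for that particular control. The double-integrator system $\dot{x}_{1} = x_{2}$, $\dot{x}_{2} = u$ with $U = [-1, 1]$, as well as all function names and tolerances, are assumptions chosen only for the example; for this initial state the chosen candidate happens to be the time-optimal control, with $t_{1} = 2$.

import numpy as np

def time_to_target(f, u_of_t, x0, x1, t_max=10.0, dt=1e-3, tol=1e-2):
    # Integrate dx/dt = f(x, u(t)) by explicit Euler steps and return the
    # first time at which |x - x1| < tol, i.e. the value J(u) = t1 of the
    # functional (1) for this particular control; None if x1 is never reached.
    x = np.asarray(x0, dtype=float)
    t = 0.0
    while t < t_max:
        if np.linalg.norm(x - x1) < tol:
            return t
        x = x + dt * np.asarray(f(x, u_of_t(t)), dtype=float)
        t += dt
    return None

# Assumed example system: double integrator x1' = x2, x2' = u with U = [-1, 1].
f = lambda x, u: np.array([x[1], u])

# Candidate piecewise-constant control with a single jump at t = 1.
u_of_t = lambda t: 1.0 if t < 1.0 else -1.0

print(time_to_target(f, u_of_t, x0=[-1.0, 0.0], x1=np.zeros(2)))  # about 2.0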

The time-optimal control problem can be considered as a particular instance of the Bolza problem or the Mayer problem in variational calculus; it is obtained from these problems by the special form of the functional to be optimized. The time-optimal control $u(t)$ must satisfy the Pontryagin maximum principle, which is a necessary condition generalizing the necessary conditions of Euler, Clebsch and Weierstrass used in classical variational calculus.
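
In concrete terms (cf. [1]), for the autonomous problem above the principle can be stated as follows. Introduce the Hamiltonian $H(\psi, x, u) = \langle \psi, f(x, u) \rangle$, where $\psi$ is an $n$-dimensional auxiliary (adjoint) vector. If $u(t)$, $0 \leq t \leq t_{1}$, is time-optimal and $x(t)$ is the corresponding trajectory, then there exists a non-zero continuous solution $\psi(t)$ of the adjoint system

$$ \dot{\psi} = - \left( \frac{\partial f}{\partial x}(x(t), u(t)) \right)^{T} \psi $$

such that, for almost all $t \in [0, t_{1}]$,

$$ H(\psi(t), x(t), u(t)) = \max_{v \in U} H(\psi(t), x(t), v) \geq 0. $$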

For linear time-optimal control problems, certain conclusions can be drawn from the necessary conditions regarding the qualitative structure of the optimal control. Problems which satisfy the following three conditions are called linear time-optimal control problems ([1], [2]):

1) the equations of motion of the object are linear in $x$ and $u$:

$$ \dot{x} = Ax + Bu, $$

where $A$ and $B$ are constant $(n \times n)$- and $(n \times p)$-matrices, respectively;

2) the final position $x_{1}$ coincides with the coordinate origin, which is an equilibrium position of the object if $u = 0$;

3) the domain of controls $U$ is a $p$-dimensional convex polyhedron such that the coordinate origin of the $u$-space belongs to $U$ but is not a vertex of it.

Let the condition of general position be fulfilled, consisting of the linear independence of the vectors

$$ Bw,\ ABw, \dots, A^{n-1} Bw, $$

where $w$ is an arbitrary $p$-dimensional vector parallel to an edge of the polyhedron $U$. Then a control $u(t)$, $0 \leq t \leq t_{1}$, transferring the object from a given initial position $x_{0}$ to an equilibrium position (the coordinate origin in the $x$-space) is time-optimal if and only if the Pontryagin maximum principle holds for it. Furthermore, the optimal control $u(t)$ in the linear time-optimal control problem is piecewise constant, and the vertices of the polyhedron $U$ are its only values.
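
The condition of general position is a finite check: for every vector $w$ parallel to an edge of $U$ one verifies that the $n$ vectors $Bw, ABw, \dots, A^{n-1}Bw$ span $\mathbf{R}^{n}$; for a single scalar control it reduces to the usual Kalman controllability condition. A minimal numerical sketch (the matrices and the edge directions below are assumptions chosen only for illustration):

import numpy as np

def general_position(A, B, edge_dirs, tol=1e-10):
    # True if, for every edge direction w of the polyhedron U, the vectors
    # B w, A B w, ..., A^{n-1} B w are linearly independent (span R^n).
    n = A.shape[0]
    for w in edge_dirs:
        v = B @ w
        K = np.column_stack([np.linalg.matrix_power(A, k) @ v for k in range(n)])
        if np.linalg.matrix_rank(K, tol=tol) < n:
            return False
    return True

# Assumed example: a harmonic oscillator with one scalar control, U = [-1, 1],
# whose only edge direction is w = (1,).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(general_position(A, B, edge_dirs=[np.array([1.0])]))  # True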

In general, the number of jumps of $u(t)$, although finite, can be arbitrarily large. In the following important case the number of jumps admits an upper bound.

If the polyhedron $U$ is the $p$-dimensional parallelepiped

$$ a^{s} \leq u^{s} \leq b^{s}, \qquad s = 1, \dots, p, $$

and all the eigenvalues of the matrix $A$ are real, then every one of the components $u^{s}(t)$, $s = 1, \dots, p$, of the optimal control $u(t)$ is a piecewise-constant function, taking only the values $a^{s}$ and $b^{s}$ and having at most $n-1$ jumps, i.e. at most $n$ intervals of constancy.
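
For instance, for the double integrator $\dot{x}_{1} = x_{2}$, $\dot{x}_{2} = u$, $|u| \leq 1$ (so $n = 2$, $p = 1$, and both eigenvalues of $A$ are $0$, hence real), the time-optimal control to the origin is given by the classical switching-curve feedback $u = -\mathop{\mathrm{sign}} ( x_{1} + \tfrac{1}{2} x_{2} | x_{2} | )$ and has at most $n - 1 = 1$ jump. The following sketch simulates this law; the discretization, tolerances and starting point are assumptions chosen only for illustration:

import numpy as np

def bang_bang_control(x):
    # Classical time-optimal feedback for x1' = x2, x2' = u, |u| <= 1,
    # target = origin: u = -sign(x1 + 0.5 * x2 * |x2|), and u = -sign(x2)
    # on the switching curve itself.
    s = x[0] + 0.5 * x[1] * abs(x[1])
    if abs(s) < 1e-9:
        return -np.sign(x[1])
    return -np.sign(s)

x = np.array([-1.0, 0.0])          # assumed initial position x0
dt, t, history = 1e-3, 0.0, []
while np.linalg.norm(x) > 1e-2 and t < 10.0:
    u = bang_bang_control(x)
    history.append(u)
    x = x + dt * np.array([x[1], u])   # Euler step of the double integrator
    t += dt

switches = sum(1 for a, b in zip(history, history[1:]) if a != b)
print(f"t1 ~ {t:.2f}, switches = {switches}")  # about 2.0 and exactly 1 switch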

The problem of time-optimal control can also be studied for non-autonomous systems, i.e. for systems whose right-hand side $ f $ depends on the time $ t $.

Where it can be carried out, it is also useful to consider the time-optimal control problem not only in the programming (open-loop) formulation described above, but also in a positional (feedback) formulation, as a synthesis problem (see Optimal synthesis control). The solution of this synthesis problem provides a qualitative picture of the structure of the time-optimal control transferring the system from any point in a neighbourhood of the initial point $x_{0}$ to the given final position $x_{1}$.

References

[1] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, "The mathematical theory of optimal processes", Wiley (1962) (Translated from Russian)
[2] V.G. Boltyanskii, "Mathematical methods of optimal control", Holt, Rinehart & Winston (1971) (Translated from Russian)

Comments

The concept of a reachable set is a useful aid for visualizing properties of the optimal control. The reachable set is a function of the time, $R(t)$, and consists of all points that can be reached at time $t$, starting from $x_{0}$ and using admissible controls only. For linear time-optimal control problems this set is compact and convex for any $t$. The minimum time $t_{1}$ obviously satisfies $t_{1} = \min \{ t : x_{1} \in R(t) \}$. For more information on the number of jumps (switches) see [a2].
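
For a scalar linear system the reachable set is an explicitly computable interval, which makes the relation $t_{1} = \min \{ t : x_{1} \in R(t) \}$ easy to evaluate numerically. A sketch for $\dot{x} = ax + bu$, $|u| \leq 1$ (the particular numbers below are assumptions chosen only for illustration):

import numpy as np

def reachable_interval(a, b, x0, t):
    # R(t) for dx/dt = a*x + b*u, |u| <= 1, starting at x0: an interval
    # centred at exp(a t) * x0 with half-width |b| (exp(a t) - 1)/a
    # (which degenerates to |b| t when a = 0).
    centre = np.exp(a * t) * x0
    half = abs(b) * ((np.exp(a * t) - 1.0) / a if a != 0.0 else t)
    return centre - half, centre + half

def minimum_time(a, b, x0, x1, t_max=50.0, dt=1e-4):
    # Smallest t (on a grid of step dt) with x1 in R(t), i.e. the value t1.
    t = 0.0
    while t <= t_max:
        lo, hi = reachable_interval(a, b, x0, t)
        if lo <= x1 <= hi:
            return t
        t += dt
    return None

# Assumed example: a stable scalar system driven from x0 = 2 to the origin;
# the analytic answer is t1 = ln 3, approximately 1.0986.
print(minimum_time(a=-1.0, b=1.0, x0=2.0, x1=0.0))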

References

[a1] J.P. LaSalle, "Functional analysis and time optimal control", Acad. Press (1969)
[a2] G.J. Olsder, "Time-optimal control of multivariable systems near the origin", J. Optim. Theory & Appl., 15 (1975), pp. 497–517
This article was adapted from an original article by I.B. Vapnyarskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.