Pontryagin maximum principle

From Encyclopedia of Mathematics

Latest revision as of 17:21, 7 June 2016


Relations describing necessary conditions for a strong maximum in a non-classical variational problem in the mathematical theory of optimal control. It was first formulated in 1956 by L.S. Pontryagin [1].

The proposed formulation of the Pontryagin maximum principle corresponds to the following problem of optimal control. Given a system of ordinary differential equations
\begin{equation}\label{eq:1}
\dot{x}=f(x,u),
\end{equation}
where $x\in\mathbb{R}^n$ is a phase vector, $u\in\mathbb{R}^p$ is a control parameter and $f$ is a continuous vector function in the variables $x$, $u$ that is continuously differentiable with respect to $x$. A certain set $U$ of admissible values of the control parameter $u$ in the space $\mathbb{R}^p$ is given; two points $x^0$ and $x^1$ in the phase space $\mathbb{R}^n$ are given; the initial time $t_0$ is fixed. Any piecewise-continuous function $u(t)$, $t_0\leq t\leq t_1$, with values in $U$ is called an admissible control. One says that an admissible control $u=u(t)$ transfers the phase point from the position $x^0$ to the position $x^1$ ($x^0\rightarrow x^1$) if the corresponding solution $x(t)$ of the system \eqref{eq:1} satisfying the initial condition $x(t_0)=x^0$ is defined for all $t\in[t_0,t_1]$ and if $x(t_1)=x^1$. Among all admissible controls transferring the phase point from the position $x^0$ to the position $x^1$ it is required to find an optimal control, i.e. a function $u^*(t)$ for which the functional
\begin{equation}\label{eq:2}
J=\int\limits_{t_0}^{t_1}f^0(x(t),u(t))\,dt
\end{equation}
takes the least possible value. Here $f^0(x,u)$ is a given function from the same class as $f(x,u)$, $x(t)$ is the solution of the system \eqref{eq:1} with the initial condition $x(t_0)=x^0$ corresponding to the control $u(t)$, and $t_1$ is the time at which this solution passes through $x^1$. The problem consists of finding a pair consisting of the optimal control $u^*(t)$ and the corresponding optimal trajectory $x^*(t)$ of \eqref{eq:1}.
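As a concrete sketch of this setup (a hypothetical example, not from the article), the following code integrates a simple scalar instance of \eqref{eq:1} under a piecewise-constant admissible control and evaluates the functional \eqref{eq:2}; the system, cost integrand and control are all chosen purely for illustration:

```python
# Hypothetical illustration: the scalar system x' = f(x,u) = u with cost
# integrand f0(x,u) = u^2 and initial condition x(t0) = 0.  An admissible
# control is any piecewise-continuous u(t) with values in U; here we take
# U = [-1, 1] and a piecewise-constant u(t).

def simulate(u_fn, x0, t0, t1, n=10000):
    """Euler-integrate x' = u(t) and J = integral of u(t)^2 over [t0, t1]."""
    dt = (t1 - t0) / n
    x, J = x0, 0.0
    for k in range(n):
        u = u_fn(t0 + k * dt)
        x += u * dt       # x' = f(x, u) = u
        J += u * u * dt   # accumulate f0(x, u) = u^2
    return x, J

# An admissible control with one switch: u = +1 on [0, 0.75), u = -0.5 after.
xf, J = simulate(lambda t: 1.0 if t < 0.75 else -0.5, 0.0, 0.0, 1.0)
# Transfers the phase point to xf = 0.625 at cost J = 0.8125.
```

The optimal control problem then asks for the admissible control minimizing $J$ among all controls reaching the prescribed endpoint $x^1$.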

Let
\begin{equation}
H(\psi,x,u)=(\psi,\mathbf{f}(x,u))
\end{equation}
be a scalar function (Hamiltonian) of the variables $\psi$, $x$, $u$, where $\psi=(\psi_0,\psi^1)\in\mathbb{R}^{n+1}$, $\psi_0\in\mathbb{R}^1$, $\psi^1\in\mathbb{R}^n$, $\mathbf{f}=(f^0,f)$. To the function $H(\psi,x,u)$ corresponds a canonical Hamiltonian system (with respect to $\psi$, $x$)
\begin{equation}\label{eq:3}
\frac{dx}{dt}=\frac{\partial H}{\partial\psi},\quad\frac{d\psi}{dt}=-\frac{\partial H}{\partial x}
\end{equation}
(the first equation in \eqref{eq:3} is the system \eqref{eq:1}). Let
\begin{equation}
M(\psi,x)=\sup\{H(\psi,x,u)\colon u\in U\}.
\end{equation}

The Pontryagin maximum principle states: If $u^*(t)$, $x^*(t)$ ($t\in[t_0,t_1]$) is a solution of the optimal control problem \eqref{eq:1}, \eqref{eq:2} ($x^0\rightarrow x^1$, $u\in U$), then there exists a non-zero absolutely-continuous function $\psi(t)$ such that $\psi(t)$, $x^*(t)$, $u^*(t)$ satisfy the system \eqref{eq:3} on $[t_0,t_1]$, such that for almost all $t\in[t_0,t_1]$ the function $H(\psi(t),x^*(t),u)$ attains its maximum at $u=u^*(t)$:
\begin{equation}\label{eq:4}
H(\psi(t),x^*(t),u^*(t))=M(\psi(t),x^*(t)),
\end{equation}
and such that at the terminal time $t_1$ the conditions
\begin{equation}
M(\psi(t_1),x^*(t_1))=0,\quad\psi_0(t_1)\leq 0
\end{equation}
are satisfied.

If the functions $\psi(t)$, $x(t)$, $u(t)$ satisfy the relations \eqref{eq:3}, \eqref{eq:4} (i.e. $x(t)$, $u(t)$ form a Pontryagin extremal), then the conditions
\begin{equation}
\mathcal{M}(t)=M(\psi(t),x(t))\equiv\mathrm{const},\quad\psi_0(t)\equiv\mathrm{const}
\end{equation}
hold.

From the above statement follows the maximum principle for the time-optimal problem ($f^0=1$, $J=t_1-t_0$). This statement admits a natural generalization to non-autonomous systems, problems with variable end-points and problems with restricted phase coordinates ($x(t)\in X$, where $X$ is a closed set in the phase space $\mathbb{R}^n$ satisfying some additional restrictions; see [1]).
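As a standard textbook illustration of the time-optimal case (not part of the original article), consider the double integrator $\ddot{x}=u$ with $|u|\leq 1$, written as $\dot{x}_1=x_2$, $\dot{x}_2=u$. With $f^0=1$ the maximum condition determines the control explicitly:

```latex
\begin{align*}
H(\psi,x,u) &= \psi_0 + \psi_1 x_2 + \psi_2 u,\\
\dot{\psi}_1 &= -\frac{\partial H}{\partial x_1} = 0,\qquad
\dot{\psi}_2 = -\frac{\partial H}{\partial x_2} = -\psi_1,\\
u^*(t) &= \operatorname{sign}\psi_2(t).
\end{align*}
```

Since $\psi_2(t)$ is affine in $t$, it changes sign at most once, so the time-optimal control is bang-bang with at most one switching.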

Admitting closed sets $U$, $X$ (in particular, these regions can be determined by systems of non-strict inequalities) makes the problem under consideration non-classical. The fundamental necessary conditions from the classical calculus of variations with ordinary derivative follow from the Pontryagin maximum principle (see [1] and also Weierstrass conditions (for a variational extremum)).

A widely used proof of the above formulation of the Pontryagin maximum principle, based on needle variations (i.e. one considers admissible controls arbitrarily deviating from the optimal one but only on a finite number of small time intervals), consists of linearization of the problem in a neighborhood of the optimal solution, construction of a convex cone of variations of the optimal trajectory, and subsequent application of the theorem on separated convex cones [1]. The corresponding condition is then rewritten in the analytical form \eqref{eq:3}, \eqref{eq:4} in terms of the maximum of the Hamiltonian $H(\psi,x,u)$ of the phase variables $x$, the controls $u$ and the adjoint variables $\psi$, which play the same role as the Lagrange multipliers in the classical calculus of variations. Effective application of the Pontryagin maximum principle often necessitates the solution of a two-point boundary value problem for \eqref{eq:3}.
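The reduction to a two-point boundary value problem can be sketched numerically. The following hypothetical example (not from the article) applies the maximum principle to $\dot{x}=u$, $J=\int_0^1 u^2\,dt$, $x(0)=0$, $x(1)=1$: with $\psi_0=-1$, maximizing $H=-u^2+\psi u$ over $u$ gives $u=\psi/2$ and the adjoint equation $\dot{\psi}=0$, and the resulting boundary value problem is solved here with SciPy:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hypothetical example: minimize J = integral of u^2 on [0,1] for
# x' = u, x(0) = 0, x(1) = 1.  With psi0 = -1, H = -u^2 + psi*u is
# maximized at u = psi/2, and psi' = -dH/dx = 0.  The maximum principle
# thus reduces the problem to a two-point BVP in (x, psi).

def rhs(t, y):
    x, psi = y
    u = psi / 2.0                                # maximizing control
    return np.vstack([u, np.zeros_like(psi)])    # x' = u, psi' = 0

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])        # x(0) = 0, x(1) = 1

t = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))

psi0 = sol.y[1, 0]   # analytic solution: psi = 2, u = 1, x(t) = t
```

Here the analytic extremal ($\psi\equiv 2$, $u\equiv 1$, $x(t)=t$) is recovered by the numerical solver.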

The most complete solution of the problem of optimal control was obtained in the case of certain linear systems, for which the relations in the Pontryagin maximum principle are not only necessary but also sufficient optimality conditions.

There are numerous generalizations of the Pontryagin maximum principle: to more complicated non-classical constraints (including mixed constraints imposed on the controls and phase coordinates, as well as functional and various integral constraints); to studies of the sufficiency of the corresponding conditions; to generalized solutions, the so-called sliding regimes; to systems of differential equations with non-smooth right-hand side and to differential inclusions; and to optimal control problems for discrete systems and for systems with an infinite number of degrees of freedom, in particular those described by partial differential equations, equations with after-effect (including delay equations), evolution equations in a Banach space, etc. The latter lead to new classes of variations of the corresponding functionals and to the introduction of the so-called integral maximum principle, the linearized maximum principle, etc. Rather general classes of variational problems with non-classical constraints (including non-strict inequalities) or with non-smooth functionals are usually called problems of Pontryagin type. The discovery of the Pontryagin maximum principle initiated the development of the mathematical theory of optimal control. It stimulated new research in the fields of differential equations, functional analysis and extremal problems, computational mathematics, and other related domains.


Comments

In the Western literature the Pontryagin maximum principle is also known simply as the minimum principle (cf. Optimal control, mathematical theory of).


References

[1] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, "The mathematical theory of optimal processes", Wiley (1962) (Translated from Russian)
[a1] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic optimal control", Springer (1975)
[a2] L. Markus, "Foundations of optimal control theory", Wiley (1967)
[a3] L.D. Berkovitz, "Optimal control theory", Springer (1974)
[a4] L. Cesari, "Optimization - Theory and applications", Springer (1983)
[a5] F. Clarke, "Optimization and nonsmooth analysis", Wiley (1983)
How to Cite This Entry:
Pontryagin maximum principle. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Pontryagin_maximum_principle&oldid=13246
This article was adapted from an original article by A.B. Kurzhanskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article