# Optimality, sufficient conditions for

Conditions which ensure the optimality of a given solution of a problem of the variational calculus in a chosen class of comparison curves.

Sufficient conditions for a weak minimum (see [1]) are the following: For a curve $ \overline{y}\; ( x) $ to provide a weak minimum for the functional

$$ \tag{1 } J( y) = \int\limits _ { x _ {0} } ^ { {x _ 1 } } F( x, y, y ^ \prime ) dx , $$

given the boundary conditions

$$ y( x _ {0} ) = y _ {0} ,\ \ y( x _ {1} ) = y _ {1} , $$

it is sufficient that the following conditions are fulfilled.

1) The curve $ \overline{y}\; ( x) $ must be an extremal, i.e. it must satisfy the Euler equation

$$ F _ {y} - \frac{d}{dx} F _ {y ^ \prime } = 0. $$

2) Along the curve $ \overline{y}\; ( x) $, including its ends, the strong Legendre condition

$$ F _ {y ^ \prime y ^ \prime } ( x, y, y ^ \prime ) > 0 $$

must be fulfilled.

3) The curve $ \overline{y}\; ( x) $ must satisfy the strong Jacobi condition, which requires that the solution of the Jacobi equation

$$ \tag{2 } \left ( F _ {yy} - \frac{d}{dx} F _ {yy ^ \prime } \right ) \eta - \frac{d}{dx} ( F _ {y ^ \prime y ^ \prime } \eta ^ \prime ) = 0 $$

with initial conditions

$$ \eta ( x _ {0} ) = 0,\ \ \eta ^ \prime ( x _ {0} ) = 1 , $$

does not vanish at any point of the half-open interval $ x _ {0} < x \leq x _ {1} $.

The coefficients of the Jacobi equation (2), which is a second-order linear differential equation, are calculated along the extremal $ \overline{y}\; ( x) $ and are therefore known functions of $ x $.
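Conditions 1)–3) can be checked numerically on a concrete example. The sketch below (an illustration, not part of the original article) uses the standard textbook integrand $ F( x, y, y ^ \prime ) = ( y ^ \prime ) ^ {2} - y ^ {2} $: the Euler equation is $ y ^ {\prime\prime} + y = 0 $, the strong Legendre condition holds since $ F _ {y ^ \prime y ^ \prime } = 2 > 0 $, and the Jacobi equation (2) reduces to $ \eta ^ {\prime\prime} + \eta = 0 $, whose solution with $ \eta ( 0) = 0 $, $ \eta ^ \prime ( 0) = 1 $ is $ \eta ( x) = \sin x $. The first conjugate point is therefore $ x = \pi $, and the strong Jacobi condition holds exactly when $ x _ {1} < \pi $.

```python
# Numerical sketch (assumed example, not from the article): for
# F(x, y, y') = (y')^2 - y^2 the Jacobi equation (2) reduces to
# eta'' + eta = 0.  We integrate it with eta(0) = 0, eta'(0) = 1 by RK4
# and locate the first sign change of eta, i.e. the conjugate point.
import math

def first_conjugate_point(x_max=4.0, n=4000):
    """Return the first x in (0, x_max] where eta changes sign, or None."""
    h = x_max / n
    f = lambda eta, v: (v, -eta)            # first-order system: eta' = v, v' = -eta
    x, eta, v = 0.0, 0.0, 1.0
    for _ in range(n):
        k1e, k1v = f(eta, v)
        k2e, k2v = f(eta + h/2*k1e, v + h/2*k1v)
        k3e, k3v = f(eta + h/2*k2e, v + h/2*k2v)
        k4e, k4v = f(eta + h*k3e, v + h*k3v)
        new_eta = eta + h/6*(k1e + 2*k2e + 2*k3e + k4e)
        new_v = v + h/6*(k1v + 2*k2v + 2*k3v + k4v)
        x += h
        if eta > 0 and new_eta <= 0:
            return x                        # conjugate point, accurate to O(h)
        eta, v = new_eta, new_v
    return None

cp = first_conjugate_point()
print(cp)  # close to pi ~ 3.1416
```

So, for instance, with boundary conditions $ y( 0) = y( x _ {1} ) = 0 $ and $ x _ {1} < \pi $, the extremal $ \overline{y}\; \equiv 0 $ satisfies conditions 1)–3), while for $ x _ {1} > \pi $ the Jacobi condition fails.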

For a strong minimum it is sufficient that, in addition to the conditions above, the following extra condition be fulfilled.

4) There exists a neighbourhood of the curve $ \overline{y}\; ( x) $ such that at each point $ ( x, y) $ of it, and for any $ y ^ \prime $, the inequality

$$ \tag{3 } {\mathcal E} ( x, y, u( x, y), y ^ \prime ) \geq 0 $$

holds, where

$$ {\mathcal E} ( x , y , u , y ^ \prime ) = F( x, y, y ^ \prime ) - F( x, y, u) - ( y ^ \prime - u) F _ {y ^ \prime } ( x, y, u) $$

is the Weierstrass function, while $ u( x, y) $ is the slope of the field of extremals surrounding $ \overline{y}\; ( x) $.

At the extremal $ \overline{y}\; ( x) $, condition (3) takes the form

$$ \tag{4 } {\mathcal E} ( x, \overline{y}\; , \overline{y}\; {} ^ \prime , y ^ \prime ) = F( x, \overline{y}\; , y ^ \prime ) - F( x, \overline{y}\; , \overline{y}\; {} ^ \prime ) - ( y ^ \prime - \overline{y}\; {} ^ \prime ) F _ {y ^ \prime } ( x, \overline{y}\; , \overline{y}\; {} ^ \prime ) \geq 0. $$

Condition (4) is necessary for a strong minimum; it is called the Weierstrass necessary condition (cf. Weierstrass conditions (for a variational extremum)). Thus, unlike the sufficient conditions for a weak minimum, which require certain strengthened necessary conditions to hold at the points of the extremal, the sufficient conditions for a strong minimum require that the Weierstrass necessary condition hold in a whole neighbourhood of the extremal. In general, this formulation cannot be weakened: replacing the requirement that the Weierstrass condition hold in a neighbourhood of the extremal by the strong Weierstrass condition (condition (4) with strict inequality) at the points of the extremal itself does not yield a sufficient condition (see [1]).
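Geometrically, $ {\mathcal E} $ measures the gap between $ F $ and its tangent line in the $ y ^ \prime $ argument, so condition (3) holds automatically when $ F $ is convex in $ y ^ \prime $. A small numerical sketch (illustrative examples assumed here, not taken from the article): for $ F = ( y ^ \prime ) ^ {2} $ one gets $ {\mathcal E} = ( y ^ \prime - u) ^ {2} \geq 0 $, whereas for the non-convex $ F = ( y ^ \prime ) ^ {3} $ one gets $ {\mathcal E} = ( y ^ \prime - u) ^ {2} ( y ^ \prime + 2u) $, which changes sign.

```python
# Weierstrass function for an F depending on y' only:
#     E(u, p) = F(p) - F(u) - (p - u) F'(u),   p playing the role of y'.
# Illustrative (assumed) examples: F = p^2 satisfies E >= 0 everywhere,
# while F = p^3 violates it.

def weierstrass_E(F, dF, u, p):
    """E for an integrand F(y') with derivative dF = F_{y'}."""
    return F(p) - F(u) - (p - u) * dF(u)

F2, dF2 = lambda p: p * p, lambda p: 2 * p      # convex case
F3, dF3 = lambda p: p ** 3, lambda p: 3 * p * p  # non-convex case

grid = [i * 0.25 for i in range(-20, 21)]
E2 = [weierstrass_E(F2, dF2, u, p) for u in grid for p in grid]
E3 = [weierstrass_E(F3, dF3, u, p) for u in grid for p in grid]

print(min(E2) >= -1e-12)       # True: quadratic F satisfies (3) everywhere
print(min(E3) < 0 < max(E3))   # True: cubic F violates it
```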

For non-classical variational problems, examined in the mathematical theory of optimal control (cf. Optimal control, mathematical theory of), several approaches to the establishment of sufficient conditions for optimality of an absolute extremum exist.

Let a problem of optimal control be posed in which the minimum of the functional

$$ \tag{5 } J = \int\limits _ { 0 } ^ { {t _ 1} } f ^ { 0 } ( x, u) dt, $$

$$ f ^ { 0 } : \mathbf R ^ {n} \times \mathbf R ^ {p} \rightarrow \mathbf R , $$

has to be determined, given the conditions

$$ \tag{6 } \dot{x} = f( x, u),\ f: \mathbf R ^ {n} \times \mathbf R ^ {p} \rightarrow \ \mathbf R ^ {n} , $$

$$ \tag{7 } x( 0) = x _ {0} ,\ x( t _ {1} ) = x _ {1} , $$

$$ \tag{8 } u \in U , $$

where $ U $ is a given closed set in $ p $-dimensional space.

Using the method of dynamic programming [3], sufficient conditions for optimality are formulated in the following way. For a control $ u( t) $ to be an optimal control in the problem (5)–(8), it is sufficient that:

a) there exists a continuous function $ S( x) $ which has continuous partial derivatives for all $ x $, with the possible exception of a piecewise-smooth set of dimension less than $ n $, which vanishes at the end-point, $ S( x _ {1} ) = 0 $, and which satisfies the Bellman equation

$$ \tag{9 } \max _ {u \in U } \ \left ( \frac{\partial S }{\partial x } f( x, u) - f ^ { 0 } ( x, u) \right ) = 0; $$

b) $ u( t) = v( x( t)) $ if $ 0 \leq t \leq t _ {1} $, where $ v( x) $ is a synthesizing function (cf. also Optimal synthesis control), which can be defined using the Bellman equation:

$$ \frac{\partial S }{\partial x } f( x, v( x)) - f ^ { 0 } ( x, v( x)) = \max _ {u \in U } \left ( \frac{\partial S }{\partial x } f( x, u) - f ^ { 0 } ( x, u) \right ) = 0. $$

In fact, the method of dynamic programming yields a stronger result: sufficient conditions for the optimality of a whole family of controls, which transfer a phase point from an arbitrary initial state to the given final state $ x _ {1} $.
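As a minimal illustration of conditions a)–b) (a hypothetical instance, not from the article), take the scalar minimum-time problem $ \dot{x} = u $, $ | u | \leq 1 $, $ f ^ { 0 } \equiv 1 $, with target $ x _ {1} = 0 $. The function $ S( x) = - | x | $ vanishes at the target, is smooth except on the zero-dimensional set $ \{ 0 \} $, and satisfies the Bellman equation (9), since $ \max _ {| u | \leq 1 } ( S ^ \prime ( x) u - 1) = | S ^ \prime ( x) | - 1 = 0 $; the synthesizing function is $ v( x) = - \mathop{\rm sign} x $.

```python
# Sketch (assumed example): scalar minimum-time problem xdot = u,
# |u| <= 1, f0 = 1, target x1 = 0.  S(x) = -|x| satisfies the Bellman
# equation (9) away from the non-smooth point x = 0, with synthesizing
# feedback v(x) = -sign(x).

def dS(x):
    return -1.0 if x > 0 else 1.0      # derivative of S(x) = -|x|, x != 0

def bellman_lhs(x, n=201):
    """max over a grid of u in [-1, 1] of S'(x) u - f0, f0 = 1."""
    us = [-1 + 2 * i / (n - 1) for i in range(n)]
    return max(dS(x) * u - 1 for u in us)

def v(x):
    return -1.0 if x > 0 else 1.0      # synthesizing function -sign(x)

# Check equation (9) at sample points away from x = 0.
checks = [abs(bellman_lhs(x)) < 1e-12 for x in (-2.0, -0.5, 0.7, 3.0)]

# Simulate u(t) = v(x(t)) from x0 = 2 by Euler steps; the state reaches
# the target in time t ~ |x0| = 2, the minimum time.
x, t, h = 2.0, 0.0, 1e-3
while abs(x) > h:
    x += h * v(x)
    t += h
print(all(checks), round(t, 2))        # True 2.0
```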

In the more general case of a non-autonomous system, i.e. when the integrand and the right-hand sides of the differential equations also depend on the time $ t $, the function $ S $ depends on $ t $ as well, and a term $ \partial S/ \partial t $ must be added to the left-hand side of equation (9). The proof (see [4]) assumes continuous differentiability of the function $ S( x) $ for all $ x $; this assumption is very restrictive and is not fulfilled in most problems, although it is usually made.

Sufficient conditions for optimality can be constructed on the basis of the Pontryagin maximum principle. If in a region $ G $ of the phase space a regular synthesis is realized (cf. Optimal synthesis control), then all trajectories obtained using the maximum principle when constructing the regular synthesis are optimal in $ G $.

The definition of a regular synthesis, although rather cumbersome, does not essentially impose any particular restrictions on the problem (5)–(8).

There is another approach to the construction of sufficient conditions for optimality (see [5a], [5b]). Let $ \phi ( x) $ be a continuous function with continuous partial derivatives for all admissible $ x $ belonging to a given region $ G $, and let

$$ \tag{10 } R( x, u) = \phi _ {x} f( x, u) - f ^ { 0 } ( x, u). $$

For the pair $ \overline{u}\; ( t), \overline{x}\; ( t) $ to provide an absolute minimum in the problem (5)–(8), it is sufficient that there exists a function $ \phi ( x) $ such that

$$ \tag{11 } R ( \overline{x}\; , \overline{u}\; ) = \max _ {\begin{array}{c} x \in G \\ u \in U \end{array} } \ R( x, u). $$
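Conditions (10)–(11) can be checked on a concrete instance (a hypothetical example, not from the article): the scalar minimum-time problem $ \dot{x} = u $, $ | u | \leq 1 $, $ f ^ { 0 } \equiv 1 $, steering $ x _ {0} = 2 $ to $ x _ {1} = 0 $. With $ \phi ( x) = - | x | $ (and $ G $ taken to avoid the non-smooth point $ x = 0 $, a simplification for this sketch), equation (10) gives $ R( x, u) = - \mathop{\rm sign} ( x) \cdot u - 1 \leq 0 $, and along the candidate pair $ \overline{x}\; ( t) = 2 - t $, $ \overline{u}\; ( t) = - 1 $ one has $ R( \overline{x}\; , \overline{u}\; ) = 0 $, so the pair attains the maximum in (11) and yields an absolute minimum.

```python
# Sketch of Krotov's condition (11) for the assumed minimum-time problem
# xdot = u, |u| <= 1, f0 = 1, from x0 = 2 to x1 = 0, with phi(x) = -|x|.

def R(x, u):
    """R(x, u) = phi_x(x) f(x, u) - f0(x, u) with f = u, f0 = 1."""
    phi_x = -1.0 if x > 0 else 1.0     # derivative of phi(x) = -|x|, x != 0
    return phi_x * u - 1.0

# Maximum of R over a grid on G x U, G = [-3, 3] \ {0}, U = [-1, 1].
xs = [i * 0.1 for i in range(-30, 31) if i != 0]
us = [-1 + 2 * i / 200 for i in range(201)]
R_max = max(R(x, u) for x in xs for u in us)

# R along the candidate pair xbar(t) = 2 - t, ubar(t) = -1, 0 <= t < 2.
R_on_pair = [R(2.0 - t, -1.0) for t in [0.25 * k for k in range(8)]]

print(abs(R_max) < 1e-12, all(abs(r) < 1e-12 for r in R_on_pair))  # True True
```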

The above formulation of the sufficient conditions for optimality can be modified correspondingly for the more general cases of non-autonomous systems, problems with Mayer- and Bolza-type functionals (see Bolza problem), as well as for optimal sliding regimes (see Optimal sliding regime and [5a], [5b]).

Research has been carried out into variational problems with functionals in the form of multiple integrals and with differential constraints in the form of partial differential equations, in which functions of several variables occur (see [6]).

#### References

[1] M.A. Lavrent'ev, L.A. Lyusternik, "A course in variational calculus", Moscow-Leningrad (1950) (In Russian)

[2] G.A. Bliss, "Lectures on the calculus of variations", Chicago Univ. Press (1947)

[3] R. Bellman, "Dynamic programming", Princeton Univ. Press (1957)

[4] V.G. Boltyanskii, "Mathematical methods of optimal control", Holt, Rinehart & Winston (1971) (Translated from Russian)

[5a] V.F. Krotov, "Methods for solving variational problems on the basis of sufficient conditions for an absolute minimum I", Automat. Remote Control, 23 (1963) pp. 1473–1484; Avtomat. i Telemekh., 23:12 (1962) pp. 1571–1583

[5b] V.F. Krotov, "Methods for solving variational problems on the basis of sufficient conditions for an absolute minimum II", Automat. Remote Control, 24 (1963) pp. 539–553; Avtomat. i Telemekh., 24:5 (1963) pp. 581–598

[6] A.G. Butkovskii, "Theory of optimal control of systems with distributed parameters", Moscow (1965) (In Russian)

#### Comments

One distinguishes fixed end-time and free end-time problems. The problem expressed by the equations (5)–(8) is a free end-time one; the end-time is determined by the moment that the state $ x ( t) $ equals the final state $ x _ {1} $.

For additional references see also Optimal control, mathematical theory of.

#### References

[a1] L. Markus, "Foundations of optimal control theory", Wiley (1967)

**How to Cite This Entry:**

Optimality, sufficient conditions for.

*Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Optimality,_sufficient_conditions_for&oldid=48059