Conditional extremum
A minimum or maximum value attained by a given function (or functional) under the condition that certain other functions (functionals) take values in a given admissible set. If there are no conditions restricting the domain of the independent variable (function) in this sense, one speaks of an unconditional extremum.
The classical problem on conditional extrema is the problem of determining the minima of a function of several variables
$$ \tag{1} f(x_1, \dots, x_n) $$
under the condition that certain other functions assume given values:
$$ \tag{2} g_i(x_1, \dots, x_n) = c_i, \qquad i = 1, \dots, m, \quad m < n. $$
In this problem the set $G$, in which the values of the vector function $g = (g_1, \dots, g_m)$ entering into the supplementary condition (2) may lie, contains only the single point $c = (c_1, \dots, c_m)$ of the $m$-dimensional Euclidean space $\mathbf{R}^m$.
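For example, an illustrative instance of the problem (1), (2) (chosen here only for concreteness) is

$$ f(x_1, x_2) = x_1^2 + x_2^2 \rightarrow \min, \qquad g_1(x_1, x_2) = x_1 + x_2 = 1, $$

with $n = 2$, $m = 1$ and $G = \{1\} \subset \mathbf{R}^1$; the conditional minimum $1/2$ is attained at $x_1 = x_2 = 1/2$, the point of the line $x_1 + x_2 = 1$ closest to the origin.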
If in (2), along with the conditions of equality type, one also has certain inequalities, say,
$$ \tag{3} \left . \begin{array}{ll} g_i(x_1, \dots, x_n) = c_i, & i = 1, \dots, m_1, \quad m_1 < n, \\ g_i(x_1, \dots, x_n) \leq c_i, & i = m_1 + 1, \dots, m_2, \\ g_i(x_1, \dots, x_n) \geq c_i, & i = m_2 + 1, \dots, m, \end{array} \right \} $$
then the classical problem reduces to a problem in non-linear programming. In the problem (1), (3) the set $G$ of admissible values of the vector function $g$ is a certain curvilinear polyhedron contained (in general) in the $(n - m_1)$-dimensional hypersurface defined by the first $m_1$ conditions of equality type in (3), $m_1 < n$. The boundary of this curvilinear polyhedron is determined by the $m - m_1$ inequalities in (3).
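In practice, a problem of the form (1), (3) can be handed to a general-purpose non-linear programming routine. The following is a minimal sketch, assuming SciPy is available; the particular objective and constraints are hypothetical choices used only to show how the data of (1), (3) enter the solver.

```python
# Illustrative sketch: a small instance of problem (1), (3) solved with
# SciPy's SLSQP method.  The functions f and g_i below are hypothetical.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # objective (1): f(x1, x2) = x1^2 + x2^2
    return x[0] ** 2 + x[1] ** 2

# Conditions (3): one equality g_1(x) = c_1 and one inequality g_2(x) <= c_2.
# SciPy's convention: 'eq' means fun(x) == 0 and 'ineq' means fun(x) >= 0,
# so g_2(x) <= c_2 is rewritten as c_2 - g_2(x) >= 0.
constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 1.0},    # g_1 = x1 + x2 = 1
    {"type": "ineq", "fun": lambda x: 2.0 - (x[0] - x[1])},  # g_2 = x1 - x2 <= 2
]

result = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=constraints)
print(result.x)  # expected to be close to (0.5, 0.5)
```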
A particular case of the problem (1), (3) on a conditional extremum is the problem of linear programming, in which all the functions $f$ and $g_i$ under consideration are linear in $x_1, \dots, x_n$. In a linear programming problem the set $G$ of admissible values of the vector function $g$ involved in restricting the domain of the variables $x_1, \dots, x_n$ is a convex polyhedron contained in the $(n - m_1)$-dimensional hypersurface defined by the $m_1$ conditions of equality type in (3).
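Such a linear instance can likewise be solved numerically; the following is a minimal sketch with SciPy's linprog, where the cost vector and constraint data are again hypothetical.

```python
# Illustrative sketch: a small linear programming instance of the form (1), (3).
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])        # objective (1): minimize x1 + 2*x2
A_eq = np.array([[1.0, 1.0]])   # equality-type condition in (3): x1 + x2 = 1
b_eq = np.array([1.0])
A_ub = np.array([[1.0, 0.0]])   # inequality-type condition in (3): x1 <= 0.7
b_ub = np.array([0.7])

# linprog additionally imposes x1, x2 >= 0 by default.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x)  # expected: approximately (0.7, 0.3), with optimal value 1.3
```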
In a similar way the majority of problems on optimizing functionals that are of practical interest fall under the heading of problems about conditional extrema (cf. Isoperimetric problem; Bolza problem; Lagrange problem; Mayer problem). Thus, as in mathematical programming, the basic problems in the calculus of variations and in the theory of optimal control are problems about conditional extrema.
For the solution of problems on conditional extrema, particularly when theoretical questions are considered, it is often helpful to use (undetermined) Lagrange multipliers, which allow the problem to be reduced to one on an unconditional extremum and simplify the search for necessary conditions for optimality. The use of Lagrange multipliers underlies the majority of classical methods for solving problems on a conditional extremum.
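In the illustrative example given after (2) above, this reduction amounts to forming the Lagrange function

$$ L(x_1, x_2, \lambda_1) = x_1^2 + x_2^2 + \lambda_1 (x_1 + x_2 - 1) $$

and seeking its unconditional stationary points: the equations $\partial L / \partial x_1 = 2 x_1 + \lambda_1 = 0$, $\partial L / \partial x_2 = 2 x_2 + \lambda_1 = 0$ and $\partial L / \partial \lambda_1 = x_1 + x_2 - 1 = 0$ give $x_1 = x_2 = 1/2$, $\lambda_1 = -1$, that is, the conditional minimum found above.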