Conditional extremum

From Encyclopedia of Mathematics
Revision as of 17:11, 7 February 2011 (Importing text file)

A minimum or maximum value attained by a given function (or functional) under the condition that certain other functions (functionals) take values in a given admissible set. If there are no conditions restricting the domain of the independent variable (function) in this sense, one speaks of an unconditional extremum.

The classical problem on conditional extrema is the problem of determining the minimum of a function of several variables

$$f(x_1,\dots,x_n)\tag{1}$$

under the condition that certain other functions assume given values:

$$g_i(x_1,\dots,x_n)=b_i,\qquad i=1,\dots,m,\quad m<n.\tag{2}$$

In this problem the set $X$, in which the values of the vector function $g=(g_1,\dots,g_m)$ entering in the supplementary condition (2) may lie, contains only the single point $b=(b_1,\dots,b_m)$ of the $m$-dimensional Euclidean space $\mathbf{R}^m$.
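As a concrete illustration of problem (1), (2) (the example is an assumption for illustration, not part of the original article), take $n=2$, $m=1$: minimize $f(x_1,x_2)=x_1^2+x_2^2$ under the single condition $x_1+x_2=1$. Since the constraint can be solved for $x_2$, the problem reduces to an unconditional one-variable minimization:

```python
# A hypothetical instance of problem (1)-(2): minimize f(x1, x2) = x1^2 + x2^2
# subject to g(x1, x2) = x1 + x2 = 1 (n = 2, m = 1; the admissible set is the
# single value b = 1).
def f(x1, x2):
    return x1**2 + x2**2

b = 1.0

# Eliminate x2 using the constraint: x2 = b - x1. The problem becomes the
# unconditional minimization of h(x1) = x1^2 + (b - x1)^2, and
#   h'(x1) = 2*x1 - 2*(b - x1) = 0  =>  x1 = b/2.
x1_star = b / 2.0
x2_star = b - x1_star
print(x1_star, x2_star, f(x1_star, x2_star))  # 0.5 0.5 0.5
```

The same elimination idea fails as soon as the constraints cannot be solved explicitly, which is what motivates the Lagrange-multiplier technique discussed below.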

If in (2), along with the equalities, one also has certain inequalities, say

$$g_i(x_1,\dots,x_n)=b_i,\quad i=1,\dots,k;\qquad g_i(x_1,\dots,x_n)\le b_i,\quad i=k+1,\dots,m,\tag{3}$$

then the classical problem reduces to a problem in non-linear programming. In the problem (1), (3) the set $X$ of admissible values of the vector function $g=(g_1,\dots,g_m)$ is a certain curvilinear polyhedron, contained (in general) in the $(n-k)$-dimensional hypersurface defined by the first $k$, $k<m$, conditions of equality type in (3). The boundary of this curvilinear polyhedron is determined by the inequalities in (3).
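A standard numerical approach to such non-linear programming problems is the quadratic penalty method, which replaces the conditional problem by a sequence of unconditional ones. The following sketch (the objective, constraint, and penalty weights are illustrative assumptions, not from the article) treats one inequality constraint:

```python
# A quadratic-penalty sketch for a problem of type (1), (3) with one
# inequality constraint (illustrative data):
#   minimize f(x, y) = (x - 2)^2 + (y - 2)^2  subject to  x + y <= 1.
# The penalized unconditional objective is
#   F(x, y) = f(x, y) + mu * max(0, x + y - 1)^2,
# and its minimizers approach the conditional minimum (0.5, 0.5) as mu grows.
def grad_F(x, y, mu):
    v = max(0.0, x + y - 1.0)          # violation of the inequality
    return (2.0*(x - 2.0) + 2.0*mu*v,
            2.0*(y - 2.0) + 2.0*mu*v)

x = y = 0.0
for mu in (1.0, 10.0, 100.0, 1000.0):  # gradually increase the penalty weight
    for _ in range(20000):             # plain gradient descent
        gx, gy = grad_F(x, y, mu)
        x -= 1e-4 * gx                 # small step: keeps the largest mu stable
        y -= 1e-4 * gy
print(x, y)                            # close to (0.5, 0.5)
```

For a finite penalty weight the computed point violates the constraint slightly; the violation shrinks as the weight grows.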

A particular case of the problem (1), (3) on a conditional extremum is the problem of linear programming, in which all the functions $f$ and $g_i$ under consideration are linear in $x_1,\dots,x_n$. In a linear programming problem the set $X$ of admissible values of the vector function $g$ involved in the restrictions on the domain of the variables $x_1,\dots,x_n$ is a convex polyhedron, contained in the $(n-k)$-dimensional hypersurface defined by the conditions of equality type in (3).
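Because a linear objective over a convex polyhedron attains its extremum at a vertex, a tiny linear programming instance can be solved by enumerating the vertices directly. The data below are assumptions chosen for illustration, not from the article:

```python
from itertools import combinations

# A toy linear-programming instance: maximize c.x over the convex polyhedron
# {x : A x <= b}. A linear objective attains its extremum at a vertex, so we
# enumerate vertices as intersections of pairs of constraint boundaries and
# keep the feasible ones.
c = (3.0, 2.0)
A = [(1.0, 1.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]   # x+y<=4, x<=2, x>=0, y>=0
b = [4.0, 2.0, 0.0, 0.0]

def feasible(x, eps=1e-9):
    return all(a[0]*x[0] + a[1]*x[1] <= bi + eps for a, bi in zip(A, b))

best = None
for i, j in combinations(range(len(A)), 2):
    det = A[i][0]*A[j][1] - A[i][1]*A[j][0]
    if abs(det) < 1e-12:
        continue                       # parallel boundaries: no vertex
    # Solve the 2x2 system a_i.x = b_i, a_j.x = b_j by Cramer's rule.
    x = ((b[i]*A[j][1] - A[i][1]*b[j]) / det,
         (A[i][0]*b[j] - b[i]*A[j][0]) / det)
    if feasible(x):
        val = c[0]*x[0] + c[1]*x[1]
        if best is None or val > best[0]:
            best = (val, x)
print(best)  # (10.0, (2.0, 2.0))
```

Vertex enumeration is exponential in general; practical solvers use the simplex method or interior-point methods instead, but the extremum-at-a-vertex property they exploit is the same.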

In a similar way the majority of problems on optimizing functionals that are of practical interest fall under the heading of problems about conditional extrema (cf. Isoperimetric problem; Bolza problem; Lagrange problem; Mayer problem). Thus, as in mathematical programming, the basic problems in the calculus of variations and in the theory of optimal control are problems about conditional extrema.

For solving problems on a conditional extremum, and in particular when considering related theoretical questions, it is often helpful to use (undetermined) Lagrange multipliers, which reduce the problem to one on an unconditional extremum and thus simplify the derivation of necessary conditions for optimality. The use of Lagrange multipliers underlies the majority of classical methods for solving problems on a conditional extremum.
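The reduction to an unconditional extremum can be carried out explicitly when the constraint is linear. In the following hypothetical example (not from the article), the stationarity conditions of the Lagrange function form a linear system with a closed-form solution:

```python
# A hypothetical example: minimize
#   f(x, y) = x^2 + y^2  subject to  g(x, y) = x + 2y = 5.
# Introduce a Lagrange multiplier lam and the Lagrange function
#   L(x, y, lam) = f(x, y) - lam * (g(x, y) - 5),
# whose unconditional stationarity conditions are
#   dL/dx = 2x - lam   = 0,
#   dL/dy = 2y - 2*lam = 0,
#   dL/dlam: x + 2y = 5.
# Substituting x = lam/2 and y = lam into the constraint gives lam = 2:
lam = 2.0
x, y = lam / 2.0, lam
assert abs(x + 2*y - 5.0) < 1e-12      # the constraint is satisfied
print(x, y, x**2 + y**2)               # 1.0 2.0 5.0
```

Geometrically, the multiplier makes the gradient of $f$ proportional to the gradient of the constraint at the solution, which is exactly the classical necessary condition for a conditional extremum.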


How to Cite This Entry:
Conditional extremum. Encyclopedia of Mathematics.
This article was adapted from an original article by I.B. Vapnyarskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.