
Fritz John condition


A necessary condition for local optimality in problems with inequality constraints. It is closely related to the classical problem of minimizing a function $f$ on a constraint set $\{ x \in \mathbb{R}^n : h_i(x) = 0,\ i \in Q \}$. The functions are defined on $\mathbb{R}^n$, the objective $f$ is assumed to be differentiable (cf. also Differentiable function) and the constraints $h_i$, $i \in Q$, are continuously differentiable at a feasible point $x^*$ tested for optimality; $Q$ is a finite index set. If $f(x) \ge f(x^*)$ for every feasible $x \in N(x^*)$, where $N(x^*)$ is a neighbourhood of $x^*$, then $x^*$ is said to be a constrained local minimum. At such a point the gradients of the objective function and the constraints are linearly dependent, i.e., there exist multipliers $u_0$, $u_i$, $i \in Q$, not all zero, such that
$$u_0 \nabla f(x^*) + \sum_{i \in Q} u_i \nabla h_i(x^*) = 0.$$
If the gradients of the constraints are linearly independent, then one can specify $u_0 = 1$. This result, usually proved by the implicit function theorem (cf. Implicit function), is used to formulate the Lagrange multiplier method in calculus.
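
For instance (a standard textbook illustration), minimize $f(x) = x_1 + x_2$ subject to $h_1(x) = x_1^2 + x_2^2 - 2 = 0$. The constrained minimum is $x^* = (-1, -1)$, where $\nabla f(x^*) = (1, 1)^T$ and $\nabla h_1(x^*) = (-2, -2)^T$, so
$$\nabla f(x^*) + \tfrac{1}{2} \nabla h_1(x^*) = 0;$$
since $\nabla h_1(x^*) \ne 0$, the multiplier of the objective can indeed be taken as $u_0 = 1$, with $u_1 = \tfrac{1}{2}$.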

Now assume that the feasible set is determined by inequality constraints, i.e., consider the problem
$$(\mathrm{NP}) \qquad \min f(x) \quad \text{subject to} \quad f_i(x) \le 0, \quad i \in P = \{1, \dots, m\}.$$

The Fritz John condition describes local optimality of a feasible point $x^*$ using the gradients of the objective function and the "active" constraints, i.e., those $f_i$ with $i \in P(x^*) = \{ i \in P : f_i(x^*) = 0 \}$ [a10]. The basic Fritz John condition is as follows. Consider the problem (NP), where all functions are differentiable at some feasible $x^*$. If $x^*$ is a constrained local minimum, then there exist multipliers $u_0 \ge 0$, $u_i \ge 0$, $i \in P(x^*)$, not all zero, such that
$$u_0 \nabla f(x^*) + \sum_{i \in P(x^*)} u_i \nabla f_i(x^*) = 0.$$
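
The multiplier $u_0$ may vanish. In the well-known example of minimizing $f(x) = -x_1$ subject to $f_1(x) = x_2 - (1 - x_1)^3 \le 0$ and $f_2(x) = -x_2 \le 0$, the minimum is attained at $x^* = (1, 0)$, where both constraints are active and
$$\nabla f(x^*) = (-1, 0)^T, \qquad \nabla f_1(x^*) = (0, 1)^T, \qquad \nabla f_2(x^*) = (0, -1)^T.$$
The Fritz John condition holds with $u_0 = 0$, $u_1 = u_2 = 1$, but it cannot hold with $u_0 > 0$.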

By Gordan's theorem of the alternative (e.g., [a14]), or the Dubovitskii–Milyutin theorem (e.g., [a7]), the Fritz John condition is equivalent to the inconsistency, in $d \in \mathbb{R}^n$, of the system
$$(\nabla f(x^*))^T d < 0, \qquad (\nabla f_i(x^*))^T d < 0, \quad i \in P(x^*),$$
which is a "primal" optimality condition. For problems with both equality and inequality constraints, the Fritz John condition requires that the equality constraints be continuously differentiable, while the active inequality constraints need only be differentiable. The multipliers that correspond to the equality constraints are unrestricted in sign. This result, referred to as the Mangasarian–Fromovitz condition, does not follow from the Fritz John condition for inequality constraints alone; e.g., [a14], [a15]. Conditions that guarantee that the leading multiplier $u_0$ in the Fritz John condition is positive, in which case it can be set $u_0 = 1$, are called regularity conditions or constraint qualifications. One of these is that the gradients of the active constraints are linearly independent. This condition is typically violated in multi-level and multi-objective optimization problems; e.g., [a4], [a23].
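
The form of Gordan's theorem used here states: for an $m \times n$ matrix $A$, exactly one of the two systems
$$Ad < 0 \quad (d \in \mathbb{R}^n), \qquad A^T u = 0, \quad u \ge 0, \quad u \ne 0 \quad (u \in \mathbb{R}^m),$$
has a solution. Taking the rows of $A$ to be $(\nabla f(x^*))^T$ and $(\nabla f_i(x^*))^T$, $i \in P(x^*)$, yields the stated equivalence between the primal condition and the existence of Fritz John multipliers.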

Convex programs.

The Fritz John condition is not sufficient for optimality even for linear programs; e.g., [a3], p. 150, [a4], [a14]. However, a reformulation of it is both necessary and sufficient for optimality of a feasible point for a convex program, i.e., for the problem (NP) when all functions are convex (cf. Convex function (of a real variable)). First, using properties of feasible directions, optimality of $x^*$ is characterized by the inconsistency of the system
$$(\nabla f(x^*))^T d < 0, \qquad (\nabla f_i(x^*))^T d \le 0, \quad i \in P(x^*) \setminus P^=, \qquad d \in D^=(x^*),$$
where $P^=$ is the set of the constraints that are equal to zero on the entire feasible set and $D^=(x^*)$ is the intersection of the cones of directions of constancy at $x^*$ of all such constraints. This is equivalent to the Fritz John formulation
$$u_0 \nabla f(x^*) + \sum_{i \in P(x^*) \setminus P^=} u_i \nabla f_i(x^*) \in (D^=(x^*))^\circ,$$
where the multipliers are non-negative and not all equal to zero, and $(D^=(x^*))^\circ$ is the polar set of $D^=(x^*)$ [a4]. Moreover, feasibility of $x^*$ guarantees here $u_0 > 0$. If the gradients of the active constraints are linearly independent, or, more generally, if $P^=$ is an empty set (this is known as Slater's condition, cf. Mathematical programming), then the Fritz John condition becomes
$$\nabla f(x^*) + \sum_{i \in P(x^*)} u_i \nabla f_i(x^*) = 0, \qquad u_i \ge 0, \quad i \in P(x^*).$$

Consistency of this system is known as the Karush–Kuhn–Tucker condition (cf. also Karush–Kuhn–Tucker conditions).
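
As a simple illustration, consider minimizing $f(x) = x_1^2 + x_2^2$ subject to $f_1(x) = 1 - x_1 - x_2 \le 0$. Slater's condition holds (e.g., at $x = (1, 1)$), and at $x^* = (1/2, 1/2)$ the Karush–Kuhn–Tucker system is satisfied with $u_1 = 1$:
$$\nabla f(x^*) + u_1 \nabla f_1(x^*) = (1, 1)^T + 1 \cdot (-1, -1)^T = 0.$$
The role of $P^=$ is visible in the one-dimensional program $\min \{ x : x^2 \le 0 \}$: the feasible set is $\{0\}$ and the constraint vanishes identically on it, so $P^= = \{1\}$; the Karush–Kuhn–Tucker system $1 + u \cdot 0 = 0$ is inconsistent at the optimal point $x^* = 0$, while the reformulated condition above does hold, since $D^=(x^*) = \{0\}$ and $(D^=(x^*))^\circ = \mathbb{R}$.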

Alternatively, in convex programming the Fritz John condition is equivalent to the existence of a saddle point of a Lagrangian function. First, define $\tilde P = P \setminus P^=$, let $s$ be the cardinality of $\tilde P$, and consider the Lagrangian
$$L(x, u) = f(x) + \sum_{i \in \tilde P} u_i f_i(x).$$
Consider the convex program (NP). Then a point $x^*$ is optimal if and only if there exists a $u^* \ge 0$ in $\mathbb{R}^s$ such that
$$L(x^*, u) \le L(x^*, u^*) \le L(x, u^*)$$
for every $x$ satisfying $f_i(x) \le 0$, $i \in P^=$, and every $u \ge 0$ in $\mathbb{R}^s$ (saddle-point characterization of optimality).
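
In the quadratic example above, $L(x, u) = x_1^2 + x_2^2 + u(1 - x_1 - x_2)$ and $(x^*, u^*) = ((1/2, 1/2), 1)$ is such a saddle point: $L(x^*, u) = 1/2$ for every $u \ge 0$, while $L(x, 1) = (x_1 - 1/2)^2 + (x_2 - 1/2)^2 + 1/2 \ge 1/2$ for every $x$.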

This formulation is useful when some functions are not differentiable. Using non-smooth analysis, one can replace derivatives by other objects such as subgradients (see, e.g., [a3]).

Optimality and stability.

Descriptions of optimality often require stability in the convex model
$$(\mathrm{NP}, \theta) \qquad \min_x f(x, \theta) \quad \text{subject to} \quad f_i(x, \theta) \le 0, \quad i \in P;$$
all functions are continuous, $\theta \in \mathbb{R}^p$ is considered as a "parameter", and $f(\cdot, \theta)$, $f_i(\cdot, \theta)$ are convex for every $\theta$. (The functions need not be convex in $\theta$.) Assume that the set of optimal solutions $\bar F(\theta)$ of the corresponding convex program is non-empty and bounded at some $\theta = \theta^*$. Perturbations of $\theta$ from $\theta^*$ that locally preserve lower semi-continuity of the feasible set mapping $\theta \mapsto F(\theta) = \{ x : f_i(x, \theta) \le 0,\ i \in P \}$ form a "region of stability" at $\theta^*$, denoted by $S(\theta^*)$. Denote the constraints that are equal to zero on $F(\theta^*)$ by $P^=(\theta^*)$. Let $\tilde P(\theta^*) = P \setminus P^=(\theta^*)$, let $s$ be the cardinality of the set $\tilde P(\theta^*)$, and consider the Lagrangian
$$L(x, u; \theta) = f(x, \theta) + \sum_{i \in \tilde P(\theta^*)} u_i f_i(x, \theta).$$
Then the following condition characterizes local optimality of $\theta^*$, relative to stable perturbations, for the optimal value function $\bar f(\theta) = \min \{ f(x, \theta) : x \in F(\theta) \}$ (cf., e.g., [a23]).

Consider the convex model $(\mathrm{NP}, \theta)$ around some $\theta^*$ and fix an optimal solution $\bar x \in \bar F(\theta^*)$. Then $\theta^*$ locally minimizes $\bar f$ relative to perturbations in $S(\theta^*)$ if and only if there exists a non-negative function $u^* : S(\theta^*) \to \mathbb{R}^s$ such that, whenever $\theta \in S(\theta^*)$ is sufficiently close to $\theta^*$,
$$L(\bar x, u; \theta^*) \le L(\bar x, u^*(\theta); \theta^*) \le L(x, u^*(\theta); \theta)$$
for every non-negative $u \in \mathbb{R}^s$ and every $x$ satisfying $f_i(x, \theta) \le 0$, $i \in P^=(\theta^*)$ (a characterization of local optimality on a region of stability).

The above claim does not generally hold if the stability requirement is omitted: there exist convex models in which a parameter $\theta^*$ globally minimizes the optimal value function $\bar f$, yet no non-negative multiplier function $u^*(\cdot)$ satisfies the saddle-point condition at perturbations $\theta$ outside the region of stability $S(\theta^*)$; the failure of the second inequality is exhibited by a suitable sequence of feasible points at such a $\theta$.
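
A minimal illustration of the region of stability itself: in the one-parameter model $\min \{ x : x^2 - \theta \le 0 \}$ one has $F(\theta) = [-\sqrt{\theta}, \sqrt{\theta}]$ for $\theta \ge 0$ and $F(\theta) = \emptyset$ for $\theta < 0$. At $\theta^* = 0$ the perturbations $\theta \ge 0$ preserve lower semi-continuity of the feasible set mapping (the point $0 \in F(0)$ remains feasible), so they belong to $S(0)$, whereas perturbations $\theta < 0$ destroy feasibility altogether and lie outside $S(0)$.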

The necessary conditions for local optimality of parameters, subject to stable perturbations, are simplified under "input constraint qualifications" [a21]. Global optimality of a parameter $\theta^*$ for the convex model can be characterized by a saddle-point condition on the region of cooperation of $\theta^*$, a certain set of parameters introduced in [a22]; no stability is required in this case [a22]. Programs that assume the form of a convex model after "freezing" some of the variables are called partly convex. Every program with twice continuously differentiable functions can be transformed into an equivalent partly convex program (cf., e.g., [a8], [a12]); a sketch of one such transformation follows.
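
In the spirit of [a8], [a12]: if $f$ is twice continuously differentiable and $\nabla^2 f(x) + \alpha I$ is positive semi-definite on the region of interest for some $\alpha > 0$, put
$$g(x, y) = f(x) + \tfrac{\alpha}{2} \| x - y \|^2 .$$
Then $g(\cdot, y)$ is convex for every fixed ("frozen") $y$, and $\min_x f(x) = \min_{x, y} g(x, y)$, since the inner minimization over $y$ returns $y = x$; thus minimizing $f$ is equivalent to a partly convex program in the variables $(x, y)$.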

Abstract programs.

Various extensions of the basic Fritz John condition have been studied in abstract spaces with a finite or infinite number of constraints; e.g., [a2], [a7], [a11], [a13], [a24]. The weakest derivative for which this condition holds appears to be the compact derivative of Gil de Lamadrid and Sova [a16], in non-sequential locally convex Hausdorff spaces. A saddle-point condition that characterizes locally optimal parameters, relative to their stable perturbations, for an abstract convex model in which the variable and the parameter belong to suitable abstract spaces and the constraints are indexed by a compact set, can be given in terms of the existence of a finite Borel measure playing the role of the multipliers in the Lagrangian (cf., e.g., [a1]). The abstract versions of the Fritz John and Karush–Kuhn–Tucker conditions are applicable to a wide range of problems, from optimal control problems in engineering [a5], [a6], [a17] and management [a18], [a20] to the prior selection problem in Bayesian statistical inference [a1]. These conditions are used for identification of optimal solutions, parameters, controls, and states, and for formulations of duality theories and numerical methods. The multipliers describe the sensitivity of the optimal value function to right-hand side perturbations of the constraints and can be interpreted as values of the constraints ("shadow prices" in linear programming). Some optimization problems can alternatively be studied by the calculus of variations, which is, fundamentally, the same method; e.g., [a6], [a9], [a18], [a19], [a20], p. 17.
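
The sensitivity interpretation of the multipliers can be checked on the quadratic example given earlier: perturbing the constraint to $1 + \epsilon - x_1 - x_2 \le 0$ gives the optimal value $\bar f(\epsilon) = (1 + \epsilon)^2 / 2$, so $d \bar f / d \epsilon$ at $\epsilon = 0$ equals $1$, which is exactly the multiplier $u_1 = 1$ found there.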

References

[a1] M. Asgharian, S. Zlobec, "Abstract parametric programming", Preprint, McGill Univ. (March 2000)
[a2] V. Barbu, Th. Precupanu, "Convexity and optimization in Banach spaces", Sijthoff & Noordhoff (1978)
[a3] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, "Nonlinear programming: Theory and algorithms", 2nd ed., Wiley (1993)
[a4] A. Ben-Israel, A. Ben-Tal, S. Zlobec, "Optimality in nonlinear programming: A feasible directions approach", Wiley/Interscience (1981)
[a5] A.E. Bryson, Jr., Yu-Chi Ho, "Applied optimal control", Blaisdell (1969)
[a6] M. Canon, C. Cullum, E. Polak, "Theory of optimal control and mathematical programming", McGraw-Hill (1970)
[a7] I.V. Girsanov, "Lectures on mathematical theory of extremum problems", Lecture Notes in Economics and Mathematical Systems 67, Springer (1972)
[a8] J. Guddat, H.Th. Jongen, "On global optimization based on parametric optimization", in J. Guddat et al. (eds.), Advances in Mathematical Optimization, Akademie-Verlag, Berlin (1988) pp. 63–79
[a9] H. Halkin, "Maximum principle of the Pontryagin type for systems described by nonlinear difference equations", SIAM J. Control 4 (1966) pp. 90–111
[a10] F. John, "Extremum problems with inequalities as subsidiary conditions", in K.O. Friedrichs et al. (eds.), Studies and Essays, Courant Anniversary Volume, Wiley/Interscience (1948) (Reprinted in: J. Moser (ed.), Fritz John Collected Papers 2, Birkhäuser (1985) pp. 543–560)
[a11] M.B. Lignola, J. Morgan, "Existence of solutions to generalized bilevel programming problem", in A. Migdalas et al. (eds.), Multilevel Optimization: Algorithms and Applications, Kluwer Acad. Publ. (1998) pp. 315–332
[a12] W.B. Liu, C.A. Floudas, "A remark on the GOP algorithm for global optimization", J. Global Optim. 3 (1993) pp. 519–521
[a13] D.G. Luenberger, "Optimization by vector space methods", Wiley (1969)
[a14] O.L. Mangasarian, "Nonlinear programming", McGraw-Hill (1969)
[a15] O.L. Mangasarian, S. Fromovitz, "The Fritz John necessary optimality conditions in the presence of equality and inequality constraints", J. Math. Anal. Appl. 17 (1967) pp. 37–47
[a16] H. Massam, S. Zlobec, "Various definitions of the derivative in mathematical programming", Math. Programming 7 (1974) pp. 144–161 (Addendum: ibid. 14 (1978) pp. 108–111)
[a17] L.S. Pontryagin, V.G. Boltyanski, R.V. Gamkrelidze, E.F. Mishchenko, "The mathematical theory of optimal processes", Wiley (1962)
[a18] S.P. Sethi, "A survey of management science applications of the deterministic maximum principle", TIMS Studies in the Management Sci. 9, North-Holland (1978) pp. 33–67
[a19] D.R. Smith, "Variational methods in optimization", Prentice-Hall (1974)
[a20] C.S. Tapiero, "Time, dynamics and the process of management modeling", TIMS Studies in the Management Sci. 9, North-Holland (1978) pp. 7–31
[a21] M. van Rooyen, M. Sears, S. Zlobec, "Constraint qualifications in input optimization", J. Austral. Math. Soc. Ser. B 30 (1989) pp. 326–342
[a22] S. Zlobec, "Partly convex programming and Zermelo's navigation problems", J. Global Optim. 7 (1995) pp. 229–259
[a23] S. Zlobec, "Stable parametric programming", Optimization 45 (1999) pp. 387–416 (Augmented version forthcoming as a research monograph, Kluwer Acad. Publ., Applied Optim. Series)
[a24] S. Zlobec, B.D. Craven, "Stabilization and determination of the set of minimal binding constraints in convex programming", Math. Operationsforschung und Statistik, Ser. Optim. 12 (1981) pp. 203–220
How to Cite This Entry:
Fritz John condition. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Fritz_John_condition&oldid=11697
This article was adapted from an original article by S. Zlobec (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.