
Multi-objective optimization

From Encyclopedia of Mathematics
Revision as of 17:05, 7 February 2011 by 127.0.0.1 (talk) (Importing text file)

In the real world one often encounters optimization problems with more than one (usually conflicting) objective function, such as the cost and the performance index of an industrial product. Such optimization problems are called multi-objective, or vector, optimization problems.

A multi-objective optimization problem with $p$ objective functions $f_1, \dots, f_p$ can be formulated as follows:

$$(\mathrm{P}) \qquad \min_{x \in X} \; f(x) = (f_1(x), \dots, f_p(x)),$$

where $X$ is a constraint set in a certain space. One may consider a more general infinite-dimensional objective function. If the set $X$ is specified by inequality (and/or equality) constraints in a finite-dimensional Euclidean space, i.e.,

$$X = \{ x \in \mathbf{R}^n : g_j(x) \leq 0, \; j = 1, \dots, m \},$$

then the problem is called the multi-objective programming problem.

Since the objective space $\mathbf{R}^p$ is only partially ordered, one first has to make precise the concept of a solution of the problem (P).

1) A point $\hat{x} \in X$ is a completely optimal solution of (P) if

$$f_i(\hat{x}) \leq f_i(x) \quad \text{for all } i = 1, \dots, p \text{ and all } x \in X.$$
Unfortunately, a completely optimal solution rarely exists.
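A simple bi-objective example (illustrative; not part of the original article) shows why:

```latex
% On X = [0,1], take two conflicting objectives:
%   f_1(x) = x      (minimized at x = 0),
%   f_2(x) = 1 - x  (minimized at x = 1).
\min_{x \in [0,1]} \bigl( f_1(x), f_2(x) \bigr),
\qquad f_1(x) = x, \quad f_2(x) = 1 - x .
% No single point minimizes both objectives simultaneously, so no
% completely optimal solution exists; in fact, every point of [0,1]
% turns out to be Pareto optimal in the sense defined next.
```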

2) A point $\hat{x} \in X$ is a Pareto optimal (or efficient) solution of (P) if there exists no $x \in X$ such that

$$f_i(x) \leq f_i(\hat{x}) \text{ for all } i = 1, \dots, p, \quad \text{with } f_j(x) < f_j(\hat{x}) \text{ for at least one } j.$$
Slightly strengthened and weakened solution concepts of Pareto optimality are called proper Pareto optimality and weak Pareto optimality, respectively. By introducing a more general preference structure (preference ordering) in the objective space, one may obtain a more general solution concept.
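For a finite set of candidate objective vectors, Pareto optimality can be checked directly from the definition. The following sketch (the function name is illustrative, not from the article) filters out dominated points, assuming all objectives are to be minimized:

```python
def pareto_front(points):
    """Return the Pareto optimal (non-dominated) points of a finite set.

    A point p is dominated if some other point q satisfies
    q[i] <= p[i] in every objective i, with strict inequality in at
    least one objective (guaranteed here by q != p when q <= p).
    """
    front = []
    for p in points:
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Example: three candidate objective vectors (f1, f2).
print(pareto_front([(1, 3), (2, 2), (3, 3)]))  # (3, 3) is dominated by (2, 2)
```

The pairwise check costs $O(n^2 p)$ comparisons, which is fine for small candidate sets; specialized sorting-based methods exist for large ones.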

Several mathematical notions from ordinary scalar optimization, such as optimality conditions, stability, sensitivity, and duality, have been extended to multi-objective optimization. The details can be found in [a1], [a2], [a3].

For the multi-objective programming problem the following extended Kuhn–Tucker condition has been obtained: If $\hat{x}$ is a Pareto optimal solution and the Kuhn–Tucker constraint qualification is satisfied at $\hat{x}$, then there exist non-negative vectors $\lambda \in \mathbf{R}^p$, $\lambda \neq 0$, and $\mu \in \mathbf{R}^m$ such that

$$\sum_{i=1}^{p} \lambda_i \nabla f_i(\hat{x}) + \sum_{j=1}^{m} \mu_j \nabla g_j(\hat{x}) = 0, \qquad \mu_j g_j(\hat{x}) = 0, \quad j = 1, \dots, m.$$
In order to consider stability and sensitivity, the following parametric multi-objective optimization problem is considered:

$$(\mathrm{P}_u) \qquad \min_{x \in X(u)} \; f(x, u),$$

where $u$ is a parameter vector. Let $W(u)$ be the set of all Pareto optimal values in the objective space to $(\mathrm{P}_u)$. The set-valued mapping $u \mapsto W(u)$ is considered to be a generalization of the marginal (optimal-value) function in ordinary scalar optimization. The behaviour of $W$ has been analyzed both qualitatively and quantitatively.

Several types of duality, such as Lagrange duality, Wolfe duality, and conjugate duality, are investigated in optimization. Each of these duality theories has been extended to multi-objective optimization. For details see [a1], [a2], [a3].

In order to obtain a Pareto optimal solution of (P) one usually solves a scalarized optimization problem. Typical examples of the scalarization methods are as follows.

1) The weighted sum minimization method:

$$\min_{x \in X} \; \sum_{i=1}^{p} w_i f_i(x),$$

where $w_i \geq 0$ denotes the relative importance of each $f_i$;

2) The $\varepsilon$-constraint method:

$$\min f_k(x) \quad \text{subject to } x \in X, \; f_i(x) \leq \varepsilon_i, \; i \neq k,$$

where $\varepsilon_i$ denotes the admissible worst level of $f_i$;

3) The norm minimization method:

$$\min_{x \in X} \; \| f(x) - \hat{f} \|,$$

where $\hat{f}$ is the ideal (or reference) point in the objective space $\mathbf{R}^p$. One often uses the (augmented) Chebyshev norm.
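The three scalarization methods above can be sketched on a small bi-objective problem; the sample objectives, the grid, and all names below are illustrative assumptions, not taken from the article. Each scalarized problem is an ordinary single-objective minimization:

```python
# Illustrative bi-objective problem on the feasible set X = [0, 2],
# discretized to a finite grid so plain min() can solve each subproblem.

def f(x):
    # Two conflicting objectives: f1 pulls x toward 0, f2 pulls x toward 2.
    return (x ** 2, (x - 2) ** 2)

X = [i / 100 for i in range(201)]  # grid over [0, 2]

def weighted_sum(w1, w2):
    """1) Weighted sum: minimize w1*f1(x) + w2*f2(x) over X."""
    return min(X, key=lambda x: w1 * f(x)[0] + w2 * f(x)[1])

def eps_constraint(eps2):
    """2) epsilon-constraint: minimize f1(x) subject to f2(x) <= eps2."""
    feasible = [x for x in X if f(x)[1] <= eps2]
    return min(feasible, key=lambda x: f(x)[0])

def chebyshev(ref=(0.0, 0.0)):
    """3) Norm minimization: minimize the Chebyshev norm of f(x) - ref."""
    return min(X, key=lambda x: max(abs(fi - ri) for fi, ri in zip(f(x), ref)))

print(weighted_sum(1.0, 1.0))  # -> 1.0 (minimizes x^2 + (x-2)^2)
print(eps_constraint(1.0))     # -> 1.0 (smallest f1 with (x-2)^2 <= 1)
print(chebyshev())             # -> 1.0 (equalizes the two objectives)
```

Varying the weights $w_i$, the levels $\varepsilon_i$, or the reference point traces out different Pareto optimal solutions of the underlying problem.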

From the viewpoint of decision making, one Pareto optimal solution should be chosen as a final decision according to the preference structure of the decision maker. Roughly speaking, to this end there are three approaches:

1) generate all or sufficiently many Pareto optimal solutions and leave it to the decision maker to choose the preferred solution;

2) find the value (utility) function which represents the preference structure of the decision maker and maximize this scalar function (this approach is usually called multi-attribute utility theory);

3) find a Pareto optimal solution and extract local (partial) information about the preference structure from the decision maker. Based on this information, compute another Pareto optimal solution. This process is repeated until the decision maker is satisfied by the current solution. This approach is called an interactive method. Several interesting methods have been proposed [a4].

References

[a1] Y. Sawaragi, H. Nakayama, T. Tanino, "Theory of multiobjective optimization", Acad. Press (1985)
[a2] J. Jahn, "Mathematical vector optimization in partially ordered linear spaces", Peter Lang (1986)
[a3] D.T. Luc, "Theory of vector optimization", Springer (1989)
[a4] R. Steuer, "Multiple criteria optimization: theory, computation and application", Wiley (1986)
How to Cite This Entry:
Multi-objective optimization. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Multi-objective_optimization&oldid=47919
This article was adapted from an original article by T. Tanino (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article