Multi-objective optimization

In the real world one often encounters optimization problems with more than one (usually conflicting) objective function, such as the cost and the performance index of an industrial product. Such optimization problems are called multi-objective, or vector, optimization problems.

A multi-objective optimization problem with $ p $ objective functions can be formulated as follows:

$$ \textrm{(P)} \qquad \left\{ \begin{array}{l} \textrm{minimize} \ \ f(x) = (f_1(x), \dots, f_p(x))^T, \\ \textrm{subject to} \ \ x \in X, \end{array} \right. $$

where $ X $ is a constraint set in a certain space. One may also consider more general, infinite-dimensional objective functions. If the set $ X $ is specified by inequality (and/or equality) constraints in a finite-dimensional Euclidean space, i.e.,

$$ X = \{ x \in \mathbf R^n : g_j(x) \leq 0, \ j = 1, \dots, m \}, $$

then the problem is called the multi-objective programming problem.
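
For concreteness, here is a minimal sketch (assuming NumPy; the problem data are hypothetical and not part of the original article) of a two-objective programming problem with $ p = 2 $, $ n = 2 $ and a single inequality constraint:

    import numpy as np

    # Hypothetical bi-objective programming problem (p = 2, n = 2):
    #   minimize   f(x) = (f1(x), f2(x))^T
    #   subject to g1(x) = x1 + x2 - 1 <= 0,  i.e. X = {x : g1(x) <= 0}.
    def f(x):
        x1, x2 = x
        return np.array([x1**2 + x2**2,           # f1, e.g. a cost
                         (x1 - 1.0)**2 + x2**2])  # f2, e.g. a performance index

    def g(x):
        return np.array([x[0] + x[1] - 1.0])      # constraint values g_j(x)

    def feasible(x):
        return bool(np.all(g(x) <= 0.0))          # x is in X iff all g_j(x) <= 0

The two objectives conflict: $ f_1 $ is minimized at the origin and $ f_2 $ at $ (1, 0) $, so no single point minimizes both.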

Since the objective space $ \mathbf R^p $ is only partially ordered, one first has to make precise the concept of a solution of the problem (P).

1) A point $ x ^ {*} \in X $ is a completely optimal solution of (P) if

$$ f_i(x^*) \leq f_i(x) \quad \forall i = 1, \dots, p, \ \forall x \in X. $$

Unfortunately, a completely optimal solution rarely exists.

2) A point $ x ^ {*} \in X $ is a Pareto optimal (or efficient) solution of (P) if there exists no $ x \in X $ such that

$$ f_i(x) \leq f_i(x^*) \quad \forall i = 1, \dots, p, $$

and

$$ f_i(x) < f_i(x^*) \quad \textrm{for some } i \in \{ 1, \dots, p \}. $$
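
Computationally, this definition suggests a simple dominance filter: among finitely many sampled objective vectors, keep those that no other vector dominates. A sketch (the helper names are illustrative, assuming NumPy):

    import numpy as np

    def dominates(fa, fb):
        # fa dominates fb: fa <= fb in every component, strictly in at least one.
        return bool(np.all(fa <= fb) and np.any(fa < fb))

    def pareto_filter(F):
        # F has one objective vector per row; return indices of rows that
        # no other row dominates (the Pareto optimal candidates).
        return [i for i, fi in enumerate(F)
                if not any(dominates(fj, fi)
                           for j, fj in enumerate(F) if j != i)]

    # Example: indices of non-dominated points among 50 random 2-vectors.
    idx = pareto_filter(np.random.rand(50, 2))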

Slightly strengthened and weakened solution concepts of Pareto optimality are called proper Pareto optimality and weak Pareto optimality, respectively. By introducing a more general preference structure (preference ordering) in the objective space, one may obtain a more general solution concept.

Several mathematical notions from ordinary scalar optimization, such as optimality conditions, stability, sensitivity, and duality, have been extended to multi-objective optimization. The details can be found in [a1], [a2], [a3].

For the multi-objective programming problem the following extended Kuhn–Tucker condition has been obtained: if $ x^* $ is a Pareto optimal solution and the Kuhn–Tucker constraint qualification is satisfied at $ x^* $, then there exist non-negative vectors $ \mu^* \in \mathbf R^p $, $ \mu^* \neq 0 $, and $ \lambda^* \in \mathbf R^m $ such that

$$ \sum_{i=1}^{p} \mu_i^* \nabla f_i(x^*) + \sum_{j=1}^{m} \lambda_j^* \nabla g_j(x^*) = 0, $$

$$ \lambda_j^* g_j(x^*) = 0, \quad j = 1, \dots, m. $$
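
As a worked illustration on the hypothetical problem above ($ f_1(x) = x_1^2 + x_2^2 $, $ f_2(x) = (x_1 - 1)^2 + x_2^2 $, $ g_1(x) = x_1 + x_2 - 1 \leq 0 $): the point $ x^* = (1/2, 0) $ is Pareto optimal, the constraint is inactive there (so $ \lambda_1^* = 0 $), and $ \mu^* = (1/2, 1/2) $ satisfies the condition, since $ \nabla f_1(x^*) = (1, 0) $ and $ \nabla f_2(x^*) = (-1, 0) $ cancel. A numerical check:

    import numpy as np

    grad_f1 = lambda x: np.array([2.0*x[0], 2.0*x[1]])
    grad_f2 = lambda x: np.array([2.0*(x[0] - 1.0), 2.0*x[1]])
    grad_g1 = lambda x: np.array([1.0, 1.0])

    x_star = np.array([0.5, 0.0])   # a Pareto optimal point
    mu = np.array([0.5, 0.5])       # objective multipliers, mu != 0
    lam = 0.0                       # g1 is inactive at x_star

    stationarity = mu[0]*grad_f1(x_star) + mu[1]*grad_f2(x_star) \
        + lam*grad_g1(x_star)
    slackness = lam * (x_star[0] + x_star[1] - 1.0)
    print(stationarity, slackness)  # -> [0. 0.] 0.0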

To study stability and sensitivity, one considers the following parametric multi-objective optimization problem:

$$ (\textrm{P}_u) \qquad \left\{ \begin{array}{l} \textrm{minimize} \ \ f(x, u), \\ \textrm{subject to} \ \ x \in X(u), \end{array} \right. $$

where $ u $ is a parameter vector. Let $ W(u) $ be the set of all Pareto optimal values in the objective space of $ (\textrm{P}_u) $. The set-valued mapping $ W $ is considered to be a generalization of the marginal (optimal-value) function in ordinary scalar optimization. The behaviour of $ W $ has been analyzed both qualitatively and quantitatively.

Several types of duality, such as Lagrange duality, Wolfe duality, and conjugate duality, are investigated in optimization. Each of these duality theories has been extended to multi-objective optimization. For details see [a1], [a2], [a3].

In order to obtain a Pareto optimal solution of (P) one usually solves a scalarized optimization problem. Typical examples of the scalarization methods are as follows.

1) The weighted sum minimization method:

$$ \left\{ \begin{array}{l} \textrm{minimize} \ \ \sum_{i=1}^{p} w_i f_i(x), \\ \textrm{subject to} \ \ x \in X, \end{array} \right. $$

where $ w _ {i} $ denotes the relative importance of each $ f _ {i} $;
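
A minimal sketch of the weighted sum method on the hypothetical problem above, assuming SciPy (whose SLSQP solver encodes inequality constraints as $ c(x) \geq 0 $); the weights are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: x[0]**2 + x[1]**2
    f2 = lambda x: (x[0] - 1.0)**2 + x[1]**2
    w = np.array([0.3, 0.7])        # relative importance of f1 and f2

    res = minimize(lambda x: w[0]*f1(x) + w[1]*f2(x),
                   x0=np.zeros(2), method="SLSQP",
                   constraints=[{"type": "ineq",  # 1 - x1 - x2 >= 0, i.e. g1 <= 0
                                 "fun": lambda x: 1.0 - x[0] - x[1]}])
    print(res.x)                    # approx. [0.7, 0.], one Pareto optimal point

With strictly positive weights, every minimizer of the weighted sum is Pareto optimal, and sweeping $ w $ traces out part of the Pareto set; for non-convex problems, however, some Pareto optimal solutions cannot be obtained by any choice of weights.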

2) The $ \epsilon $-constraint method:

$$ \left\{ \begin{array}{l} \textrm{minimize} \ \ f_p(x), \\ \textrm{subject to} \ \ f_i(x) \leq \epsilon_i, \ i = 1, \dots, p - 1, \ x \in X, \end{array} \right. $$

where $ \epsilon _ {i} $ denotes the admissible worst level of $ f _ {i} $;
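
The corresponding sketch for the $ \epsilon $-constraint method (same hypothetical problem and SciPy assumptions; $ f_2 $ plays the role of $ f_p $, and the level $ \epsilon_1 $ is illustrative):

    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: x[0]**2 + x[1]**2
    f2 = lambda x: (x[0] - 1.0)**2 + x[1]**2
    eps1 = 0.25                     # admissible worst level of f1

    res = minimize(f2, x0=np.zeros(2), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x: eps1 - f1(x)},
                                {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}])
    print(res.x, f1(res.x), f2(res.x))   # approx. [0.5, 0.], 0.25, 0.25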

3) The norm minimization method:

$$ \left\{ \begin{array}{l} \textrm{minimize} \ \ \| f(x) - y^* \|, \\ \textrm{subject to} \ \ x \in X, \end{array} \right. $$

where $ y ^ {*} $ is the ideal (or reference) point in $ \mathbf R ^ {p} $. One often uses the (augmented) Chebyshev norm.
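
A sketch of the augmented Chebyshev variant, $ \max_i w_i (f_i(x) - y_i^*) + \rho \sum_i w_i (f_i(x) - y_i^*) $ with a small $ \rho > 0 $ (the augmentation rules out merely weakly Pareto optimal minimizers). The ideal point, weights and $ \rho $ are illustrative; the constraint $ x \in X $ of the toy problem is inactive at the solution and is omitted here:

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: np.array([x[0]**2 + x[1]**2,
                            (x[0] - 1.0)**2 + x[1]**2])
    y_star = np.zeros(2)            # ideal point (assumed known)
    w = np.array([0.5, 0.5])
    rho = 1e-4                      # small augmentation parameter

    def aug_chebyshev(x):
        d = w * (f(x) - y_star)     # weighted deviations from the ideal point
        return np.max(d) + rho*np.sum(d)

    res = minimize(aug_chebyshev, x0=np.array([0.3, 0.1]), method="Nelder-Mead")
    print(res.x, f(res.x))          # approx. [0.5, 0.], objectives (0.25, 0.25)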

From the viewpoint of decision making, one Pareto optimal solution should be chosen as a final decision according to the preference structure of the decision maker. Roughly speaking, to this end there are three approaches:

1) generate all or sufficiently many Pareto optimal solutions and leave it to the decision maker to choose the preferred solution;

2) find the value (utility) function which represents the preference structure of the decision maker and maximize this scalar function (this approach is usually called multi-attribute utility theory);

3) find a Pareto optimal solution and extract local (partial) information about the preference structure from the decision maker. Based on this information, compute another Pareto optimal solution. This process is repeated until the decision maker is satisfied with the current solution. This approach is called an interactive method. Several interesting methods have been proposed [a4].

References

[a1] Y. Sawaragi, H. Nakayama, T. Tanino, "Theory of multiobjective optimization", Acad. Press (1985)
[a2] J. Jahn, "Mathematical vector optimization in partially ordered linear spaces", Peter Lang (1986)
[a3] D.T. Luc, "Theory of vector optimization", Springer (1989)
[a4] R. Steuer, "Multiple criteria optimization: theory, computation and application", Wiley (1986)
This article was adapted from an original article by T. Tanino (originator), which appeared in the Encyclopedia of Mathematics, ISBN 1402006098.