Lagrangean relaxation

From Encyclopedia of Mathematics
Latest revision as of 22:15, 5 June 2020


Lagrangian relaxation

Consider the mixed integer programming problem

($ P $) $ v ( P ) = \min c ^ {T} x $ subject to

$$ A _ {1} x \geq b _ {1} , $$

$$ A _ {2} x \geq b _ {2} , $$

$$ x \geq 0, $$

$$ x _ {j} \in \mathbf Z, \quad j \in N ^ \prime \subset \{ 1, \dots, n \} , $$

with $ x \in \mathbf R ^ {n} $, $ b _ {1} \in \mathbf R ^ {m _ {1} } $, $ b _ {2} \in \mathbf R ^ {m _ {2} } $, $ c \in \mathbf R ^ {n} $, i.e., $ c ^ {T} $, the transpose of $ c $, is a row vector of length $ n $, and $ A _ {1} $, $ A _ {2} $ are matrices of the appropriate sizes. One assumes that in some way the constraints $ A _ {1} x \geq b _ {1} $ are hard (to deal with) and that the constraints $ A _ {2} x \geq b _ {2} $ are easy. Form the Lagrangean function $ c ^ {T} x + u ^ {T} ( b _ {1} - A _ {1} x ) $ and consider the so-called Lagrangean relaxation

($ L _ {u} $) $ v ( L _ {u} ) = \min ( c ^ {T} x + u ^ {T} ( b _ {1} - A _ {1} x ) ) $ subject to

$$ A _ {2} x \geq b _ {2} , $$

$$ x \geq 0, $$

$$ x _ {j} \in \mathbf Z, \quad j \in N ^ \prime , $$

where $ u \in \mathbf R ^ {m _ {1} } $, $ u \geq 0 $. Note that ($ L _ {0} $) is simply the problem ($ P $) with the "complicated constraints" $ A _ {1} x \geq b _ {1} $ dropped. The relaxed problem gives a lower bound for $ v ( P ) $. Better lower bounds can be obtained by choosing a suitable $ u \neq 0 $. Problem ($ L _ {u} $) is indeed a relaxation of the original programming problem ($ P $): the feasible set of ($ L _ {u} $) contains that of ($ P $), and for all $ u \geq 0 $ and feasible $ x $ one has $ c ^ {T} x + u ^ {T} ( b _ {1} - A _ {1} x ) \leq c ^ {T} x $, because $ u ^ {T} ( b _ {1} - A _ {1} x ) \leq 0 $. Thus, $ v ( L _ {u} ) \leq v ( P ) $. The constraints should be partitioned so that ($ L _ {u} $) is much easier to solve than ($ P $). For an example of this in the case of the knapsack problem see, e.g., [a2] (where ($ L _ {u} $) is solvable in $ O ( n ) $ time whereas the knapsack problem itself is $ {\mathcal N} {\mathcal P} $-hard).
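To make the bound concrete, here is a minimal numerical sketch. The two-variable instance and the 0–5 enumeration box are invented for illustration; a real application would use a specialized solver for the easy subproblem rather than brute force.

```python
from itertools import product

# Hypothetical tiny instance of (P):
#   min 3*x1 + 2*x2
#   s.t. x1 + x2 >= 3   (the "hard" constraint A1 x >= b1)
#        x1      >= 1   (the "easy" constraint A2 x >= b2)
#        x1, x2 integer, >= 0
BOX = range(6)  # 0..5 is large enough to contain the optimum here

def v_P():
    # Optimal value of (P), by brute-force enumeration.
    return min(3*x1 + 2*x2
               for x1, x2 in product(BOX, BOX)
               if x1 + x2 >= 3 and x1 >= 1)

def v_L(u):
    # Optimal value of the Lagrangean relaxation (L_u): the hard
    # constraint is dropped and its violation is priced at u >= 0.
    return min(3*x1 + 2*x2 + u*(3 - (x1 + x2))
               for x1, x2 in product(BOX, BOX)
               if x1 >= 1)

print(v_P())     # 7
print(v_L(0.0))  # 3: dropping the hard constraint outright
print(v_L(2.0))  # 7: a good multiplier closes the gap on this instance
```

Every $ v ( L _ {u} ) $ with $ u \geq 0 $ is a valid lower bound on $ v ( P ) $; choosing the $ u $ that gives the best bound is exactly the partial dual problem discussed next.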

Lagrangean relaxation is often used in combination with an enumeration technique such as branch-and-bound (see Branch-and-bound algorithm) to provide lower bounds. One should therefore choose a $ u \in \mathbf R ^ {m _ {1} } $ that solves the partial dual problem

$$ v ( D ) = \max _ {u \geq 0 } v ( L _ {u} ) = \max _ {u \geq 0 } \min _ {x \in { \mathop{\rm Feas} } ( L _ {u} ) } ( c ^ {T} x + u ^ {T} ( b _ {1} - A _ {1} x ) ) . $$

The difference $ v ( P ) - v ( D ) $ is called the duality gap.
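Since $ v ( L _ {u} ) $ is a concave, piecewise-linear function of $ u $ (a pointwise minimum of affine functions), the partial dual is commonly approached by subgradient ascent. A minimal sketch on a made-up two-variable instance follows; the data, box bounds, and step size are all assumptions for illustration, and brute-force enumeration stands in for an "easy" subproblem solver.

```python
from itertools import product

# Hypothetical instance, same shape as (P) in the article:
#   min 3*x1 + 2*x2  s.t.  x1 + x2 >= 3 (relaxed),  x1 >= 1 (kept),
#   x1, x2 integer in 0..5.
BOX = range(6)

def solve_L(u):
    # Return (v(L_u), minimizer x), by enumerating the easy feasible set.
    return min((3*x1 + 2*x2 + u*(3 - (x1 + x2)), (x1, x2))
               for x1, x2 in product(BOX, BOX) if x1 >= 1)

u, step, best = 0.0, 0.5, float("-inf")
for _ in range(20):
    value, x = solve_L(u)
    best = max(best, value)           # best lower bound on v(P) seen so far
    subgrad = 3 - (x[0] + x[1])       # b1 - A1 x is a subgradient of v(L_.)
    u = max(0.0, u + step * subgrad)  # ascent step, projected onto u >= 0

print(best)  # 7: here v(D) = v(P), the duality gap is zero
```

With a fixed step size the iterates may cycle, which is why the best bound seen is tracked separately; diminishing step sizes are the usual remedy.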

Lagrangean relaxations can also be considered fruitfully when there is no splitting of the constraints into hard and easy parts. Take the general mathematical programming problem

($ P $) $ v ( P ) = \min \{ f ( x ) : g ( x ) \geq b, x \in X \} $, and the corresponding Lagrangean (Lagrangean function)

$$ L ( x,y ) = f ( x ) + y ^ {T} ( b - g ( x ) ) . $$

The problem

$$ \min _ {x \in X } \max _ {y \geq 0 } L ( x,y ) $$

is equivalent to the original one. The dual problem is

($ D $) $ v ( D ) = \max _ {y \geq 0 } M ( y ) $, where $ M ( y ) = \min _ {x \in X } L ( x,y ) $, and one has $ v ( D ) \leq v ( P ) $. For a convex programming problem under regularity conditions, $ v ( D ) = v ( P ) $, i.e., the duality gap $ v ( P ) - v ( D ) $ is zero. In general this is not the case. The dual problem ($ D $) is often called the Lagrangean relaxation (of ($ P $)), especially in integer programming.
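The weak duality inequality $ v ( D ) \leq v ( P ) $ can be checked directly; the following two-line derivation is standard (not part of the original article):

```latex
% For any y >= 0 and any x feasible for (P), i.e. g(x) >= b, x in X:
\[
M ( y ) = \min _ {x' \in X } L ( x', y ) \leq L ( x, y )
        = f ( x ) + y ^ {T} ( b - g ( x ) ) \leq f ( x ) ,
\]
% since y >= 0 and b - g(x) <= 0. Minimizing the right-hand side over
% feasible x and maximizing the left-hand side over y >= 0 gives
% v(D) <= v(P).
```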

Two surveys of Lagrangean relaxation techniques are [a3], [a4].

References

[a1] "Modern mathematical methods of optimization" K.-H. Elster (ed.) , Akademie Verlag (1993)
[a2] S. Walukievicz, "Integer programming" , Kluwer Acad. Publ. (1991)
[a3] M.L. Fisher, "The Lagrangian relaxation method for solving integer programming problems" Management Sci. , 27 (1981) pp. 1–18
[a4] J.F. Shapiro, "A survey of Lagrangian techniques for discrete optimization" Ann. Discrete Math. , 5 (1979) pp. 113–138 (Special issue: Discrete Optimization II; P.L. Hammer, E.L. Johnson, B.H. Korte, eds.)
[a5] G.L. Nemhauser, L.A. Wolsey, "Integer programming" G.L. Nemhauser (ed.) A.H.G. Rinnooy Kan (ed.) M.J. Todd (ed.) , Optimization , North-Holland (1989) pp. 447–528
How to Cite This Entry:
Lagrangean relaxation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Lagrangean_relaxation&oldid=47562
This article was adapted from an original article by M. Hazewinkel (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article