Latest revision as of 22:15, 5 June 2020
Lagrangian relaxation
Consider the mixed integer programming problem
($ P $) $ v ( P ) = \min c ^ {T} x $ subject to
$$ A _ {1} x \geq b _ {1} , $$
$$ A _ {2} x \geq b _ {2} , $$
$$ x \geq 0, $$
$$ x _ {j} \in \mathbf Z, \quad j \in N ^ \prime \subset \{ 1 \dots n \} , $$
with $ x \in \mathbf R ^ {n} $, $ b _ {1} \in \mathbf R ^ {m _ {1} } $, $ b _ {2} \in \mathbf R ^ {m _ {2} } $, $ c \in \mathbf R ^ {n} $, i.e., $ c ^ {T} $, the transpose of $ c $, is a row vector of length $ n $, and $ A _ {1} $, $ A _ {2} $ are matrices of the appropriate sizes. One assumes that the constraints $ A _ {1} x \geq b _ {1} $ are hard (to deal with) and that the constraints $ A _ {2} x \geq b _ {2} $ are easy. Form the Lagrangean function $ c ^ {T} x + u ^ {T} ( b _ {1} - A _ {1} x ) $ and consider the so-called Lagrangean relaxation
($ L _ {u} $) $ v ( L _ {u} ) = \min ( c ^ {T} x + u ^ {T} ( b _ {1} - A _ {1} x ) ) $ subject to
$$ A _ {2} x \geq b _ {2} , $$
$$ x \geq 0, $$
$$ x _ {j} \in \mathbf Z, \quad j \in N ^ \prime , $$
where $ u \in \mathbf R ^ {m _ {1} } $. Note that $ L _ {0} $ is simply the problem obtained from $ P $ by dropping the "complicated constraints" $ A _ {1} x \geq b _ {1} $. The relaxed problem gives a lower bound for $ v ( P ) $. Better lower bounds can be obtained by choosing $ u \neq 0 $. Problem $ L _ {u} $ is indeed a relaxation of the original programming problem $ P $ because the feasible set of $ L _ {u} $ contains that of $ P $ and, for all $ u \geq 0 $ and feasible $ x $, $ c ^ {T} x + u ^ {T} ( b _ {1} - A _ {1} x ) \leq c ^ {T} x $, because $ u ^ {T} ( b _ {1} - A _ {1} x ) \leq 0 $. Thus, $ v ( L _ {u} ) \leq v ( P ) $. The partition of the constraints should be such that $ L _ {u} $ is much easier to solve than $ P $. For an example of this in the case of the knapsack problem see, e.g., [a2] (where $ L _ {u} $ is solvable in $ O ( n ) $ time whereas the knapsack problem itself is $ {\mathcal N} {\mathcal P} $-hard).
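As an illustrative sketch of the knapsack case just mentioned (the instance data and function name below are invented for illustration, not taken from [a2]), relaxing the single capacity constraint leaves a problem that separates over the items and is solvable in $ O ( n ) $ time:

```python
# Sketch: Lagrangean relaxation of the 0-1 knapsack problem
#     max p^T x  s.t.  w^T x <= c,  x in {0,1}^n.
# Relaxing the capacity constraint with a multiplier u >= 0 gives
#     v(L_u) = u*c + sum_j max(0, p_j - u*w_j),
# computable in O(n).  Since knapsack is a maximization problem, the
# relaxation yields an UPPER bound, mirroring the lower bound of the
# minimization form in the text.

def knapsack_relaxation(p, w, c, u):
    """Value of the Lagrangean relaxation L_u and an optimal x for it."""
    # take item j exactly when its reduced profit p_j - u*w_j is positive
    x = [1 if pj - u * wj > 0 else 0 for pj, wj in zip(p, w)]
    value = u * c + sum((pj - u * wj) * xj for pj, wj, xj in zip(p, w, x))
    return value, x

# tiny invented instance: profits, weights, capacity
p, w, c = [10, 7, 5], [6, 5, 4], 10
bound, x = knapsack_relaxation(p, w, c, u=1.5)
# the best feasible integer solution of this instance is worth 15
# (items 1 and 3); the relaxation bound lies above it, as it must
```

Solving $ L _ {u} $ here costs one pass over the items, whereas the knapsack problem itself is $ {\mathcal N} {\mathcal P} $-hard.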
Lagrangean relaxation is often used in combination with an enumeration technique such as branch-and-bound (see Branch-and-bound algorithm) to provide lower bounds. One should therefore choose a $ u \in \mathbf R ^ {m _ {1} } $ that solves the partial dual problem
$$ v ( D ) = \max _ {u \geq 0 } v ( L _ {u} ) = \max _ {u \geq 0 } \min _ {x \in { \mathop{\rm Feas} } ( L _ {u} ) } ( c ^ {T} x + u ^ {T} ( b _ {1} - A _ {1} x ) ) . $$
The difference $ v ( P ) - v ( D ) $ is called the duality gap.
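In practice the partial dual is usually maximized by a projected subgradient method, since $ b _ {1} - A _ {1} x ( u ) $, with $ x ( u ) $ an optimal solution of $ L _ {u} $, is a subgradient of the concave function $ u \mapsto v ( L _ {u} ) $. A minimal Python sketch on an invented toy instance (all data, step sizes and names are assumptions for illustration):

```python
# Sketch: projected subgradient ascent for v(D) = max_{u >= 0} v(L_u).
# Toy instance (invented): min c^T x over x in {0,1}^3 subject to the
# single hard constraint x1 + x2 + x3 >= 2, which we relax.

import numpy as np

c  = np.array([3.0, 2.0, 4.0])
A1 = np.array([[1.0, 1.0, 1.0]])   # hard constraint A1 x >= b1
b1 = np.array([2.0])

def solve_Lu(u):
    """min (c - u^T A1) x + u^T b1 over x in {0,1}^3 (the 'easy' part)."""
    red = c - A1.T @ u                 # reduced costs
    x = (red < 0).astype(float)        # set x_j = 1 exactly when it helps
    return float(red @ x + u @ b1), x

u = np.zeros(1)
best = -np.inf
for k in range(1, 200):
    val, x = solve_Lu(u)
    best = max(best, val)              # every v(L_u) is a valid lower bound
    g = b1 - A1 @ x                    # subgradient of v(L_u) at u
    u = np.maximum(0.0, u + g / k)     # diminishing step, projected to u >= 0
# for this instance best converges to v(P) = 5 (no duality gap here)
```

For this toy instance the hard constraint's LP relaxation is tight, so the method closes the gap; in general it only converges to $ v ( D ) \leq v ( P ) $.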
Also, if there is no splitting up of the constraints into hard and easy parts, one can fruitfully consider Lagrangean relaxations. Take the general mathematical programming problem
($ P $) $ v ( P ) = \min \{ {f ( x ) } : {g ( x ) \geq b, x \in X } \} $, and the corresponding Lagrangean (Lagrangean function)
$$ L ( x,y ) = f ( x ) + y ^ {T} ( b - g ( x ) ) . $$
The problem
$$ \min _ {x \in X } \max _ {y \geq 0 } L ( x,y ) $$
is equivalent to the original one. The dual problem is
($ D $) $ v ( D ) = \max _ {y \geq 0 } M ( y ) $, where $ M ( y ) = \min _ {x \in X } L ( x,y ) $, and one has $ v ( D ) \leq v ( P ) $. For a convex programming problem under regularity conditions, $ v ( D ) = v ( P ) $, i.e., the duality gap $ v ( P ) - v ( D ) $ is zero. In general this is not the case. The dual problem $ D $ is often called the Lagrangean relaxation (of $ P $), especially in integer programming.
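A small worked example (not from the source) showing a positive duality gap: take $ X = \{ 0,1 \} $, $ f ( x ) = x $, $ g ( x ) = x $ and $ b = 1/2 $, so that $ v ( P ) = 1 $. Then

$$ M ( y ) = \min _ {x \in \{ 0,1 \} } ( x + y ( 1/2 - x ) ) = {y / 2 } + \min ( 0, 1 - y ) , $$

which attains its maximum over $ y \geq 0 $ at $ y = 1 $, giving $ v ( D ) = 1/2 $ and hence a duality gap $ v ( P ) - v ( D ) = 1/2 $.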
Two surveys of Lagrangean relaxation techniques are [a3], [a4].
References
[a1] | "Modern mathematical methods of optimization", K.-H. Elster (ed.), Akademie Verlag (1993) |
[a2] | S. Walukiewicz, "Integer programming", Kluwer Acad. Publ. (1991) |
[a3] | M.L. Fisher, "The Lagrangian relaxation method for solving integer programming problems" Management Sci. , 27 (1981) pp. 1–18 |
[a4] | J.F. Shapiro, "A survey of Lagrangian techniques for discrete optimization" Ann. Discrete Math. , 5 (1979) pp. 113–138 (Special issue: Discrete Optimization II; eds: P.L. Hammer, E.L. Johnson and B.H. Korte) |
[a5] | G.L. Nemhauser, L.A. Wolsey, "Integer programming" G.L. Nemhauser (ed.) A.H.G. Rinnooy Kan (ed.) M.J. Todd (ed.) , Optimization , North-Holland (1989) pp. 447–528 |
Lagrangean relaxation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Lagrangean_relaxation&oldid=47562