''Dantzig–Wolfe decomposition algorithm''
 
Consider the following [[Linear programming|linear programming]] problem (LP problem), with a row structure as indicated by the two sets of constraints.

\begin{equation*} ( \text{P} ) \quad v ^ { * } = \left\{ \begin{array} { c l } { \operatorname { min } } & { c ^ { T } x } \\ { \text { s.t. } } & { A _ { 1 } x \leq b _ { 1 }, } \\ { } & { A _ { 2 } x \leq b _ { 2 }, } \\ { } & { x \geq 0. } \end{array} \right. \end{equation*}
  
The Dantzig–Wolfe approach is often used for the case when $( \text{P} )$ is a block-angular (linear) programming problem. Then the second constraint set $A _ { 2 } x \leq b _ { 2 }$ is separable into a number of parts, containing disjoint sets of variables.
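
For instance, suppose (with block notation introduced here only for illustration) that $A _ { 2 } = \operatorname { diag } ( B _ { 1 } , \ldots , B _ { m } )$ and that $x = ( x _ { 1 } , \ldots , x _ { m } )$ and $b _ { 2 } = ( d _ { 1 } , \ldots , d _ { m } )$ are partitioned accordingly. Then

\begin{equation*} A _ { 2 } x \leq b _ { 2 } \Leftrightarrow B _ { i } x _ { i } \leq d _ { i } , \quad i = 1 , \ldots , m , \end{equation*}

so the second constraint set couples no variables from different blocks.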
  
The optimal solution is usually denoted by $x ^ { * }$. One denotes the LP-dual of $( \text{P} )$ by $( \operatorname{PD} )$ and the optimal dual solution by $( u _ { 1 } ^ { * } , u _ { 2 } ^ { * } )$.
  
The row structure can be utilized by applying Lagrangean duality to the first constraint set $A _ { 1 } x \leq b _ { 1 }$:
  
\begin{equation*} ( \text{L} ) \quad v ^ { * } = \left\{ \begin{array} { l l } { \operatorname { max } } & { g ( u _ { 1 } ) } \\ { \text { s.t. } } & { u _ { 1 } \in U _ { 1 }, } \end{array} \right. \end{equation*}
  
where, for any fixed $\overline { u } _ { 1 } \in U _ { 1 }$,
  
\begin{equation*} ( \text{S} ) \quad g ( \overline { u } _ { 1 } ) = \left\{ \begin{array} { c l } { \operatorname { min } } & { c ^ { T } x + \overline { u } _ { 1 } ^ { T } ( A _ { 1 } x - b _ { 1 } ) } \\ { \text { s.t. } } & { A _ { 2 } x \leq b _ { 2 }, } \\ { } & { x \geq 0, } \end{array} \right. \end{equation*}
  
and $U _ { 1 } = \{ u _ { 1 } \geq 0 : g ( u _ { 1 } ) > - \infty \}$.
  
$( \operatorname{S} )$ is called the subproblem and must be solved in order to evaluate the dual function $g ( u _ { 1 } )$ at a certain point, $\overline { u } _ { 1 } \in U _ { 1 }$.
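
In the block-angular case sketched above (same illustrative notation, with $c = ( c _ { 1 } , \ldots , c _ { m } )$ and $A _ { 1 } = ( A _ { 11 } , \ldots , A _ { 1 m } )$ partitioned by blocks), $( \text{S} )$ separates into one small linear program per block:

\begin{equation*} g ( \overline { u } _ { 1 } ) = - \overline { u } _ { 1 } ^ { T } b _ { 1 } + \sum _ { i = 1 } ^ { m } \operatorname { min } \{ ( c _ { i } + A _ { 1 i } ^ { T } \overline { u } _ { 1 } ) ^ { T } x _ { i } : B _ { i } x _ { i } \leq d _ { i } , x _ { i } \geq 0 \} , \end{equation*}

which is what makes evaluating $g$ cheap compared to solving $( \text{P} )$ directly.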
  
One can show that $g ( u _ { 1 } ) \leq v ^ { * }$ for all $u _ { 1 } \geq 0$. The controllability of the subproblem is limited. Inserting $\overline { u } _ { 1 } = u _ { 1 } ^ { * }$ in $( \operatorname{S} )$ yields $u _ { 2 } = u _ { 2 } ^ { * }$, but possibly not $x = x ^ { * }$.
  
$U _ { 1 }$ is the region where $( \operatorname{S} )$ is bounded, i.e. where its dual is feasible, leading to $U _ { 1 } = \{ u _ { 1 } \geq 0 : A _ { 1 } ^ { T } u _ { 1 } + A _ { 2 } ^ { T } u _ { 2 } + c \geq 0 \text { for some } u _ { 2 } \geq 0 \}$.
  
Now, let $X = \{ x : A _ { 2 } x \leq b _ { 2 } , x \geq 0 \}$, which is the feasible set of $( \operatorname{S} )$. Since $X$ is a [[Polyhedron|polyhedron]], it has a finite number of extreme points, $x ^ { ( k ) }$, $k \in P$, and extreme rays, $\tilde{x} ^ { ( k ) }$, $k \in R$. Any point in $X$ can be described as
  
\begin{equation} \tag{a1} x = \sum _ { k \in P } \lambda _ { k } x ^ { ( k ) } + \sum _ { k \in R } \mu _ { k } \tilde{x} ^ { ( k ) }, \end{equation}
  
where $\sum _ { k \in P } \lambda _ { k } = 1$, $\lambda _ { k } \geq 0$ for all $k \in P$ and $\mu _ { k } \geq 0$ for all $k \in R$. If $\overline { u } _ { 1 } \in U _ { 1 }$, then the optimal solution of $( \operatorname{S} )$ is bounded, and since $( \operatorname{S} )$ is a linear programming problem, the optimum is clearly attained in one of the extreme points $x ^ { ( k ) }$ of $X$. So,
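
For example, if $X = \{ ( x _ { 1 } , x _ { 2 } ) \geq 0 : x _ { 1 } - x _ { 2 } \leq 1 \}$, then the extreme points are $x ^ { ( 1 ) } = ( 0,0 )$ and $x ^ { ( 2 ) } = ( 1,0 )$, the extreme rays are $\tilde { x } ^ { ( 1 ) } = ( 0,1 )$ and $\tilde { x } ^ { ( 2 ) } = ( 1,1 )$, and (a1) represents every point of $X$ as a convex combination of the two extreme points plus a non-negative combination of the two rays.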
  
\begin{equation*} g ( u _ { 1 } ) = \operatorname { min } _ { x \in X } c ^ { T } x + u _ { 1 } ^ { T } ( A _ { 1 } x - b _ { 1 } ) = \operatorname { min } _ { k \in P } c ^ { T } x ^ { ( k ) } + u _ { 1 } ^ { T } ( A _ { 1 } x ^ { ( k ) } - b _ { 1 } ). \end{equation*}
  
Also, it is sufficient to include the extreme directions, yielding $U _ { 1 } = \{ u _ { 1 } \geq 0 : c ^ { T } \tilde { x } ^ { ( k ) } + u _ { 1 } ^ { T } A _ { 1 } \tilde{x} ^ { ( k ) } \geq 0 \text { for all } k \in R \}$.
  
These descriptions of $g ( u _ { 1 } )$ and $U _ { 1 }$ can be used to form a problem that is (by construction) equivalent to $( \text{L} )$, in the sense that $q = v ^ { * }$ and $u _ { 1 } = u _ { 1 } ^ { * }$ is an optimal solution:
  
\begin{equation*} ( \text{LD} ) \quad v ^ { * } = \left\{ \begin{array} { c l } { \operatorname { max } } & { q } \\ { \text { s.t. } } & { q \leq c ^ { T } x ^ { ( k ) } + u _ { 1 } ^ { T } ( A _ { 1 } x ^ { ( k ) } - b _ { 1 } ) , \forall k \in P , } \\ { } & { 0 \leq c ^ { T } \tilde{x} ^ { ( k ) } + u _ { 1 } ^ { T } A _ { 1 } \tilde{x} ^ { ( k ) } , \forall k \in R , } \\ { } & { u _ { 1 } \geq 0 . } \end{array} \right. \end{equation*}
  
The dual variables of the first and second constraints in $( \operatorname {LD} )$ are actually the weights $\lambda _ { k }$ and $\mu _ { k }$ used above, so the LP-dual of $( \operatorname {LD} )$ is as given below:
  
\begin{equation*} ( \operatorname{LP} ) \quad v ^ { * } = \left\{ \begin{array} { c l } { \operatorname { min } } & { \sum _ { k \in P } \lambda _ { k } ( c ^ { T } x ^ { ( k ) } ) + \sum _ { k \in R } \mu _ { k } ( c ^ { T } \tilde{x} ^ { ( k ) } ) } \\ { \text { s.t. } } & { \sum _ { k \in P } \lambda _ { k } ( A _ { 1 } x ^ { ( k ) } ) + \sum _ { k \in R } \mu _ { k } ( A _ { 1 } \tilde{x} ^ { ( k ) } ) \leq b _ { 1 } , } \\ { } & { \sum _ { k \in P } \lambda _ { k } = 1 , } \\ { } & { \lambda _ { k } \geq 0 , \forall k \in P , \quad \mu _ { k } \geq 0 , \forall k \in R . } \end{array} \right. \end{equation*}
  
$( \operatorname{LP} )$ can be directly obtained from $( \text{P} )$ by doing the substitution (a1) above.
  
If $( \overline { \lambda } , \overline { \mu } )$ is optimal in $( \operatorname{LP} )$, then $\overline{x} = \sum _ { k \in P } \overline { \lambda } _ { k } x ^ { ( k ) } + \sum _ { k \in R } \overline { \mu } _ { k }  \tilde{x} ^ { ( k ) }$ is optimal in $( \text{P} )$.
  
The number of extreme points, $| P |$, and the number of extreme directions, $| R |$, are generally very large, so in most cases $( \operatorname{LP} )$ is too large to be solved directly with a linear programming code. Therefore one solves $( \operatorname{LP} )$ with column generation. An approximation of $( \operatorname{LP} )$ is obtained by including only a subset of the variables. Let $P ^ { \prime } \subseteq P$ and $R ^ { \prime } \subseteq R$ be indices of extreme points and extreme directions that are included.
  
Replacing $P$ by $P ^ { \prime }$ and $R$ by $R ^ { \prime }$ in $( \operatorname {LD} )$ yields the restricted master problem or the Dantzig–Wolfe master problem, $( \operatorname { M } )$, with objective function value $v _ { \operatorname{M} }$. Doing the same in $( \operatorname{LP} )$ yields the problem $( \operatorname{MP} )$. The constraints of $( \operatorname { M } )$ are called dual value cuts and dual feasibility cuts. $( \operatorname { M } )$ can have the same optimal $u_1$-solution as $( \operatorname {LD} )$ (and thus as $( \operatorname{PD} )$) even if a large number of unnecessary dual cuts are missing. If the descriptions of $g ( u _ { 1 } )$ or $U _ { 1 }$ are insufficient at $u_{1}^{*}$, the solution obtained will violate at least one of the missing cuts. Since $( \operatorname { M } )$ is obtained from $( \operatorname {LD} )$ by simply dropping a number of constraints, it is easy to see that $v _ { \operatorname{M} } \geq v ^ { * }$ and that $v _ { \operatorname{M} }$ will converge towards $v ^ { * }$ if the sets of cuts grow towards $P$ and $R$.
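
Written out, the restricted master problem is $( \operatorname {LD} )$ with $P$ and $R$ replaced by the subsets $P ^ { \prime }$ and $R ^ { \prime }$:

\begin{equation*} ( \operatorname{M} ) \quad v _ { \operatorname{M} } = \left\{ \begin{array} { c l } { \operatorname { max } } & { q } \\ { \text { s.t. } } & { q \leq c ^ { T } x ^ { ( k ) } + u _ { 1 } ^ { T } ( A _ { 1 } x ^ { ( k ) } - b _ { 1 } ) , \forall k \in P ^ { \prime } , } \\ { } & { 0 \leq c ^ { T } \tilde { x } ^ { ( k ) } + u _ { 1 } ^ { T } A _ { 1 } \tilde { x } ^ { ( k ) } , \forall k \in R ^ { \prime } , } \\ { } & { u _ { 1 } \geq 0 . } \end{array} \right. \end{equation*}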
  
Since $( \operatorname{LP} )$ is a linear programming problem, one can calculate the reduced cost for each variable, $\hat{c} _ { k } ^ { 1 }$, $k \in P$, and $\hat { c } _ { k } ^ { 2 }$, $k \in R$, for a certain basic solution of $( \operatorname{LP} )$, i.e. for a certain dual solution, denoted by $\overline { u }_1$ and $\overline { q }$, as
  
\begin{equation*} \hat { c } _ { k } ^ { 1 } = c ^ { T } x ^ { ( k ) } + ( A _ { 1 } x ^ { ( k ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 } - \overline { q }, \end{equation*}
  
\begin{equation*} \hat { c } _ { k } ^ { 2 } = c ^ { T } \tilde { x } ^ { ( k ) } + ( A _ { 1 } \tilde { x } ^ { ( k ) } ) ^ { T } \overline { u } _ { 1 }. \end{equation*}
  
$( \operatorname{LP} )$ is a minimization problem, so the optimum is reached if $\hat { c } _ { k } ^ { 1 } \geq 0$ for all $k \in P$ and $\hat{c}_{k} ^ { 2 } \geq 0$ for all $k \in R$. A variable with a negative reduced cost can improve the solution, if it is entered into the basis. The variable with the most negative reduced cost can be found by
  
\begin{equation*} \operatorname { min } _ { k \in P } \hat { c } _ { k } ^ { 1 } = \operatorname { min } _ { k \in P } c ^ { T } x ^ { ( k ) } + ( A _ { 1 } x ^ { ( k ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 } - \overline { q } \end{equation*}

and

\begin{equation*} \operatorname { min } _ { k \in R } \hat { c } _ { k } ^ { 2 } = \operatorname { min } _ { k \in R } c ^ { T } \tilde { x } ^ { ( k ) } + ( A _ { 1 } \tilde { x } ^ { ( k ) } ) ^ { T } \overline { u } _ { 1 }. \end{equation*}
  
Both of these, and the least of them, can be found by solving the dual subproblem $( \operatorname{S} )$.
  
So, if $( \operatorname{S} )$ has a bounded solution, $x ^ { ( l ) }$, this yields a $\lambda_{l}$-variable that should enter the basis if $\hat { c } _ { l } ^ { 1 } < 0$. If $( \operatorname{S} )$ has an unbounded solution, $\tilde{x} ^ { ( l ) }$, this yields a $\mu_{l}$-variable that should enter the basis if $\hat { c } _ { l } ^ { 2 } < 0$.
  
The $\lambda_{l}$-variable should enter the basis if $\hat { c } _ { l } ^ { 1 } = c ^ { T } x ^ { ( l ) } + ( A _ { 1 } x ^ { ( l ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 } - \overline { q } < 0$. $\overline { q }$ is the objective function value of $( \operatorname{LP} )$ at the present basis, which might not be optimal, so $\overline { q } \geq v ^ { * }$. Secondly, $g ( \overline { u } _ { 1 } ) = c ^ { T } x ^ { ( l ) } + ( A _ { 1 } x ^ { ( l ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 }$, which is the optimal objective function value of $( \operatorname{S} )$, and $g ( \overline { u } _ { 1 } ) \leq v ^ { * }$. Thus,
  
\begin{equation*} \hat { c } _ { l } ^ { 1 } = c ^ { T } x ^ { ( l ) } + ( A _ { 1 } x ^ { ( l ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 } - \overline { q } = g ( \overline { u } _ { 1 } ) - \overline { q } = g ( \overline { u } _ { 1 } ) - v _ { \operatorname{M} }, \end{equation*}
  
and since $g ( \overline { u } _ { 1 } ) \leq v ^ { * } \leq \overline { q }$, $\hat{c}_{l}^{1} \leq 0$. Equality is only possible if $g ( \overline { u } _ { 1 } ) = v ^ { * } = \overline { q } = v _ { \operatorname{M} }$, i.e. if the present basis is optimal. In other words, the existence of a variable $\lambda_{l}$ with $\hat { c } _ { l } ^ { 1 } < 0$ is equivalent to the lower bound obtained from the subproblem being strictly less than the upper bound obtained from the master problem.
  
Finding variables with negative reduced costs to introduce into the basis in $( \operatorname{LP} )$ (i.e. to introduce in $( \operatorname{MP} )$) can thus be done by solving $( \operatorname{S} )$, since $( \operatorname{S} )$ yields the column with the most negative reduced cost (or the most violated cut of $( \operatorname {LD} )$). This forms the basis of the Dantzig–Wolfe algorithm.
  
In the [[Simplex method|simplex method]], the entering variable does not need to be the one with the most negative reduced cost. Any negative reduced cost suffices. This means that instead of solving $( \operatorname{S} )$ to optimality, it can be sufficient to either find a direction $\tilde{x}$ of $X$ such that $c ^ { T } \tilde{x} + \overline { u } _ { 1 } ^ { T } A _ { 1 } \tilde{x} < 0$, or a point $\overline{x}$ in $X$ such that $c ^ { T } \overline{x} + \overline { u } _ { 1 } ^ { T } ( A _ { 1 } \overline{x} - b _ { 1 } ) < \overline { q }$.
  
In the Dantzig–Wolfe decomposition algorithm, there might occur unbounded dual solutions to $( \operatorname { M} )$. They can generate information about $g ( u _ { 1 } )$ and $U _ { 1 }$ (i.e. generate cuts for $( \operatorname { M} )$ or columns for $( \operatorname{MP} )$), or they can be optimal in $( \operatorname{PD} )$ (when $( \text{P} )$ is infeasible). The modified subproblem that accepts an unbounded solution as input (and yields a cut that eliminates it, unless it is optimal in $( \operatorname{PD} )$) is as follows. Let the unbounded solution be the non-negative (extreme) direction $\tilde{u}_1$,
  
\begin{equation*} ( \text{US} ) \quad \tilde{v} ( \tilde { u } _ { 1 } ) = \left\{ \begin{array} { c l } { \operatorname { min } } & { \tilde { u } _ { 1 } ^ { T } ( A _ { 1 } x - b _ { 1 } ) } \\ { \text { s.t. } } & { A _ { 2 } x \leq b _ { 2 }, } \\ { } & { x \geq 0. } \end{array} \right. \end{equation*}
  
Assume that $\tilde{u} _ { 1 } \geq 0$, $\tilde{u} _ { 1 } \neq 0$, $X \neq \emptyset$ and that $( \operatorname{PD} )$ is feasible. Then $\tilde{v} ( \tilde { u } _ { 1 } ) > 0$ if and only if $( \text{P} )$ is infeasible and $\tilde{u}_1$ is a part of an unbounded solution of $( \operatorname{PD} )$. The algorithm should then be terminated.
  
Assume that $\tilde{u} _ { 1 } \geq 0$. If $\tilde{v} ( \tilde{u} _ { 1 } ) \leq 0$ then $( \text{US} )$ will yield a cut for $( \operatorname { M} )$ that ensures that $\tilde{u}_1$ is not an unbounded solution of $( \operatorname { M} )$.
  
In the Dantzig–Wolfe algorithm, [[#References|[a1]]], one iterates between the dual subproblem, $( \operatorname{S} )$ or $( \text{US} )$, and the dual master problem, $( \operatorname { M} )$ (or $( \operatorname{MP} )$). The master problem indicates if there are necessary extreme points or directions missing, and the subproblem generates such points and directions, i.e. the cuts or columns that are needed. Since there is only a finite number of possible cuts, this algorithm will converge to the exact optimum in a finite number of steps.
  
==Algorithmic description of the Dantzig–Wolfe decomposition method for solving $( \text{P} )$.==
  
  
1) Generate a dual starting solution, $\overline { u }_1$, and, optionally, a starting set of primal extreme solutions, $x ^ { ( k ) }$ and $\tilde{x} ^ { ( k ) }$. Initialize $P ^ { \prime }$, $R ^ { \prime }$, $\underline { v } = - \infty$ and $\overline { v } = \infty$. Set $k = 1$.
  
2) Solve the dual subproblem $( \operatorname{S} )$ with $\overline { u }_1$.
  
If $( \operatorname{S} )$ is infeasible: Stop. $( \text{P} )$ is infeasible.
  
Else, a primal extreme solution is obtained: either a point $x ^ { ( k ) }$ and the value $g(\overline{u}_1)$ or a direction $\tilde{x} ^ { ( k ) }$. If $g ( \overline { u } _ { 1 } ) > \underline { v }$, set $\underline { v } = g ( \overline { u } _ { 1 } )$. Go to 4).
  
3) Solve the dual subproblem $( \text{US} )$ with $\tilde{u}_1$. A primal extreme solution is obtained: either a point $x ^ { ( k ) }$ and the value $\tilde{v} ( \tilde { u } _ { 1 } )$ or a direction $\tilde{x} ^ { ( k ) }$.
  
If $\tilde{v} ( \tilde { u } _ { 1 } ) > 0$: Stop. $( \text{P} )$ is infeasible. Else, go to 5).
  
4) Optimality test: If $\underline { v } = \overline { v }$, then go to 7).
  
5) Solve the dual master problem $( \operatorname { M} )$ (and $( \operatorname{MP} )$) with all the known primal solutions $x ^ { ( k ) }$, $k \in P ^ { \prime }$ and $\tilde{x} ^ { ( k ) }$, $k \in R ^ { \prime }$.
  
If $( \operatorname { M} )$ is infeasible: Stop. $( \text{P} )$ is unbounded.
  
Else, this yields $( \overline { \lambda } , \overline { \mu } )$ and a new dual solution: either a point $\overline { u }_1$ and an upper bound $\overline { v } = v _ { \operatorname{M} }$ or a direction $\tilde{u}_1$.
  
6) Optimality test: If $\underline { v } = \overline { v }$, then go to 7). Else, set $k = k + 1$.
  
If the dual solution is a point, $\overline { u }_1$, go to 2), and if it is a direction, $\tilde{u}_1$, go to 3).
  
7) Stop. The optimal solution of $( \text{P} )$ is $\overline{x} = \sum _ { k \in P ^ { \prime } } \overline { \lambda } _ { k } x ^ { ( k ) } + \sum _ { k \in R ^ { \prime } } \overline { \mu } _ { k } \tilde{x} ^ { ( k ) }$.
  
 
===Comments.===
In step 5) one always sets $\overline { v } = v _ { \operatorname{M} }$, because the objective function value of $( \operatorname { M } )$ is decreasing (since in each iteration a constraint is added), but in step 2) one only sets $\underline { v } = g ( \overline { u } _ { 1 } )$ if this yields an increase in $\underline{v}$, since $g(\overline{u}_1)$ does not necessarily increase monotonically. If $( \operatorname { M } )$ is infeasible, the direction of the unbounded solution of $( \text{P} )$ can be obtained from the unbounded solution $\tilde{\pi} = ( \tilde{\lambda} , \tilde{\mu} )$ of $( \operatorname{MP} )$ as $\tilde { x } = \sum _ { k \in R ^ { \prime } } \tilde { \mu } _ { k } \tilde { x } ^ { ( k ) }$.
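
The flavour of the computations can be seen in the following minimal Python sketch. It assumes that $X$ is bounded (so no extreme rays and no subproblem $( \text{US} )$ arise) and, as a simplification not present in the algorithm above, keeps the restricted master feasible with big-$M$ artificial variables; scipy.optimize.linprog solves both the subproblem and the master.

<pre>
import numpy as np
from scipy.optimize import linprog


def dantzig_wolfe(c, A1, b1, A2, b2, max_iter=100, tol=1e-9, M=1e6):
    """Sketch of Dantzig-Wolfe decomposition (NumPy array inputs) for
    min c^T x  s.t.  A1 x <= b1,  A2 x <= b2,  x >= 0,
    assuming X = {x : A2 x <= b2, x >= 0} is bounded."""
    m1 = len(b1)
    u1 = np.zeros(m1)                 # dual starting solution, u1 >= 0
    points = []                       # known extreme points x^(k), k in P'
    lower = -np.inf                   # best lower bound g(u1) so far
    for _ in range(max_iter):
        # Subproblem (S): min (c + A1^T u1)^T x over X (x >= 0 is linprog's default).
        sub = linprog(c + A1.T @ u1, A_ub=A2, b_ub=b2, method="highs")
        lower = max(lower, sub.fun - u1 @ b1)   # g(u1) <= v*
        points.append(sub.x)
        # Restricted master (MP): optimize over convex combinations of the
        # known extreme points; artificials s >= 0 (big-M cost) keep it feasible.
        Xk = np.array(points)         # one extreme point per row
        K = Xk.shape[0]
        obj = np.concatenate([Xk @ c, np.full(m1, M)])
        A_ub = np.hstack([A1 @ Xk.T, -np.eye(m1)])
        A_eq = np.concatenate([np.ones(K), np.zeros(m1)]).reshape(1, -1)
        res = linprog(obj, A_ub=A_ub, b_ub=b1, A_eq=A_eq, b_eq=[1.0],
                      method="highs")
        upper = res.fun               # v_M >= v* (exact once artificials are 0)
        u1 = -res.ineqlin.marginals   # duals of the <= rows; scipy signs them <= 0
        if upper - lower <= tol:      # g(u1) = v_M: optimum reached
            break
    lam = res.x[:K]                   # weights lambda_k
    return Xk.T @ lam, upper          # x_bar = sum_k lambda_k x^(k), and v*
</pre>

The duals of the coupling rows give the next $\overline { u } _ { 1 }$, and the loop stops when the lower bound $g ( \overline { u } _ { 1 } )$ from the subproblem meets the upper bound $v _ { \operatorname{M} }$ from the master.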
  
 
===Convergence.===
 
The algorithm has finite convergence, as described below.
  
Assume that $( \text{P} )$ is feasible and that $\overline { u }_1$ is a solution of $( \operatorname { M } )$. Then $g ( \overline { u } _ { 1 } ) = v _ { \operatorname{M} }$ if and only if $v _ { \operatorname{M} } = v ^ { * }$ and $\overline { u }_1$ is a part of the optimal solution of $( \operatorname{PD} )$. If this is true and $( \overline { \lambda } , \overline { \mu } )$ is the solution of $( \operatorname{MP} )$, then $\overline{x} = \sum _ { k \in P ^ { \prime } } \overline { \lambda } _ { k } x ^ { ( k ) } + \sum _ { k \in R ^ { \prime } } \overline { \mu } _ { k } \tilde{x} ^ { ( k ) }$ is an optimal solution of $( \text{P} )$.
  
Assume that $\overline { u } _ { 1 } \geq 0$. If $g ( \overline { u } _ { 1 } ) < v _ { \operatorname{M} } = \overline { q }$, then $( \operatorname{S} )$ will yield a cut that ensures that $( \overline { u }_{ 1 } , \overline { q } )$ is not a solution of $( \operatorname { M } )$.
  
It is important to know that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020246.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020247.png" /> do not yield any solutions more than once (unless they are optimal). This depends on the inputs to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020248.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020249.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020250.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020251.png" />, which are obtained from the dual master problem, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020252.png" />.
+
It is important to know that $( \operatorname{S} )$ and $( \text{US} )$ do not yield any solutions more than once (unless they are optimal). This depends on the inputs to $( \operatorname{S} )$ and $( \text{US} )$, $\overline { u }_1$ and $\tilde{u}_1$, which are obtained from the dual master problem, $( \operatorname { M} )$.
  
Assume that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020253.png" /> is a solution of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020254.png" />. If <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020255.png" />, then solving <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020256.png" /> at <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020257.png" /> will yield a so far unknown cut that ensures that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020258.png" /> will not have the same solution in the next iteration, i.e. <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020259.png" /> will yield either a different <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020260.png" />-solution or the same, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020261.png" />, with a lower value of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020262.png" /> (<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020263.png" />).
+
Assume that $\overline { u } _ { 1 } \geq 0$ is a solution of $( \operatorname { M} )$. If $v _ { M } &gt; v ^ { * }$, then solving $( \operatorname{S} )$ at $\overline { u }_1$ will yield a so far unknown cut that ensures that $( \operatorname { M} )$ will not have the same solution in the next iteration, i.e. $( \operatorname { M} )$ will yield either a different $u_1$-solution or the same, $\overline { u }_1$, with a lower value of $\overline { q }$ ($= v _ { M }$).
  
Assume that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020264.png" /> is an unbounded solution of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020265.png" />. Then solving <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020266.png" /> at <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020267.png" /> will yield a so far unknown cut that ensures that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020268.png" /> will not have the same solution in the next iteration, or will prove that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020269.png" /> is infeasible.
+
Assume that $\tilde{u} _ { 1 } \geq 0$ is an unbounded solution of $( \operatorname { M} )$. Then solving $( \text{US} )$ at $\tilde{u}_1$ will yield a so far unknown cut that ensures that $( \operatorname { M} )$ will not have the same solution in the next iteration, or will prove that $( \text{P} )$ is infeasible.
  
All this shows that the Dantzig–Wolfe decomposition algorithm for solving <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d120/d120020/d120020270.png" /> will terminate with the correct solution within a finite number of iterations.
+
All this shows that the Dantzig–Wolfe decomposition algorithm for solving $( \text{P} )$ will terminate with the correct solution within a finite number of iterations.
  
 
====References====
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  G.B. Dantzig,  P. Wolfe,  "Decomposition principle for linear programs."  ''Oper. Res.'' , '''8'''  (1960)  pp. 101–111</TD></TR></table>
+
<table><tr><td valign="top">[a1]</td> <td valign="top">  G.B. Dantzig,  P. Wolfe,  "Decomposition principle for linear programs."  ''Oper. Res.'' , '''8'''  (1960)  pp. 101–111</td></tr></table>

Revision as of 16:46, 1 July 2020

Dantzig–Wolfe decomposition algorithm

Consider the following linear programming problem (LP problem), with a row structure as indicated by the two sets of constraints.

\begin{equation*} ( \text{P} ) v ^ { * } = \left\{ \begin{array} { c c } { \operatorname { min } } & { c ^ { T } x } \\ { \text { s.t. } } & { A _ { 1 } x \leq b _ { 1 }, } \\ { } & { A _ { 2 } x \leq b _ { 2 }, } \\ {} & { x \geq 0. } \end{array} \right. \end{equation*}

The Dantzig–Wolfe approach is often used for the case when $( \text{P} )$ is a block-angular (linear) programming problem. Then the second constraint set $A _ { 2 } x \leq b _ { 2 }$ is separable into a number of parts, containing disjoint sets of variables.

The optimal solution is usually denoted by $x ^ { * }$. One denotes the LP-dual of $( \text{P} )$ by $( \operatorname{PD} )$ and the optimal dual solution by $( u _ { 1 } ^ { * } , u _ { 2 } ^ { * } )$.

The row structure can be utilized by applying Lagrangean duality to the first constraint set $A _ { 1 } x \leq b _ { 1 }$:

\begin{equation*} ( \operatorname{L} ) v ^ { * } = \left\{ \begin{array} { l l } { \operatorname { max } } & { g ( u _ { 1 } ) } \\ { \text { s.t. } } & { u _ { 1 } \in U _ { 1 }, } \end{array} \right. \end{equation*}

where, for any fixed $\overline { u } _ { 1 } \in U _ { 1 }$,

\begin{equation*} ( \text{S} ) g ( \overline { u } _ { 1 } ) = \left\{ \begin{array} { c l } { \operatorname { min } } & { c ^ { T } x + \overline { u } _{1} ^ { T } ( A _ { 1 } x - b _ { 1 } ) } \\ { \text { s.t. } } & { A _ { 2 } x \leq b _ { 2 }, } \\ { } & { x \geq 0, } \end{array} \right. \end{equation*}

and $U _ { 1 } = \{ u _ { 1 } \geq 0 : g ( u _ { 1 } ) > - \infty \}$.

$( \operatorname{S} )$ is called the subproblem and must be solved in order to evaluate the dual function $g ( u _ { 1 } )$ at a certain point, $\overline { u } _ { 1 } \in U _ { 1 }$.
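To make this evaluation concrete, the following is a minimal sketch (not part of the original article) of evaluating $g ( \overline { u } _ { 1 } )$ with SciPy's `linprog`; the dense arrays `c`, `A1`, `b1`, `A2`, `b2` and the helper name `solve_subproblem` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_subproblem(c, A1, b1, A2, b2, u1_bar):
    """Evaluate g(u1_bar) by solving (S):
    min (c + A1^T u1_bar)^T x - u1_bar^T b1  s.t.  A2 x <= b2, x >= 0."""
    # linprog's default bounds are x >= 0; the dual-simplex backend
    # returns a vertex of X, i.e. an extreme point.
    res = linprog(c + A1.T @ u1_bar, A_ub=A2, b_ub=b2, method="highs-ds")
    if res.status == 3:            # (S) unbounded: u1_bar lies outside U1;
        return -np.inf, None       # an extreme ray would be needed here.
    g_val = res.fun - u1_bar @ b1  # subtract the constant term u1_bar^T b1
    return g_val, res.x            # dual value and the extreme point x^(k)
```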

One can show that $g ( u _ { 1 } ) \leq v ^ { * }$ for all $u _ { 1 } \geq 0$. The subproblem can only be controlled to a limited extent through $u_1$: inserting $\overline { u } _ { 1 } = u _ { 1 } ^ { * }$ in $( \operatorname{S} )$ yields $u _ { 2 } = u _ { 2 } ^ { * }$, but possibly not $x = x ^ { * }$.
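The bound is just weak Lagrangean duality: the optimal solution $x ^ { * }$ of $( \text{P} )$ lies in $X$ and satisfies $A _ { 1 } x ^ { * } \leq b _ { 1 }$, so for any $u _ { 1 } \geq 0$,

\begin{equation*} g ( u _ { 1 } ) \leq c ^ { T } x ^ { * } + u _ { 1 } ^ { T } ( A _ { 1 } x ^ { * } - b _ { 1 } ) \leq c ^ { T } x ^ { * } = v ^ { * }. \end{equation*}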

$U _ { 1 }$ is the region where $( \operatorname{S} )$ is bounded, i.e. where its dual is feasible.

Now, let $X = \{ x : A _ { 2 } x \leq b _ { 2 } , x \geq 0 \}$, which is the feasible set of $( \operatorname{S} )$. Since $X$ is a polyhedron, it has a finite number of extreme points, $x ^ { ( k ) }$, $k \in P$, and extreme rays, $\tilde{x} ^ { ( k ) }$, $k \in R$. Any point in $X$ can be described as

\begin{equation} \tag{a1} x = \sum _ { k \in P } \lambda _ { k } x ^ { ( k ) } + \sum _ { k \in R } \mu _ { k } \tilde{x} ^ { ( k ) }, \end{equation}

where $\sum _ { k \in P } \lambda _ { k } = 1$, $\lambda _ { k } \geq 0$ for all $k \in P$ and $\mu _ { k } \geq 0$ for all $k \in R$. If $\overline { u } _ { 1 } \in U _ { 1 }$, then the optimal solution of $( \operatorname{S} )$ is bounded, and since $( \operatorname{S} )$ is a linear programming problem, the optimum is clearly attained in one of the extreme points $x ^ { ( k ) }$ of $X$. So,

\begin{equation*} g ( u _ { 1 } ) = \operatorname { min } _ { x \in X } \, c ^ { T } x + u _ { 1 } ^ { T } ( A _ { 1 } x - b _ { 1 } ) = \operatorname { min } _ { k \in P } \, c ^ { T } x ^ { ( k ) } + u _ { 1 } ^ { T } ( A _ { 1 } x ^ { ( k ) } - b _ { 1 } ). \end{equation*}

Also, in the description of $U _ { 1 }$ it is sufficient to include the extreme directions, yielding $U _ { 1 } = \{ u _ { 1 } \geq 0 : c ^ { T } \tilde { x } ^{ ( k ) } + u _ { 1 } ^ { T } A _ { 1 } \tilde{x} ^ { ( k ) } \geq 0 \text { for all } k \in R \}$.

These descriptions of $g ( u _ { 1 } )$ and $U _ { 1 }$ can be used to form a problem that is (by construction) equivalent to $( \text{L} )$, in the sense that $q = v ^ { * }$ and $u _ { 1 } = u _ { 1 } ^ { * }$ is an optimal solution:

\begin{equation*} ( \text{LD} ) v ^ { * } = \left\{ \begin{array} { c l } { \operatorname { max } } & { q } \\ { \text { s.t. } } & { q \leq c ^ { T } x ^ { ( k ) } + u _ { 1 } ^ { T } ( A _ { 1 } x ^ { ( k ) } - b _ { 1 } ) , \forall k \in P , } \\ { } & { 0 \leq c ^ { T } \tilde{x} ^ { ( k ) } + u _ { 1 } ^ { T } A _ { 1 } \tilde{x} ^ { ( k ) } , \forall k \in R , } \\ { } & { u _ { 1 } \geq 0. } \end{array} \right. \end{equation*}

The dual variables of the first and second constraint sets in $( \operatorname{LD} )$ are actually the weights $\lambda _ { k }$ and $\mu _ { k }$ used above, so the LP-dual of $( \operatorname {LD} )$ is as given below:

\begin{equation*} ( \text{LP} ) v ^ { * } = \left\{ \begin{array} { c l } { \operatorname { min } } & { \sum _ { k \in P } \lambda _ { k } c ^ { T } x ^ { ( k ) } + \sum _ { k \in R } \mu _ { k } c ^ { T } \tilde{x} ^ { ( k ) } } \\ { \text { s.t. } } & { \sum _ { k \in P } \lambda _ { k } A _ { 1 } x ^ { ( k ) } + \sum _ { k \in R } \mu _ { k } A _ { 1 } \tilde{x} ^ { ( k ) } \leq b _ { 1 } , } \\ { } & { \sum _ { k \in P } \lambda _ { k } = 1 , } \\ { } & { \lambda _ { k } \geq 0 , \forall k \in P , \quad \mu _ { k } \geq 0 , \forall k \in R . } \end{array} \right. \end{equation*}

$( \operatorname{LP} )$ can be directly obtained from $( \text{P} )$ by doing the substitution (a1) above.

If $( \overline { \lambda } , \overline { \mu } )$ is optimal in $( \operatorname{LP} )$, then $\overline{x} = \sum _ { k \in P } \overline { \lambda } _ { k } x ^ { ( k ) } + \sum _ { k \in R } \overline { \mu } _ { k } \tilde{x} ^ { ( k ) }$ is optimal in $( \text{P} )$.

The number of extreme points, $| P |$, and the number of extreme directions, $| R |$, are generally very large, so in most cases $( \operatorname{LP} )$ is too large to be solved directly with a linear programming code. Therefore one solves $( \operatorname{LP} )$ with column generation. An approximation of $( \operatorname{LP} )$ is obtained by including only a subset of the variables. Let $P ^ { \prime } \subseteq P$ and $R ^ { \prime } \subseteq R$ be indices of extreme points and extreme directions that are included.

Replacing $P$ by $P ^ { \prime }$ and $R$ by $R ^ { \prime }$ in $( \operatorname {LD} )$ yields the restricted master problem or the Dantzig–Wolfe master problem, $( \operatorname { M} )$, with objective function value $v _ { \text{M} }$. Doing the same in $( \operatorname{LP} )$ yields the problem $( \operatorname{MP} )$. The constraints of $( \operatorname { M} )$ are called dual value cuts and dual feasibility cuts. $( \operatorname { M} )$ can have the same optimal $u_1$-solution as $( \operatorname {LD} )$ (and thus as $( \operatorname{PD} )$) even if a large number of unnecessary dual cuts are missing. If the descriptions of $g ( u _ { 1 } )$ or $U _ { 1 }$ are insufficient at $u_{1}^{*}$, the solution obtained will violate at least one of the missing cuts. Since $( \operatorname { M} )$ is obtained from $( \operatorname {LD} )$ by simply dropping a number of constraints, it is easy to see that $v _ { \text{M} } \geq v ^ { * }$ and that $v _ { \text{M} }$ will converge towards $v ^ { * }$ as the sets of cuts grow towards $P$ and $R$.
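As a sketch of how the restricted master problem might be solved in practice (same illustrative conventions as the subproblem sketch above; it relies on `linprog`'s HiGHS backend reporting the marginals, i.e. the signed duals, of the constraints):

```python
import numpy as np
from scipy.optimize import linprog

def solve_restricted_master(c, A1, b1, points, rays):
    """Solve (MP) over the known extreme points and rays of X (lists of 1-D
    arrays, at least one point); return the weights (lam, mu) and the dual
    solution (u1_bar, q_bar) of (M)."""
    cols = points + rays
    obj = np.array([c @ x for x in cols])              # cost of each column
    A_ub = np.column_stack([A1 @ x for x in cols])     # coupling rows: A1 x <= b1
    A_eq = np.array([[1.0] * len(points) + [0.0] * len(rays)])  # convexity row
    res = linprog(obj, A_ub=A_ub, b_ub=b1, A_eq=A_eq, b_eq=[1.0],
                  method="highs-ds")
    if res.status == 3:   # (MP) unbounded <=> (M) infeasible: (P) is unbounded
        return None
    if res.status != 0:   # (MP) infeasible <=> (M) unbounded: step 3) with (US)
        raise NotImplementedError("a dual ray would be needed here")
    lam, mu = res.x[:len(points)], res.x[len(points):]
    u1_bar = -res.ineqlin.marginals   # marginals are <= 0; u1 = -marginals >= 0
    q_bar = res.fun                   # objective value v_M, the upper bound
    return lam, mu, u1_bar, q_bar
```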

Since $( \operatorname{LP} )$ is a linear programming problem, one can calculate the reduced costs $\hat{c} _ { k } ^ { 1 }$, $k \in P$, and $\hat { c } _ { k } ^ { 2 }$, $k \in R$, for a certain basic solution of $( \operatorname{LP} )$, i.e. for a certain dual solution, denoted by $\overline { u }_1$ and $\overline { q }$, as

\begin{equation*} \hat { c } _ { k } ^ { 1 } = c ^ { T } x ^ { ( k ) } + ( A _ { 1 } x ^ { ( k ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 } - \overline { q } \end{equation*}

and

\begin{equation*} \hat { c } _ { k } ^ { 2 } = c ^ { T } \tilde { x } ^ { ( k ) } + ( A _ { 1 } \tilde { x } ^ { ( k ) } ) ^ { T } \overline { u } _ { 1 }. \end{equation*}

$( \operatorname{LP} )$ is a minimization problem, so the optimum is reached if $\hat { c } _ { k } ^ { 1 } \geq 0$ for all $k \in P$ and $\hat{c}_{k} ^ { 2 } \geq 0$ for all $k \in R$. A variable with a negative reduced cost can improve the solution, if it is entered into the basis. The variable with the most negative reduced cost can be found by minimizing $\hat { c } _ { k } ^ { 1 }$ over $k \in P$ and $\hat { c } _ { k } ^ { 2 }$ over $k \in R$.

Both of these, and the least of them, can be found by solving the dual subproblem $( \operatorname{S} )$.

So, if $( \operatorname{S} )$ has a bounded solution, $x ^ { ( l ) }$, this yields a $\lambda_{l}$-variable that should enter the basis if $\hat { c } _ { l } ^ { 1 } < 0$. If $( \operatorname{S} )$ has an unbounded solution, $\tilde{x} ^ { ( l ) }$, this yields a $\mu_{l}$-variable that should enter the basis if $\hat { c } _ { l } ^ { 2 } < 0$.

The $\lambda_{l}$-variable should enter the basis if $\hat { c } _ { l } ^ { 1 } = c ^ { T } x ^ { ( l ) } + ( A _ { 1 } x ^ { ( l ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 } - \overline { q } < 0$. First, $\overline { q }$ is the objective function value of $( \operatorname{LP} )$ at the present basis, which might not be optimal, so $\overline { q } \geq v ^ { * }$. Secondly, $g ( \overline { u } _ { 1 } ) = c ^ { T } x ^ { ( l ) } + ( A _ { 1 } x ^ { ( l ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 }$, which is the optimal objective function value of $( \operatorname{S} )$, and $g ( u _ { 1 } ) \leq v ^ { * }$. Thus,

\begin{equation*} \hat { c } _ { l } ^ { 1 } = c ^ { T } x ^ { ( l ) } + ( A _ { 1 } x ^ { ( l ) } - b _ { 1 } ) ^ { T } \overline { u } _ { 1 } - \overline { q } = g ( \overline { u } _ { 1 } ) - \overline { q } = g ( \overline { u } _ { 1 } ) - v _ { \text{M} }, \end{equation*}

and since $g ( \overline { u } _ { 1 } ) \leq v ^ { * } \leq \overline { q }$, $\hat{c}_{l}^{1} \leq 0$. Equality is only possible if $g ( \overline { u } _ { 1 } ) = v ^ { * } = \overline { q } = v _ { \text{M} }$, i.e. if the present basis is optimal. In other words, the existence of a variable $\lambda_{l}$ with $\hat { c }_l^1 < 0$ is equivalent to the lower bound obtained from the subproblem being strictly less than the upper bound obtained from the master problem.
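In code, this pricing step is a direct transcription of the reduced-cost formulas (same illustrative conventions as the sketches above):

```python
import numpy as np

def reduced_costs(c, A1, b1, points, rays, u1_bar, q_bar):
    """Reduced costs of the lambda-columns (k in P') and mu-columns (k in R')
    at the dual point (u1_bar, q_bar)."""
    c1 = np.array([c @ x + (A1 @ x - b1) @ u1_bar - q_bar for x in points])
    c2 = np.array([c @ xr + (A1 @ xr) @ u1_bar for xr in rays])
    return c1, c2   # the current basis is optimal when both are >= 0
```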

Finding variables with negative reduced costs to introduce into the basis in $( \operatorname{LP} )$ (i.e. to introduce in $( \operatorname{MP} )$) can thus be done by solving $( \operatorname{S} )$, since $( \operatorname{S} )$ yields the column with the most negative reduced cost (or the most violated cut of $( \operatorname {LD} )$). This forms the basis of the Dantzig–Wolfe algorithm.

In the simplex method, the entering variable does not need to be the one with the most negative reduced cost. Any negative reduced cost suffices. This means that instead of solving $( \operatorname{S} )$ to optimality, it can be sufficient to either find a direction $\tilde{x}$ of $X$ such that $c ^ { T } \tilde{x} + \overline { u } _ { 1 } ^ { T } A _ { 1 } \tilde{x} < 0$, or a point $\bar{x}$ in $X$ such that $c ^ { T } \overline{x} + \overline { u } _1^ { T } ( A _ { 1 } \overline{x} - b _ { 1 } ) < \overline { q }$.

In the Dantzig–Wolfe decomposition algorithm, there might occur unbounded dual solutions to $( \operatorname { M} )$. They can generate information about $g ( u _ { 1 } )$ and $U _ { 1 }$ (i.e. generate cuts for $( \operatorname { M} )$ or columns for $( \operatorname{MP} )$), or they can be optimal in $( \operatorname{PD} )$ (when $( \text{P} )$ is infeasible). The modified subproblem that accepts an unbounded solution as input (and yields a cut that eliminates it, unless it is optimal in $( \operatorname{PD} )$) is as follows. Let the unbounded solution be the non-negative (extreme) direction $\tilde{u}_1$:

\begin{equation*} ( \text{US} ) \tilde { v } ( \tilde { u } _ { 1 } ) = \left\{ \begin{array} { c l } { \operatorname { min } } & { \tilde { u } _ { 1 } ^ { T } ( A _ { 1 } x - b _ { 1 } ) } \\ { \text { s.t. } } & { A _ { 2 } x \leq b _ { 2 }, } \\ { } & { x \geq 0. } \end{array} \right. \end{equation*}

Assume that $\tilde{u} _ { 1 } \geq 0$, $\tilde{u} _ { 1 } \neq 0$, $X \neq \emptyset$ and that $( \operatorname{PD} )$ is feasible. Then $\tilde{v} ( \tilde { u } _ { 1 } ) > 0$ if and only if $( \text{P} )$ is infeasible and $\tilde{u}_1$ is a part of an unbounded solution of $( \operatorname{PD} )$. The algorithm should then be terminated.

Assume that $\tilde{u} _ { 1 } \geq 0$. If $\tilde{v} ( \tilde{u} _ { 1 } ) \leq 0$ then $( \text{US} )$ will yield a cut for $( \operatorname { M} )$ that ensures that $\tilde{u}_1$ is not an unbounded solution of $( \operatorname { M} )$.

In the Dantzig–Wolfe algorithm, [a1], one iterates between the dual subproblem, $( \operatorname{S} )$ or $( \text{US} )$, and the dual master problem, $( \operatorname { M} )$ (or $( \operatorname{MP} )$). The master problem indicates if there are necessary extreme points or directions missing, and the subproblem generates such points and directions, i.e. the cuts or columns that are needed. Since there is only a finite number of possible cuts, this algorithm will converge to the exact optimum in a finite number of steps.

Algorithmic description of the Dantzig–Wolfe decomposition method for solving $( \text{P} )$; a compact code sketch of the whole loop is given after step 7).

1) Generate a dual starting solution, $\overline { u }_1$, and, optionally, a starting set of primal extreme solutions, $x ^ { ( k ) }$ and $\tilde{x} ^ { ( k ) }$. Initialize $P ^ { \prime }$, $R ^ { \prime }$, $\underline { v } = - \infty$ and $\overline { v } = \infty$. Set $k = 1$.

2) Solve the dual subproblem $( \operatorname{S} )$ with $\overline { u }_1$.

If $( \operatorname{S} )$ is infeasible: Stop. $( \text{P} )$ is infeasible.

Else, a primal extreme solution is obtained: either a point $x ^ { ( k ) }$ and the value $g(\overline{u}_1)$ or a direction $\tilde{x} ^ { ( k ) }$. If $g ( \overline { u } _ { 1 } ) > \underline { v }$, set $\underline { v } = g ( \overline { u } _ { 1 } )$. Go to 4).

3) Solve the dual subproblem $( \text{US} )$ with $\tilde{u}_1$. A primal extreme solution is obtained: either a point $x ^ { ( k ) }$ and the value $\tilde{v} ( \tilde { u } _ { 1 } )$ or a direction $\tilde{x} ^ { ( k ) }$.

If $\tilde{v} ( \tilde { u } _ { 1 } ) > 0$: Stop. $( \text{P} )$ is infeasible. Else, go to 5).

4) Optimality test: If $\underline { v } = \overline { v }$, then go to 7).

5) Solve the dual master problem $( \operatorname { M} )$ (and $( \operatorname{MP} )$) with all the known primal solutions $x ^ { ( k ) }$, $k \in P ^ { \prime }$ and $\tilde{x} ^ { ( k ) }$, $k \in R ^ { \prime }$.

If $( \operatorname { M} )$ is infeasible: Stop. $( \text{P} )$ is unbounded.

Else, this yields $( \overline { \lambda } , \overline { \mu } )$ and a new dual solution: either a point $\overline { u }_1$ and an upper bound $\overline { v } = v _ { \text{M} }$ or a direction $\tilde{u}_1$.

6) Optimality test: If $\underline { v } = \overline { v }$, then go to 7). Else, set $ k = k + 1$.

If the dual solution is a point, $\overline { u }_1$, go to 2), and if it is a direction, $\tilde{u}_1$, go to 3).

7) Stop. The optimal solution of $( \text{P} )$ is $\overline{x} = \sum _ { k \in P ^ { \prime } } \overline { \lambda } _ { k } x ^ { ( k ) } + \sum _ { k \in R ^ { \prime } } \overline { \mu } _ { k } \tilde{x} ^ { ( k ) }$.
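Putting the pieces together, the following is a compact sketch of steps 1)–7) for the easy case in which $( \operatorname { M} )$ stays feasible and bounded and $( \operatorname{S} )$ stays bounded, so no extreme rays arise; `solve_subproblem` and `solve_restricted_master` are the illustrative helpers sketched earlier, and `x0` is an assumed extreme point of $X$ that also satisfies $A _ { 1 } x _ { 0 } \leq b _ { 1 }$.

```python
import numpy as np

def dantzig_wolfe(c, A1, b1, A2, b2, x0, max_iter=100, tol=1e-9):
    """Column-generation loop: master -> subproblem -> new column, until the
    lower bound from (S) meets the upper bound from (M)."""
    points, rays = [x0], []          # P' and R'
    v_lo = -np.inf                   # lower bound, v underbar
    for _ in range(max_iter):
        master = solve_restricted_master(c, A1, b1, points, rays)
        if master is None:
            raise RuntimeError("(M) infeasible: (P) is unbounded")
        lam, mu, u1_bar, q_bar = master      # q_bar = v_M is the upper bound
        g_val, x_new = solve_subproblem(c, A1, b1, A2, b2, u1_bar)
        if x_new is None:
            raise NotImplementedError("(S) unbounded: an extreme ray is needed")
        v_lo = max(v_lo, g_val)      # step 2): keep the best lower bound
        if q_bar - v_lo <= tol:      # steps 4)/6): the bounds have met
            break
        points.append(x_new)         # new dual value cut / new column
    # step 7): recover the primal solution from the master weights
    x_bar = sum(l * x for l, x in zip(lam, points))
    return x_bar, q_bar
```

On a tiny hypothetical instance this converges in two iterations:

```python
c = np.array([-1.0, -1.0])
A1, b1 = np.array([[1.0, 1.0]]), np.array([1.5])  # coupling constraint
A2, b2 = np.eye(2), np.ones(2)                    # block constraints: x <= 1
x_bar, v = dantzig_wolfe(c, A1, b1, A2, b2, x0=np.zeros(2))  # v = -1.5
```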

Comments.

In step 5) one always sets $\overline { v } = v _ { \text{M} }$, because the objective function value of $( \operatorname { M} )$ is nonincreasing (in each iteration a constraint is added), but in step 2) one only sets $\underline { v } = g ( \overline { u } _ { 1 } )$ if this yields an increase in $\underline{v}$, since $g(\overline{u}_1)$ does not necessarily increase monotonically. If $( \operatorname { M} )$ is infeasible, the direction of the unbounded solution of $( \text{P} )$ can be obtained from the unbounded solution $\tilde{\mu}$ of $( \operatorname{MP} )$ as $\tilde { x } = \sum _ { k \in R ^ { \prime } } \tilde { \mu } _ { k }\, \tilde { x } ^ { ( k ) }$.

Convergence.

The algorithm has finite convergence, as described below.

Assume that $( \text{P} )$ is feasible and that $\overline { u }_1$ is a solution of $( \operatorname { M} )$. Then $g ( \overline { u } _ { 1 } ) = v _ { M }$ if and only if $v _ { M } = v ^ { * }$ and $\overline { u }_1$ is a part of the optimal solution of $( \operatorname{PD} )$. If this is true and $( \overline { \lambda } , \overline { \mu } )$ is the solution of $( \operatorname{MP} )$, then $\overline{x} = \sum _ { k \in P ^ { \prime } } \overline { \lambda } _ { k } x ^ { ( k ) } + \sum _ { k \in R ^ { \prime } } \overline { \mu } _ { k } \tilde{x} ^ { ( k ) }$ is an optimal solution of $( \text{P} )$.

Assume that $\overline { u } _ { 1 } \geq 0$. If $g ( \overline { u } _ { 1 } ) < v _ { M } = \overline { q }$, then $( \operatorname{S} )$ will yield a cut that ensures that ($\overline { u }_{ 1 } , \overline { q }$) is not a solution of $( \operatorname { M} )$.

A key property is that $( \operatorname{S} )$ and $( \text{US} )$ never yield the same solution more than once (unless it is optimal). This depends on the inputs to $( \operatorname{S} )$ and $( \text{US} )$, $\overline { u }_1$ and $\tilde{u}_1$, which are obtained from the dual master problem, $( \operatorname { M} )$.

Assume that $\overline { u } _ { 1 } \geq 0$ is a solution of $( \operatorname { M} )$. If $v _ { M } > v ^ { * }$, then solving $( \operatorname{S} )$ at $\overline { u }_1$ will yield a so far unknown cut that ensures that $( \operatorname { M} )$ will not have the same solution in the next iteration, i.e. $( \operatorname { M} )$ will yield either a different $u_1$-solution or the same, $\overline { u }_1$, with a lower value of $\overline { q }$ ($= v _ { M }$).

Assume that $\tilde{u} _ { 1 } \geq 0$ is an unbounded solution of $( \operatorname { M} )$. Then solving $( \text{US} )$ at $\tilde{u}_1$ will yield a so far unknown cut that ensures that $( \operatorname { M} )$ will not have the same solution in the next iteration, or will prove that $( \text{P} )$ is infeasible.

All this shows that the Dantzig–Wolfe decomposition algorithm for solving $( \text{P} )$ will terminate with the correct solution within a finite number of iterations.

References

[a1] G.B. Dantzig, P. Wolfe, "Decomposition principle for linear programs", Oper. Res. 8 (1960) pp. 101–111
This article was adapted from an original article by K. Holmberg (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.