
Dantzig-Wolfe decomposition

From Encyclopedia of Mathematics
Revision as of 17:10, 7 February 2011 by 127.0.0.1 (talk) (Importing text file)

Dantzig–Wolfe decomposition algorithm

Consider the following linear programming problem (LP-problem), with a row structure as indicated by the two sets of constraints:

$$(P)\qquad v^{*}=\min\ c^{T}x$$

$$\text{s.t.}\quad A_{1}x\le b_{1},\qquad A_{2}x\le b_{2},\qquad x\ge 0.$$

The Dantzig–Wolfe approach is often used for the case when $(P)$ is a block-angular (linear) programming problem. Then the second constraint set $A_{2}x\le b_{2}$ is separable into a number of parts, containing disjoint sets of variables.
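For illustration (the block notation $B_{i}$, $D_{i}$, $d_{i}$ is introduced only for this schematic example), a block-angular problem with two blocks has the form

$$\min\ c_{1}^{T}x_{1}+c_{2}^{T}x_{2}$$

$$\text{s.t.}\quad B_{1}x_{1}+B_{2}x_{2}\le b_{1},\qquad D_{1}x_{1}\le d_{1},\qquad D_{2}x_{2}\le d_{2},\qquad x_{1},x_{2}\ge 0,$$

where the coupling constraints $B_{1}x_{1}+B_{2}x_{2}\le b_{1}$ constitute the first constraint set and the block constraints $D_{i}x_{i}\le d_{i}$ the second; relaxing the coupling constraints makes the problem separate into one independent subproblem per block.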

The optimal solution is usually denoted by $x^{*}$. One denotes the LP-dual of $(P)$ by $(D)$, and the optimal dual solution by $(u_{1}^{*},u_{2}^{*})$, with one dual vector for each constraint set.

The row structure can be utilized by applying Lagrangean duality to the first constraint set $A_{1}x\le b_{1}$:

$$v^{*}=\max_{u\ge 0}\ g(u),$$

where, for any fixed $\bar u\ge 0$,

$$(S_{\bar u})\qquad g(\bar u)=-b_{1}^{T}\bar u+\min_{x\in X}\,(c^{T}+\bar u^{T}A_{1})x,$$

and $X=\{x\ge 0:\ A_{2}x\le b_{2}\}$.

$(S_{\bar u})$ is called the subproblem and must be solved in order to evaluate the dual function $g$ at a certain point, $\bar u$.

One can show that $g(u)\le v^{*}$ for all $u\ge 0$: indeed, $g(u)=\min_{x\in X}\,c^{T}x+u^{T}(A_{1}x-b_{1})\le c^{T}x^{*}+u^{T}(A_{1}x^{*}-b_{1})\le c^{T}x^{*}=v^{*}$, since $x^{*}\in X$, $u\ge 0$ and $A_{1}x^{*}\le b_{1}$. The controllability of the subproblem is limited. Inserting $\bar u=u_{1}^{*}$ in $(S_{\bar u})$ yields $g(u_{1}^{*})=v^{*}$, but possibly not the optimal solution $x^{*}$.

$V$ is the region where $g$ is bounded, i.e. where the dual of the subproblem is feasible, leading to

$$v^{*}=\max_{u\in V}\ g(u).$$

Now consider $X=\{x\ge 0:\ A_{2}x\le b_{2}\}$, the feasible set of the subproblem. Since $X$ is a polyhedron, it has a finite number of extreme points, $x^{(k)}$, $k\in P$, and extreme rays, $\tilde x^{(l)}$, $l\in R$. Any point in $X$ can be described as

$$x=\sum_{k\in P}\lambda_{k}x^{(k)}+\sum_{l\in R}\mu_{l}\tilde x^{(l)},\tag{a1}$$

where $\sum_{k\in P}\lambda_{k}=1$, $\lambda_{k}\ge 0$ for all $k\in P$ and $\mu_{l}\ge 0$ for all $l\in R$. If $g(\bar u)>-\infty$, then the optimal solution of $(S_{\bar u})$ is bounded, and since $(S_{\bar u})$ is a linear programming problem, the optimum is clearly attained in one of the extreme points of $X$. So,

$$g(\bar u)=-b_{1}^{T}\bar u+\min_{k\in P}\,(c^{T}+\bar u^{T}A_{1})x^{(k)}.$$

Also, it is sufficient to include the extreme directions, yielding

$$V=\{u\ge 0:\ (c^{T}+u^{T}A_{1})\tilde x^{(l)}\ge 0\ \text{for all}\ l\in R\}.$$
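The representation (a1) can be made concrete on a toy polytope (a hypothetical example, not part of the original article): the unit square is bounded, so it has four extreme points, no extreme rays, and every point is a convex combination of the corners. A minimal sketch in Python:

```python
# Representation (a1) on a toy polytope: the unit square [0,1]^2.
# Its extreme points are the four corners; being bounded, it has no
# extreme rays, so any x in X is a convex combination of corners.

corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def represent(x):
    """Return weights lambda_k >= 0 with sum 1 such that
    sum_k lambda_k * corner_k == x (bilinear interpolation)."""
    x1, x2 = x
    return [(1 - x1) * (1 - x2),  # weight of (0,0)
            x1 * (1 - x2),        # weight of (1,0)
            (1 - x1) * x2,        # weight of (0,1)
            x1 * x2]              # weight of (1,1)

lam = represent((0.3, 0.8))
assert abs(sum(lam) - 1.0) < 1e-9 and min(lam) >= 0.0
rebuilt = tuple(sum(l * c[i] for l, c in zip(lam, corners)) for i in (0, 1))
print(rebuilt)  # -> (0.3, 0.8) up to floating-point rounding
```

For a polytope this weight choice is not unique; any point of the square admits many convex-combination representations, and (a1) only asserts that at least one exists.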

These descriptions of $g$ and $V$ can be used to form a problem that is (by construction) equivalent to the Lagrangean dual of $(P)$, in the sense that its optimal objective value is equal to $v^{*}$ and $u_{1}^{*}$ is an optimal solution:

$$(DM)\qquad \max\ q$$

$$\text{s.t.}\quad q\le -b_{1}^{T}u+(c^{T}+u^{T}A_{1})x^{(k)},\qquad k\in P,$$

$$(c^{T}+u^{T}A_{1})\tilde x^{(l)}\ge 0,\qquad l\in R,\qquad u\ge 0.$$

The dual variables of the first and second constraint sets in $(DM)$ are actually the weights $\lambda_{k}$ and $\mu_{l}$ used above, so the LP-dual of $(DM)$ is as given below:

$$(M)\qquad v^{*}=\min\ \sum_{k\in P}\lambda_{k}c^{T}x^{(k)}+\sum_{l\in R}\mu_{l}c^{T}\tilde x^{(l)}$$

$$\text{s.t.}\quad \sum_{k\in P}\lambda_{k}A_{1}x^{(k)}+\sum_{l\in R}\mu_{l}A_{1}\tilde x^{(l)}\le b_{1},$$

$$\sum_{k\in P}\lambda_{k}=1,\qquad \lambda_{k}\ge 0,\ k\in P,\qquad \mu_{l}\ge 0,\ l\in R.$$

$(M)$ can be directly obtained from $(P)$ by doing the substitution (a1) above.

If $(\lambda^{*},\mu^{*})$ is optimal in $(M)$, then

$$x^{*}=\sum_{k\in P}\lambda_{k}^{*}x^{(k)}+\sum_{l\in R}\mu_{l}^{*}\tilde x^{(l)}$$

is optimal in $(P)$.

The number of extreme points, $|P|$, and the number of extreme directions, $|R|$, are generally very large, so in most cases $(M)$ is too large to be solved directly with a linear programming code. Therefore one solves $(M)$ with column generation. An approximation of $(M)$ is obtained by including only a subset of the variables. Let $P'\subseteq P$ and $R'\subseteq R$ be the index sets of the extreme points and extreme directions that are included.

Replacing $P$ by $P'$ and $R$ by $R'$ in $(M)$ yields the restricted master problem, or Dantzig–Wolfe master problem, $(M')$, with objective function value $v_{M'}$. Doing the same in $(DM)$ yields the problem $(DM')$. The constraints of $(DM)$ indexed by $P$ are called dual value cuts, and those indexed by $R$ dual feasibility cuts. $(M')$ can have the same optimal $x$-solution as $(M)$ (and thus as $(P)$) even if a large number of unnecessary dual cuts are missing in $(DM')$. If the descriptions of $g$ or $V$ are insufficient at a point $\bar u$, the solution obtained will violate at least one of the missing cuts. Since $(DM')$ is obtained from $(DM)$ by simply dropping a number of constraints, it is easy to see that $v_{M'}\ge v^{*}$, and that $v_{M'}$ will converge towards $v^{*}$ if the sets of cuts grow towards those of $(DM)$.

Since $(M')$ is a linear programming problem, one can calculate the reduced cost for each variable, $\lambda_{k}$, $k\in P$, and $\mu_{l}$, $l\in R$, for a certain basic solution of $(M')$, i.e. for a certain dual solution, denoted by $\bar u$ and $\bar q$, as

$$\hat c_{k}=(c^{T}+\bar u^{T}A_{1})x^{(k)}-b_{1}^{T}\bar u-\bar q,\qquad k\in P,$$

$$\hat c_{l}=(c^{T}+\bar u^{T}A_{1})\tilde x^{(l)},\qquad l\in R.$$

$(M')$ is a minimization problem, so the optimum is reached if $\hat c_{k}\ge 0$ for all $k\in P$ and $\hat c_{l}\ge 0$ for all $l\in R$. A variable with a negative reduced cost can improve the solution if it is entered into the basis. The variable with the most negative reduced cost can be found by

$$\min\Bigl\{\min_{k\in P}\hat c_{k},\ \min_{l\in R}\hat c_{l}\Bigr\}.$$

Both of these minima, and the least of them, can be found by solving the dual subproblem $(S_{\bar u})$.

So, if $(S_{\bar u})$ has a bounded solution, an extreme point $x^{(k)}$, this yields a $\lambda$-variable that should enter the basis if $\hat c_{k}<0$. If $(S_{\bar u})$ has an unbounded solution, an extreme direction $\tilde x^{(l)}$, this yields a $\mu$-variable that should enter the basis if $\hat c_{l}<0$.

The $\lambda$-variable should enter the basis if $g(\bar u)<\bar q$, since $\min_{k\in P}\hat c_{k}=g(\bar u)-\bar q$. Here $\bar q=v_{M'}$ is the objective function value of $(M')$ at the present basis, which might not be optimal for $(M)$, so $\bar q\ge v^{*}$. Secondly, $g(\bar u)\le v_{D}^{*}$, which is the optimal objective function value of the Lagrangean dual, and $v_{D}^{*}=v^{*}$. Thus,

$$g(\bar u)\le v^{*}\le\bar q,$$

and since $g(\bar u)\le\bar q$, $\min_{k\in P}\hat c_{k}\le 0$. Equality is only possible if $g(\bar u)=\bar q$, i.e. if the present basis is optimal. In other words, the existence of a variable with $\hat c_{k}<0$ is equivalent to the lower bound $g(\bar u)$ obtained from the subproblem being strictly less than the upper bound $\bar q$ obtained from the master problem.

Finding variables with negative reduced costs to introduce into the basis in $(M')$ (i.e. cuts to introduce in $(DM')$) can thus be done by solving $(S_{\bar u})$, since $(S_{\bar u})$ yields the column with the most negative reduced cost (or the most violated cut of $(DM)$). This forms the basis of the Dantzig–Wolfe algorithm.

In the simplex method, the entering variable does not need to be the one with the most negative reduced cost; any negative reduced cost suffices. This means that instead of solving $(S_{\bar u})$ to optimality, it can be sufficient either to find a direction $\tilde x^{(l)}$ of $X$ such that $(c^{T}+\bar u^{T}A_{1})\tilde x^{(l)}<0$, or a point $x^{(k)}$ in $X$ such that $(c^{T}+\bar u^{T}A_{1})x^{(k)}-b_{1}^{T}\bar u<\bar q$.

In the Dantzig–Wolfe decomposition algorithm, there might occur unbounded dual solutions to $(M')$, i.e. unbounded solutions of $(DM')$. They can generate information about $g$ and $V$ (i.e. generate cuts for $(DM')$ or columns for $(M')$), or they can be optimal in $(DM)$ (when $(M)$ is infeasible). The modified subproblem that accepts an unbounded solution as input (and yields a cut that eliminates it, unless it is optimal in $(DM)$) is as follows. Let the unbounded solution be the non-negative (extreme) direction $\tilde u$:

$$(\tilde S_{\tilde u})\qquad \tilde g(\tilde u)=-b_{1}^{T}\tilde u+\min_{x\in X}\,\tilde u^{T}A_{1}x.$$

Assume that $\tilde u\ge 0$, $\tilde u\ne 0$, and that $X$ is non-empty. Then $\tilde g(\tilde u)\ge 0$ if and only if $(P)$ is infeasible and $\tilde u$ is a part of an unbounded solution of $(DM)$. The algorithm should then be terminated.

Assume that $\tilde u\ge 0$. If $\tilde g(\tilde u)<0$, then $(\tilde S_{\tilde u})$ will yield a cut for $(DM')$ that ensures that $\tilde u$ is not an unbounded solution of $(DM')$.

In the Dantzig–Wolfe algorithm [a1], one iterates between the dual subproblem, $(S_{\bar u})$ or $(\tilde S_{\tilde u})$, and the dual master problem $(DM')$ (or $(M')$). The master problem indicates whether there are necessary extreme points or directions missing, and the subproblem generates such points and directions, i.e. the cuts or columns that are needed. Since there is only a finite number of possible cuts, this algorithm will converge to the exact optimum in a finite number of steps.

Algorithmic description of the Dantzig–Wolfe decomposition method for solving $(P)$.

1) Generate a dual starting solution, $\bar u\ge 0$, and, optionally, a starting set of primal extreme solutions, $x^{(k)}$, $k\in P'$, and $\tilde x^{(l)}$, $l\in R'$. Initialize $P'$, $R'$ and the bounds $\underline v=-\infty$, $\bar v=\infty$. Set $t=1$.

2) Solve the dual subproblem $(S_{\bar u})$ with $\bar u$.

If $(S_{\bar u})$ is infeasible: Stop. $(P)$ is infeasible.

Else, a primal extreme solution is obtained: either a point $x^{(k)}$ and the value $g(\bar u)$, or a direction $\tilde x^{(l)}$. If $g(\bar u)>\underline v$, set $\underline v=g(\bar u)$. Go to 4).

3) Solve the modified dual subproblem $(\tilde S_{\tilde u})$ with $\tilde u$. A primal extreme solution is obtained: either a point $x^{(k)}$ and the value $\tilde g(\tilde u)$, or a direction $\tilde x^{(l)}$.

If $\tilde g(\tilde u)\ge 0$: Stop. $(P)$ is infeasible. Else, go to 5).

4) Optimality test: If $\underline v=\bar v$, then go to 7).

5) Solve the dual master problem $(DM')$ (and $(M')$) with all the known primal solutions $x^{(k)}$, $k\in P'$, and $\tilde x^{(l)}$, $l\in R'$.

If $(DM')$ is infeasible: Stop. $(P)$ is unbounded.

Else, this yields $v_{M'}$ and a new dual solution: either a point $(\bar u,\bar q)$ and an upper bound $\bar v=v_{M'}$, or a direction $\tilde u$.

6) Optimality test: If $\underline v=\bar v$, then go to 7). Else, set $t=t+1$.

If the dual solution is a point, $\bar u$, go to 2), and if it is a direction, $\tilde u$, go to 3).

7) Stop. The optimal solution of $(P)$ is

$$x^{*}=\sum_{k\in P'}\lambda_{k}^{*}x^{(k)}+\sum_{l\in R'}\mu_{l}^{*}\tilde x^{(l)}.$$
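The steps above can be sketched in code on a hypothetical toy instance (all data and helper names are illustrative assumptions, not from the original article): one coupling constraint, $X$ the unit square, so there are no extreme rays and step 3) is never needed; the restricted master is solved by enumerating its (at most two-column) bases rather than by a simplex code.

```python
# Dantzig-Wolfe column generation on a toy LP (illustrative data):
#   min -x1 - 2*x2  s.t.  x1 + x2 <= 1   (coupling row: a^T x <= b1)
#                         0 <= x1 <= 1, 0 <= x2 <= 1   (X, bounded)
# X is the unit square: 4 extreme points, no extreme rays.
from itertools import combinations

c = (-1.0, -2.0)          # objective coefficients
a, b1 = (1.0, 1.0), 1.0   # single coupling row
X_corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dot = lambda p, q: sum(pi * qi for pi, qi in zip(p, q))

def subproblem(u):
    """Evaluate g(u) = -b1*u + min over X of (c + u*a)^T x by
    enumerating the extreme points of X (here X is tiny)."""
    xk = min(X_corners, key=lambda x: dot(x, c) + u * dot(x, a))
    return -b1 * u + dot(xk, c) + u * dot(xk, a), xk

def restricted_master(cols):
    """Solve (M') over the included extreme points by enumerating
    bases, returning value, weights and duals (u, q) that are
    feasible for all cuts present in (DM')."""
    cv = [dot(x, c) for x in cols]      # column costs c^T x^(k)
    sv = [dot(x, a) for x in cols]      # coupling entries a^T x^(k)
    best, lam = None, None
    for k in range(len(cols)):          # one column, slack basic
        if sv[k] <= b1 + 1e-9 and (best is None or cv[k] < best - 1e-12):
            best, lam = cv[k], {k: 1.0}
    for k, m in combinations(range(len(cols)), 2):  # coupling tight
        if abs(sv[k] - sv[m]) < 1e-12:
            continue
        lk = (b1 - sv[m]) / (sv[k] - sv[m]); lm = 1.0 - lk
        if lk < -1e-9 or lm < -1e-9:
            continue
        v = lk * cv[k] + lm * cv[m]
        if best is None or v < best - 1e-12:
            best, lam = v, {k: lk, m: lm}
    # dual candidates from each basis; keep one feasible for all cuts
    cand = [(0.0, cv[k]) for k in range(len(cols))]
    cand += [((cv[k] - cv[m]) / (sv[m] - sv[k]),
              cv[k] - (b1 - sv[k]) * (cv[k] - cv[m]) / (sv[m] - sv[k]))
             for k, m in combinations(range(len(cols)), 2)
             if abs(sv[k] - sv[m]) > 1e-12]
    for u, q in cand:
        if (u >= -1e-9 and abs(q - best) < 1e-6 and
                all(q + (b1 - sv[j]) * u <= cv[j] + 1e-6
                    for j in range(len(cols)))):
            return best, lam, max(u, 0.0), q
    raise RuntimeError("no dual-feasible basis found")

cols = [(0.0, 0.0)]                     # starting extreme point
for it in range(20):
    v, lam, u, q = restricted_master(cols)
    g, xk = subproblem(u)               # pricing: most violated cut
    print(f"iteration {it}: bounds {g:.3f} <= v* <= {q:.3f}")
    if g >= q - 1e-9:                   # no violated cut: optimal
        break
    cols.append(xk)                     # add the generated column

x_opt = tuple(sum(w * cols[k][i] for k, w in lam.items()) for i in (0, 1))
print("optimal x =", x_opt, "value =", v)
```

Each round prints the sandwich $g(\bar u)\le v^{*}\le\bar q$ from the article; on this instance the run generates the corners $(1,1)$ and $(0,1)$ and stops when the bounds meet at $-2$, with optimal point $(0,1)$.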

Comments.

In step 5) one always sets $\bar v=v_{M'}$, because the objective function value of $(M')$ is non-increasing (since in each iteration a constraint is added to $(DM')$), but in step 2) one only sets $\underline v=g(\bar u)$ if this yields an increase in $\underline v$, since the increase in $g(\bar u)$ is probably not monotone. If $(M')$ is infeasible, $(DM')$ is unbounded, and the direction $\tilde u$ of the unbounded solution of $(DM')$ can be obtained as the dual ray produced when solving $(M')$.

Convergence.

The algorithm has finite convergence, as described below.

Assume that $X$ is non-empty and that $(\bar u,\bar q)$ is a solution of $(DM')$. Then $g(\bar u)=\bar q$ if and only if $\bar q=v^{*}$ and $\bar u$ is a part of the optimal solution of $(DM)$. If this is true and $(\lambda^{*},\mu^{*})$ is the solution of $(M')$, then $x^{*}=\sum_{k\in P'}\lambda_{k}^{*}x^{(k)}+\sum_{l\in R'}\mu_{l}^{*}\tilde x^{(l)}$ is an optimal solution of $(P)$.

Assume that $\bar u\ge 0$. If $g(\bar u)<\bar q$, then $(S_{\bar u})$ will yield a cut that ensures that $(\bar u,\bar q)$ is not a solution of $(DM')$ in the next iteration.

It is important to note that $(S_{\bar u})$ and $(\tilde S_{\tilde u})$ do not yield any solutions more than once (unless they are optimal). This depends on the inputs to $(S_{\bar u})$ and $(\tilde S_{\tilde u})$, $\bar u$ and $\tilde u$, which are obtained from the dual master problem $(DM')$.

Assume that $(\bar u,\bar q)$ is a solution of $(DM')$. If $g(\bar u)<\bar q$, then solving $(S_{\bar u})$ at $\bar u$ will yield a so far unknown cut that ensures that $(DM')$ will not have the same solution in the next iteration, i.e. $(DM')$ will yield either a different $u$-solution or the same, $\bar u$, with a lower value of $\bar q$ ($=v_{M'}$).

Assume that $\tilde u$ is an unbounded solution of $(DM')$. Then solving $(\tilde S_{\tilde u})$ at $\tilde u$ will yield a so far unknown cut that ensures that $(DM')$ will not have the same unbounded solution in the next iteration, or will prove that $(P)$ is infeasible.

All this shows that the Dantzig–Wolfe decomposition algorithm for solving $(P)$ will terminate with the correct solution within a finite number of iterations.

References

[a1] G.B. Dantzig, P. Wolfe, "Decomposition principle for linear programs", Oper. Res. 8 (1960) pp. 101–111
How to Cite This Entry:
Dantzig-Wolfe decomposition. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Dantzig-Wolfe_decomposition&oldid=14930
This article was adapted from an original article by K. Holmberg (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article