Simplex method



method of sequential plan improvement

A method for solving the general linear programming problem:

$$ \sum _ {j = 1 } ^ { n } c _ {j} x _ {j} \mapsto \max ; \ \ \sum _ {j = 1 } ^ { n } A _ {j} x _ {j} = A _ {0} ; $$

$$ x _ {j} \geq 0,\ j = 1 \dots n, $$

where $ A _ {j} = ( a _ {1j} \dots a _ {mj} ) ^ {T} $ for $ j = 0 \dots n $.
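For concreteness, the data of such a problem can be laid out directly as arrays. The following is a minimal sketch in Python with NumPy; the instance (and the names A, A0, c) is made up for illustration and does not come from the article.

```python
import numpy as np

# Maximize 2 x_1 + 3 x_2 subject to
#   x_1 +   x_2 + x_3       = 4
#   x_1 + 3 x_2       + x_4 = 6,   x_j >= 0.
# The columns of A are the vectors A_1, ..., A_n (here m = 2, n = 4).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
A0 = np.array([4.0, 6.0])            # right-hand side A_0
c = np.array([2.0, 3.0, 0.0, 0.0])   # objective coefficients c_1, ..., c_n

# The slack columns A_3, A_4 form an obvious starting basis (the identity),
# corresponding to the vertex x = (0, 0, 4, 6).
```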

The simplex method is the most widely used linear programming method. It consists of moving over adjacent vertices of the polyhedral set defined by the constraints of the linear programming problem, and it reaches an answer in a finite number of iterations. The basis of a vertex $ x = ( x _ {1} \dots x _ {n} ) $ of the polyhedral feasible set of the problem is a system of $ m $ linearly independent vectors $ A _ {s _ {1} } \dots A _ {s _ {m} } $ ($ s _ \alpha \geq 1 $, $ \alpha = 1 \dots m $) such that $ x _ {j} = 0 $ if $ j \notin \{ s _ {1} \dots s _ {m} \} $. The input for each iteration of the method consists of the basis $ A _ {x} = ( A _ {s _ {1} } \dots A _ {s _ {m} } ) $ of $ x $, the parameters $ x _ {ij} $ defined by

$$ A _ {j} = \ \sum _ {i = 1 } ^ { m } x _ {ij} A _ {s _ {i} } ,\ \ j = 0 \dots n $$

(in particular, $ x _ {i0} = x _ {s _ {i} } $ are the basis components of $ x $), and the parameters

$$ \Delta _ {j} = \ \sum _ {i = 1 } ^ { m } c _ {s _ {i} } x _ {ij} - c _ {j} ,\ \ j = 1 \dots n. $$
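Given a basis, these parameters can be computed by solving linear systems with the basis matrix. A sketch continuing the example above (inverting the basis explicitly is adequate for illustration, though practical implementations work with factorizations instead):

```python
basis = [2, 3]              # 0-based indices: the basis (A_3, A_4) of the vertex x
B = A[:, basis]             # basis matrix A_x
Binv = np.linalg.inv(B)

X = Binv @ A                # X[i, j] = x_{ij}: coordinates of A_j in the basis
x0 = Binv @ A0              # x_{i0} = x_{s_i}: basis components of the vertex x
delta = c[basis] @ X - c    # Delta_j = sum_i c_{s_i} x_{ij} - c_j
# Here delta = [-2, -3, 0, 0]: the vertex is not yet optimal.
```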

If $ \Delta _ {j} \geq 0 $ for all $ j $, then $ x $ is the desired solution of the problem. Otherwise one chooses a negative parameter $ \Delta _ {k} $. If none of the $ x _ {ik} $, $ i = 1 \dots m $, are positive, then the problem is unsolvable, because the objective function of the problem is unbounded on its polyhedral set. If some $ x _ {ik} $ is positive, $ x $ is replaced by $ x ^ \prime = x + \theta x ^ {k} $, where

$$ x ^ {k} = ( x _ {1} ^ {k} \dots x _ {n} ^ {k} ),\ \ x _ {s _ {i} } ^ {k} = - x _ {ik} ,\ \ i = 1 \dots m,\ \ x _ {k} ^ {k} = 1, $$

the remaining components of $ x ^ {k} $ being zero, and

$$ \theta = \ \min _ {i: x _ {ik} > 0 } \ \frac{x _ {i0} }{x _ {ik} } = \ \frac{x _ {r0} }{x _ {rk} } . $$

Then the vertex $ x ^ \prime $ has a basis $ A _ {x ^ \prime } $, which differs from $ A _ {x} $ in that $ A _ {s _ {r} } $ is replaced by $ A _ {k} $. The parameters $ x _ {ij} ^ \prime $ and $ \Delta _ {j} ^ \prime $ for $ A _ {x ^ \prime } $ are expressed in terms of $ x _ {ij} $ and $ \Delta _ {j} $ by simple recurrence formulas.
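The article does not write these recurrences out; pivoting on the element $ x _ {rk} $, they take the classical Gauss–Jordan form

$$ x _ {rj} ^ \prime = \ \frac{x _ {rj} }{x _ {rk} } ,\ \ x _ {ij} ^ \prime = \ x _ {ij} - x _ {ik} \frac{x _ {rj} }{x _ {rk} } \ \ (i \neq r),\ \ \Delta _ {j} ^ \prime = \ \Delta _ {j} - \Delta _ {k} \frac{x _ {rj} }{x _ {rk} } , $$

where for $ j = 0 $ the first two formulas update the basis components $ x _ {i0} $ of the new vertex.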

The case $ \Delta _ {j} \geq 0 $ for all $ j $ means that along every edge of the polyhedral set beginning at $ x $, the objective function does not increase. The other two cases correspond to the existence of an edge along which the objective function increases: if none of the $ x _ {ik} $ is positive, this edge is a ray, and the objective function is unbounded along it; if some $ x _ {ik} $ is positive, the edge is an interval with $ x ^ \prime $ as its other end. Iterations are continued until one obtains the optimal vertex or has to conclude that the problem is unsolvable.
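The whole iteration can be assembled into a few lines of dense, textbook-style code. The sketch below continues the NumPy example; it uses the simple "most negative $ \Delta _ {k} $" choice, carries no anti-cycling safeguard, and reinverts the basis from scratch at every step, so it illustrates the logic of the method rather than a practical implementation.

```python
def simplex_solve(A, A0, c, basis, tol=1e-9):
    """Maximize c^T x subject to A x = A0, x >= 0, from a feasible starting basis."""
    m, n = A.shape
    basis = list(basis)
    while True:
        Binv = np.linalg.inv(A[:, basis])
        X = Binv @ A                     # x_{ij}
        x0 = Binv @ A0                   # x_{i0}
        delta = c[basis] @ X - c         # Delta_j
        k = int(np.argmin(delta))
        if delta[k] >= -tol:             # all Delta_j >= 0: optimal vertex found
            x = np.zeros(n)
            x[basis] = x0
            return x
        if np.all(X[:, k] <= tol):       # no x_{ik} > 0: objective unbounded
            raise ValueError("objective function is unbounded on the feasible set")
        ratios = np.full(m, np.inf)      # ratio test: theta = min x_{i0} / x_{ik}
        pos = X[:, k] > tol
        ratios[pos] = x0[pos] / X[pos, k]
        r = int(np.argmin(ratios))
        basis[r] = k                     # A_{s_r} leaves the basis, A_k enters

# For the instance above, simplex_solve(A, A0, c, [2, 3]) yields x = (3, 1, 0, 0).
```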

The implementation of the simplex method used when the problem is of sufficiently large size is usually based on another of its algorithmic forms, in which the input for every iteration is the inverse matrix $ A _ {x} ^ {-1} $ of $ A _ {x} $ (the inverse matrix method). It is designed for linear programming problems whose matrix $ A = ( A _ {1} \dots A _ {n} ) $ is sparse (the number of non-zero entries $ a _ {ij} $ is small compared to $ mn $). Sparse matrices can be kept in the computer's memory in compact form, in which only the non-zero elements and their positions are recorded. Special ways of storing the inverse of the basis have been developed that reduce the memory used for the description of $ A _ {x} ^ {-1} $. They are based on representing $ A _ {x} ^ {-1} $ as a product of elementary factors (multipliers) that differ from the identity matrix in only one column. The number of non-zero elements of these factors depends on the order in which the vectors are introduced into the basis. Therefore, after a certain number of factors has accumulated, a so-called repetition (reinversion) is carried out, yielding a shorter multiplicative representation of $ A _ {x} ^ {-1} $.
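A minimal sketch of this product form, using dense vectors for readability (in practice each factor's column would itself be stored sparsely); the names Eta and basis_inverse_apply are illustrative and do not reflect the storage scheme of any particular solver:

```python
import numpy as np

class Eta:
    """An elementary factor: the identity matrix with column `col` replaced by `v`."""
    def __init__(self, col, v):
        self.col, self.v = col, np.asarray(v, dtype=float)

    def apply(self, y):
        """Return E @ y without ever forming E; only the special column is stored."""
        out = y + self.v * y[self.col]                   # rows i != col gain v_i * y_col
        out[self.col] = self.v[self.col] * y[self.col]   # row col is just v_col * y_col
        return out

def basis_inverse_apply(etas, b):
    """Compute A_x^{-1} b, given A_x^{-1} = E_t ... E_1 stored as [E_1, ..., E_t]."""
    y = np.asarray(b, dtype=float)
    for e in etas:                 # rightmost factor E_1 acts first
        y = e.apply(y)
    return y

# After a pivot on row r with entering column d = basis_inverse_apply(etas, A[:, k]),
# one appends Eta(r, v), where v[i] = -d[i]/d[r] for i != r and v[r] = 1/d[r].
```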

An important part of the algorithm of the simplex method is the strategy used to select the vectors $ A _ {k} $ for introduction into the basis. On the one hand, it must promote the reduction of the memory used for the description of $ A _ {x} ^ {-1} $; on the other, it must prevent the selection of an ill-conditioned basis.
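By way of illustration only (the article does not prescribe a particular rule), the following hypothetical selection routine modifies the plain "most negative $ \Delta _ {k} $" rule to skip candidates whose ratio-test pivot element would be very small, since tiny pivots tend to produce ill-conditioned bases:

```python
import numpy as np

def choose_entering(delta, X, x0, pivot_tol=1e-7, eps=1e-12, opt_tol=1e-9):
    """Return the index k of the vector A_k to enter the basis, or None if optimal."""
    order = sorted((d, k) for k, d in enumerate(delta) if d < -opt_tol)
    for _, k in order:                    # candidates, most negative Delta_k first
        pos = X[:, k] > eps
        if not pos.any():
            return k                      # unbounded edge; the caller handles it
        ratios = np.full(len(x0), np.inf)
        ratios[pos] = x0[pos] / X[pos, k]
        r = int(np.argmin(ratios))
        if X[r, k] > pivot_tol:           # accept only a safely large pivot element
            return k
    return order[0][1] if order else None # all pivots tiny: fall back to best Delta_k
```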

There are implementations of the simplex method for the solution of linear programming problems with sparse constraint matrices in which $ m $ and $ n $ are of the order of thousands and tens of thousands, respectively.

Several variants of the simplex method have been developed that take into account the peculiarities of various special classes of linear programming problems (block problems, transportation problems and others).

In spite of the fact that the simplex method is theoretically not efficient (its worst-case running time on the class of linear programming problems is exponential, although the algorithmic complexity of the class as a whole is only polynomial), up to now (1990) no serious competitors have been suggested.


Comments

For negative results on the worst-case performance of the simplex algorithm, see [a1]; for positive results on its average-case performance, see [a2], [a3]. Alternative algorithms for linear programming with polynomial worst-case running time have been proposed by L.G. Khachiyan [a4] and N. Karmarkar [a5]. While Khachiyan's result settled the question of the computational complexity of linear programming, Karmarkar's method appears to be competitive in practice with the simplex method. For a survey of the simplex algorithm, the Karmarkar algorithm (interior-point methods) and ellipsoid methods in relation to each other, cf. [a8].

References

[a1] V.L. Klee, G.J. Minty, "How good is the simplex algorithm?" O. Shisha (ed.) , Inequalities , III , Acad. Press (1972) pp. 159–175
[a2] K.H. Borgwardt, "The average number of pivot steps required by the simplex-method is polynomial" Z. Oper. Res. , 26 (1982) pp. 157–177
[a3] R. Shamir, "The efficiency of the simplex method: a survey" Management Science , 33 : 3 (1987) pp. 301–334
[a4] L.G. Khachiyan, "A polynomial algorithm in linear programming" Soviet Math. Dokl. , 20 : 1 (1979) pp. 191–194 Dokl. Akad. Nauk SSSR , 244 (1979) pp. 1093–1096
[a5] N. Karmarkar, "A new polynomial-time algorithm for linear programming" Combinatorica , 4 : 4 (1984) pp. 373–395
[a6] A.R.G. Heesterman, "Matrices and simplex algorithms" , Reidel (1983)
[a7] S. Zionts, "Linear and integer programming" , Prentice-Hall (1974)
[a8] M.J. Todd, "Recent developments and new directions in linear programming" M. Iri (ed.) K. Tanabe (ed.) , Mathematical Programming , Kluwer (1989) pp. 109–157
This article was adapted from an original article by E.G. Gol'shtein (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.