Simplex method
Latest revision as of 06:35, 7 May 2022


method of sequential plan improvement

A method for solving the general linear programming problem:

$$ \sum _ {j = 1 } ^ { n } c _ {j} x _ {j} \mapsto \max ; \ \ \sum _ {j = 1 } ^ { n } A _ {j} x _ {j} = A _ {0} ; $$

$$ x _ {j} \geq 0,\ j = 1, \dots, n, $$

where $ A _ {j} = ( a _ {1j}, \dots, a _ {mj} ) ^ {T} $ for $ j = 0, \dots, n $.

The simplex method is the most widespread linear programming method. It consists of moving over adjacent vertices of the polyhedral set defined by the constraints of the linear programming problem and requires a finite sequence of iterations. The basis of a vertex $ x = ( x _ {1}, \dots, x _ {n} ) $ of the polyhedral feasible set of the problem is the system of $ m $ linearly independent vectors $ A _ {s _ {1} }, \dots, A _ {s _ {m} } $ ($ s _ \alpha \geq 1 $, $ \alpha = 1, \dots, m $) such that $ x _ {j} = 0 $ if $ j \notin \{ s _ {1}, \dots, s _ {m} \} $. The input for each iteration of the method is a combination of the basis $ A _ {x} = ( A _ {s _ {1} }, \dots, A _ {s _ {m} } ) $ of $ x $, the parameters $ x _ {ij} $ defined by

$$ A _ {j} = \ \sum _ {i = 1 } ^ { m } x _ {ij} A _ {s _ {i} } ,\ \ j = 0, \dots, n $$

(in particular, $ x _ {i0} = x _ {s _ {i} } $ are the basis components of $ x $), and the parameters

$$ \Delta _ {j} = \ \sum _ {i = 1 } ^ { m } c _ {s _ {i} } x _ {ij} - c _ {j} ,\ \ j = 1, \dots, n. $$
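As a concrete illustration, the parameters $ x _ {ij} $ and $ \Delta _ {j} $ can be computed by solving $ A _ {x} x _ {j} = A _ {j} $ column by column. The following minimal sketch follows the definitions above on a small hypothetical instance (the problem data, variable names and `solve` helper are illustrative, not from the article):

```python
from fractions import Fraction

# Hypothetical instance: maximize 3*x1 + 2*x2 subject to
#   x1 + x2 + x3 = 4,  x1 + 3*x2 + x4 = 6,  x >= 0
# (x3, x4 are slack variables).
c = [Fraction(v) for v in (3, 2, 0, 0)]
A = [[Fraction(v) for v in row] for row in ([1, 1, 1, 0], [1, 3, 0, 1])]
b = [Fraction(4), Fraction(6)]       # A_0
basis = [2, 3]                       # 0-based indices s_1, s_2 (columns A_3, A_4)

def solve(B, rhs):
    """Solve B y = rhs by Gauss-Jordan elimination with partial pivoting."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(B)]
    for col in range(n):
        piv = max(range(col, n), key=lambda q: abs(M[q][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

m = len(b)
B = [[A[i][s] for s in basis] for i in range(m)]        # basis matrix A_x
x_cols = [solve(B, [A[i][j] for i in range(m)])         # x_{ij}, j = 1..n
          for j in range(len(c))]
x0 = solve(B, b)                                        # x_{i0}: basis components
delta = [sum(c[basis[i]] * x_cols[j][i] for i in range(m)) - c[j]
         for j in range(len(c))]
# delta == [-3, -2, 0, 0]: Delta_1 and Delta_2 are negative, so the
# starting vertex (0, 0, 4, 6) is not yet optimal.
```

Exact rational arithmetic (`Fraction`) is used so that the parameters come out exactly; a production implementation would work with floating point and a factorized basis instead.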

If $ \Delta _ {j} \geq 0 $ for all $ j $, then $ x $ is the desired solution of the problem. Otherwise one chooses a negative parameter $ \Delta _ {k} $. If none of the $ x _ {ik} $, $ i = 1, \dots, m $, are positive, then the problem is unsolvable, due to the unboundedness of the objective function of the problem on its polyhedral set. In the case when some $ x _ {ik} $ is positive, $ x $ is replaced by $ x ^ \prime = x + \theta x ^ {k} $, where

$$ x ^ {k} = ( x _ {1} ^ {k} ,\dots, x _ {n} ^ {k} ),\ \ x _ {s _ {i} } ^ {k} = - x _ {ik} ,\ \ i = 1, \dots, m,\ \ x _ {k} ^ {k} = 1, $$

the remaining components of $ x ^ {k} $ being zero, and

$$ \theta = \ \min _ {i: x _ {ik} > 0 } \ \frac{x _ {i0} }{x _ {ik} } = \ \frac{x _ {r0} }{x _ {rk} } . $$

Then the vertex $ x ^ \prime $ has a basis $ A _ {x ^ \prime } $, which differs from $ A _ {x} $ in that $ A _ {s _ {r} } $ is replaced by $ A _ {k} $. The parameters $ x _ {ij} ^ \prime $ and $ \Delta _ {j} ^ \prime $ for $ A _ {x ^ \prime } $ are expressed in terms of $ x _ {ij} $ and $ \Delta _ {j} $ by simple recurrence formulas.
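One full iteration of the scheme just described (ratio test, direction $ x ^ {k} $, move to the adjacent vertex, basis exchange) can be sketched as follows. The two-constraint instance and all names are hypothetical illustrations, not from the article:

```python
from fractions import Fraction

# Hypothetical instance: maximize 3*x1 + 2*x2 subject to
#   x1 + x2 + x3 = 4,  x1 + 3*x2 + x4 = 6,  x >= 0.
c = [Fraction(v) for v in (3, 2, 0, 0)]
x = [Fraction(v) for v in (0, 0, 4, 6)]    # current vertex, basis {A_3, A_4}
basis = [2, 3]                             # 0-based basis indices s_1, s_2
k = 0                                      # Delta_1 = -3 < 0, so A_1 enters
# The basis matrix here is the identity, so x_{ik} for the entering
# column A_1 are just its constraint coefficients:
xk_col = [Fraction(1), Fraction(1)]        # x_{ik}, i = 1, 2
x0 = [x[s] for s in basis]                 # x_{i0}: basis components of x

# Ratio test: theta = min over i with x_{ik} > 0 of x_{i0} / x_{ik}.
candidates = [(x0[i] / xk_col[i], i)
              for i in range(len(basis)) if xk_col[i] > 0]
if not candidates:
    raise ValueError("objective unbounded: no positive x_{ik}")
theta, r = min(candidates)                 # r is the leaving position

# Direction x^k: component k is 1, components s_i are -x_{ik}, rest 0.
direction = [Fraction(0)] * len(x)
direction[k] = Fraction(1)
for i, s in enumerate(basis):
    direction[s] = -xk_col[i]

x_new = [xi + theta * d for xi, d in zip(x, direction)]
basis_new = basis[:]
basis_new[r] = k                           # A_{s_r} leaves, A_k enters
# x_new == [4, 0, 0, 2]: the adjacent vertex, with objective value 12.
```

On this instance a single pivot already reaches the optimum; in general the step is repeated with the recurrently updated parameters.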

The case $ \Delta _ {j} \geq 0 $ for all $ j $ means that along every edge of the polyhedral set beginning at $ x $, the objective function does not increase. The two remaining cases correspond to the existence of an edge along which the objective function increases: when none of the $ x _ {ik} $ is positive, this edge is a ray, and when some $ x _ {ik} $ is positive, it is an interval with $ x ^ \prime $ being the other end. Iterations are continued until one obtains the optimal vertex or has to conclude that the problem is unsolvable.

For problems of sufficiently large size, one usually employs another algorithmic implementation of the simplex method, in which the input of every iteration is based on the inverse matrix $ A _ {x} ^ {- 1} $ of $ A _ {x} $ (the inverse matrix method). It is designed for linear programming problems whose matrix $ A = ( A _ {1}, \dots, A _ {n} ) $ is sparse (the number of non-zero entries $ a _ {ij} $ is small compared to $ mn $). Sparse matrices can be kept in the computer's memory in compact form, in which only the non-zero elements and their positions are stored. Special ways of storing the inverse of the basis have been developed that reduce the memory used for the description of $ A _ {x} ^ {- 1} $. They are based on representing $ A _ {x} ^ {- 1} $ as a product of matrix multipliers (multiplicators) that differ from the identity matrix in only one column. The number of non-zero elements of the multiplicators depends on the order in which the vectors are introduced into the basis. Therefore, after a certain number of multiplicators has accumulated, a so-called reinversion is organized, which gives a shorter multiplicative representation of $ A _ {x} ^ {- 1} $.
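The multiplicative representation can be sketched as follows. Each multiplier is stored as a single column together with its position, and $ A _ {x} ^ {- 1} v $ is obtained by applying the factors in turn. The function names and the naive pivot-row choice below are illustrative assumptions, not from the article:

```python
# Each multiplier E differs from the identity only in column p, so we
# store just (p, col). Applying E to a vector costs O(m) and touches
# only the nonzeros of col -- this is what makes the form compact.
def apply_eta(p, col, v):
    vp = v[p]
    w = [vi + vp * ci for vi, ci in zip(v, col)]
    w[p] = vp * col[p]
    return w

def apply_inverse(etas, v):
    """Compute A_x^{-1} v, with A_x^{-1} = E_t ... E_1 stored as factors."""
    for p, col in etas:
        v = apply_eta(p, col, v)
    return v

def add_column(etas, used, a):
    """Introduce column a into the basis, appending one new multiplier."""
    v = apply_inverse(etas, a)          # current inverse times entering column
    # Naive pivot-row choice for illustration: first unused nonzero row.
    p = next(i for i, vi in enumerate(v) if vi != 0 and i not in used)
    used.add(p)
    col = [-vi / v[p] for vi in v]
    col[p] = 1 / v[p]
    etas.append((p, col))

# Build A_x^{-1} for the basis matrix A_x = [[2, 0], [1, 1]] by entering
# its columns one at a time, starting from the identity basis.
etas, used = [], set()
add_column(etas, used, [2.0, 1.0])      # enter A_1 = (2, 1)^T
add_column(etas, used, [0.0, 1.0])      # enter A_2 = (0, 1)^T
# apply_inverse now maps the basis columns back to the identity columns.
```

In a real code the factors are kept sparse and periodically discarded in favour of a freshly computed, shorter factorization, as described above.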

An important part of the algorithm of the simplex method is the strategy used to select vectors $ A _ {k} $ for introduction into the basis. On the one hand, it must promote the reduction of the memory used for the description of $ A _ {x} ^ {- 1} $, and on the other, it must prevent hitting upon an ill-conditioned basis.

There are implementations of the simplex method for the solution of linear programming problems having sparse constraint matrices, with $ m $ and $ n $ of the order of thousands and tens of thousands, respectively.

Several variants of the simplex method have been developed that take into account the peculiarities of various special classes of linear programming problems (block problems, transportation problems and others).

In spite of the fact that the simplex method is theoretically not sufficiently effective (its worst-case efficiency estimate on the class of linear programming problems is exponential, although the algorithmic complexity of the class as a whole is only polynomial), until now (1990) no serious competitors have been suggested.

References

[1] D.B. Yudin, E.G. Gol'shtein, "Linear programming" , Israel Program Sci. Transl. (1965) (Translated from Russian)
[2] G.B. Dantzig, "Linear programming and extensions" , Princeton Univ. Press (1963)
[3] S.A. Ashmanov, "Linear programming" , Moscow (1981) (In Russian)

Comments

For negative results on the worst-case performance of the simplex algorithm, see [a1]; for positive results on its average-case performance, see [a2], [a3]. Alternative algorithms for linear programming with a polynomial-time worst-case behaviour have been proposed by L.G. Khachiyan [a4] and N. Karmarkar [a5]. While Khachiyan's result settled the question of the computational complexity of linear programming, Karmarkar's method seems to be practically competitive with the simplex method. For a recent survey of the simplex algorithm, the Karmarkar algorithm (interior algorithms) and ellipsoid methods in relation to each other, cf. [a8].

References

[a1] V.L. Klee, G.J. Minty, "How good is the simplex algorithm?" O. Shisha (ed.) , Inequalities , III , Acad. Press (1972) pp. 159–175
[a2] K.H. Borgwardt, "The average number of pivot steps required by the simplex-method is polynomial" Z. Oper. Res. , 26 (1982) pp. 157–177
[a3] R. Shamir, "The efficiency of the simplex method: a survey" Management Science , 33 : 3 (1987) pp. 301–334
[a4] L.G. Khachiyan, "A polynomial algorithm in linear programming" Soviet Math. Dokl. , 20 : 1 (1979) pp. 191–194; Dokl. Akad. Nauk SSSR , 244 (1979) pp. 1093–1096
[a5] N. Karmarkar, "A new polynomial-time algorithm for linear programming" Combinatorica , 4 : 4 (1984) pp. 373–395
[a6] A.R.G. Heesterman, "Matrices and simplex algorithms" , Reidel (1983)
[a7] S. Zionts, "Linear and integer programming" , Prentice-Hall (1974)
[a8] M.J. Todd, "Recent developments and new directions in linear programming" M. Iri (ed.) K. Tanabe (ed.) , Mathematical Programming , Kluwer (1989) pp. 109–157
How to Cite This Entry:
Simplex method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Simplex_method&oldid=18646
This article was adapted from an original article by E.G. Gol'shtein (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article