Algorithm, computational complexity of an

A function giving a numerical estimate of the difficulty (the amount of time or storage required) of applying an algorithm to its inputs. The notion is made precise by the concept of a cost function (step-counting function): a decidable relation between the objects to which the algorithm is applicable and the natural numbers, whose domain of definition coincides with the domain of applicability of the algorithm.

Usually, the time and space characteristics of algorithmic processes are considered. Thus, for a Turing machine $ M $ the time cost function (duration of work) $ T _ {M} (P) $ is the number $ t $ of cycles of work of $ M $ required for the conversion of the initial configuration of $ P $ into the final configuration; the memory cost function (or space function) $ S _ {M} (P) $ is defined as the number of cells on the tape scanned by the head of the machine. Time and storage costs for normal algorithms (cf. Normal algorithm), iterative arrays, multi-head and multi-tape Turing machines, etc., are defined in a similar manner.
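As an informal illustration, the following sketch (in Python; the transition-table encoding and the toy unary-successor machine are assumptions of the sketch, not a standard convention) simulates a one-tape Turing machine and reports both cost functions: the number of steps $ T_{M}(P) $ and the number of tape cells visited $ S_{M}(P) $.

```python
# Minimal sketch: a one-tape Turing machine simulator that reports the
# time cost T_M(P) (number of steps) and the space cost S_M(P) (number of
# tape cells visited).  Encoding and example machine are illustrative.

def run(machine, tape, blank="_", start="q0", halt="halt", max_steps=10**6):
    """machine: dict (state, symbol) -> (new_symbol, move in {-1,0,+1}, new_state)."""
    cells = {i: s for i, s in enumerate(tape)}   # sparse tape
    state, head, steps = start, 0, 0
    visited = {0}
    while state != halt:
        if steps >= max_steps:
            raise RuntimeError("step budget exceeded (machine may not halt)")
        symbol = cells.get(head, blank)
        new_symbol, move, state = machine[(state, symbol)]
        cells[head] = new_symbol
        head += move
        visited.add(head)
        steps += 1
    return steps, len(visited)            # T_M(P), S_M(P)

# Example: a machine appending a '1' to a block of 1's (unary successor).
succ = {
    ("q0", "1"): ("1", +1, "q0"),   # scan right over the block of 1's
    ("q0", "_"): ("1",  0, "halt"), # write one more 1 and halt
}

print(run(succ, "111"))   # (4, 4): time 4 and space 4 for the input P = 111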

The common feature of such cost functions is the fact that there exists an effective procedure establishing, for any algorithm $ \alpha $ (i.e. in particular for a Turing machine or, rather, for its program), any input $ x $ and any non-negative integer $ t $, whether or not the process of applying $ \alpha $ to $ x $ terminates with complexity $ t $. This observation leads to the abstract theory of computational complexity [1]. An effective procedure $ r $ is called a computational measure if: 1) it always ends in the result 0 or 1 when applied to triples of the form $ \langle \textrm{algorithm, input, natural number} \rangle $; and 2) for any algorithm $ \alpha $ and any input $ x $ the equality $ r( \alpha , x, t) = 1 $ holds for at most one natural number $ t $, and such a $ t $ exists if and only if the process of applying $ \alpha $ to $ x $ eventually terminates. The cost function $ R_\alpha $ with respect to the measure $ r $ is defined at $ x $ if and only if $ r( \alpha , x, t) = 1 $ for some $ t $, and then $ R_\alpha (x) = t $.
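Continuing the sketch above (and reusing its run() simulator and succ machine), one schematic rendering of the time measure $ r $ and of the induced cost function $ R_\alpha $ might look as follows; this is only an illustration of the definition for the time measure of that simulator, not a standard implementation.

```python
# Sketch of the time measure r(alpha, x, t) for the simulator above:
# r = 1 exactly when the machine halts on input x in precisely t steps.
# Assumes run() and succ from the previous sketch are in scope.

def r(machine, x, t):
    try:
        steps, _ = run(machine, x, max_steps=t)   # never simulate beyond t steps
    except RuntimeError:                          # did not halt within t steps
        return 0
    return 1 if steps == t else 0

def cost(machine, x, t_bound=10**6):
    """R_alpha(x): the unique t with r(machine, x, t) = 1, if one exists."""
    for t in range(t_bound):
        if r(machine, x, t) == 1:
            return t
    return None   # no witness found within the search bound

print(cost(succ, "111"))   # 4, matching T_M(P) computed above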

This last equality is equivalent to the statement "the computational complexity of $ \alpha $ on $ x $ is $ t $ (in the measure $ r $)".

Given some computational measure, one can consider the complexity of computation of a given function $ f $ — for example, that of finding an algorithm $ \alpha $ which computes $ f $ "better than other algorithms". However, as exemplified by the speed-up theorem (see below), such a formulation is not always well-posed. The real problem may be the description of the rate of growth of the cost function $ R_\alpha $, where $ \alpha $ computes $ f $. For example, find upper and lower bounds for the computational complexity of $ f $, i.e. two functions $ G(x) $ and $ g(x) $ such that there exists an algorithm $ \alpha $ computing the function $ f $ for which $ R_\alpha (x) \leq G(x) $, and such that for any algorithm $ \beta $ which computes $ f $ the function $ R_\beta $ in a certain sense majorizes $ g $.

Another important branch of the theory of computational complexity is the study of complexity classes — sets of functions that admit computations whose complexity does not exceed some bound from the family of bounds defining the class. This branch also includes problems of the comparative computational complexity of various types of algorithms (automata): passing to another algorithmic system and another measure of complexity is usually equivalent to considering a new measure for the original algorithmic system.

Below, a few fundamental results are given which are independent of the choice of the measure of complexity (and of the formalization), as long as there exists, for example, an effective mutual simulation between the algorithms of the type under consideration and ordinary Turing machines. (For the sake of simplicity, it is assumed that the subject is the computation time of Turing machines computing functions from non-negative integers to non-negative integers.) Let $ T $ and $ h $ be computable integer-valued functions (cf. Computable function) on the domain of the algorithms, and let $ f $ be a function defined on this domain which can assume only two values — say, 0 and 1.

First, there exist functions whose computation may be as complex as one pleases. More exactly, for any function $ T $ there exists a computable function $ f $ such that for any algorithm $ \alpha $ which computes $ f $, the inequality

$$ R _ \alpha ( x ) > T ( x ) $$

is false only in a finite number of cases.

Secondly, there exist functions, any computation of which may in principle be improved as much as one pleases. More precisely, the speed-up theorem is valid: For any function $ h $ (e.g. $ h(t) = 2^{t} $) it is possible to find a computable function $ f $ such that for any algorithm $ \alpha $ which computes $ f $ it will always be possible to find (no effective procedure is assumed) an algorithm $ \beta $ which also computes $ f $ and such that for all $ x $ (except for a finite set)

$$ h ( R _ \beta ( x ) ) < R _ \alpha ( x ) . $$

In case $ h(t) = 2 ^ {t} $ this yields

$$ R _ \beta ( x ) < \mathop{\rm log} _ {2} R _ \alpha ( x ) $$

for almost all $ x $.

For the computational measures of time and space (for Turing machines), the result of the speed-up theorem is valid for most computations in the following form: if $ f $ is computable with complexity $ R(n) $, it is also computable with complexity $ R(n)/2 $ (one says that $ f $ is computable with complexity $ R(n) $ if, for some $ \alpha $ which computes it, $ R_\alpha (P) \leq R(n) $ for all words $ P $ of length $ n $ in the alphabet under consideration). For this reason the time and space computational complexities of concrete functions are usually given as order-of-magnitude estimates. Some specific results are quoted below.
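A small follow-up to the simulator sketch above (reusing its run() and succ): running the toy successor machine on inputs of increasing length exhibits the linear order of growth of its time and space costs, which is the kind of order-of-magnitude statement meant here.

```python
# Constant factors are model-dependent, so what is usually reported is the
# growth order of the cost.  For the unary-successor machine both costs
# grow linearly with the input length n.

for n in (4, 8, 16, 32):
    t, s = run(succ, "1" * n)
    print(n, t, s)        # t = n + 1 and s = n + 1: both of order n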

The time to decide the equality of words (of length $ n $) is of order $ n^{2} $ for Turing machines and normal algorithms; any property of words which may be decided on Turing machines in time asymptotically less than $ n \mathop{\rm log}_{2} n $ can also be decided in time of order $ n $; computations by a Turing machine taking time $ T(n) $ (with $ T(n) $ of order at least $ n^{2} $) can be simulated by a Turing machine with space $ \sqrt {T(n)} $. Furthermore, the multiplication of two $ n $-digit numbers on multi-tape Turing machines can be performed in time $ n^{1 + \epsilon} $, but not in time of order $ n $; iterative arrays, however, can achieve this in real time, i.e. the $ i $-th digit of the product is produced before the $ (i+1) $-st digits of the factors are read. When algorithmic processes of one type are modeled by processes of another type, the computational complexity usually changes. Thus, a reduction in the dimension of an iterative array results in longer computation time. Multi-head Turing machines and multi-tape Turing machines can simulate each other in real time. Many types of algorithms can be simulated on Turing machines in such a way that the computation time increases polynomially (usually quadratically), with an insignificant increase in space complexity (usually an additional constant factor).
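For comparison, in the RAM model (not the Turing-machine model discussed above) one concrete way to beat the schoolbook $ n^{2} $ bound for multiplying $ n $-digit numbers is Karatsuba's method, of order $ n^{\mathop{\rm log}_{2} 3} \approx n^{1.585} $, which fits within the $ n^{1+\epsilon} $ bound mentioned; the sketch below is only an illustration of that idea.

```python
# Illustration only (RAM model, not the multi-tape Turing-machine algorithms
# referred to in the text): Karatsuba multiplication uses three recursive
# half-size products instead of four, one concrete way to beat the
# schoolbook n^2 bound for n-digit numbers.

def karatsuba(x, y):
    if x < 10 or y < 10:                 # single-digit base case
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    xh, xl = divmod(x, p)
    yh, yl = divmod(y, p)
    a = karatsuba(xh, yh)                # product of the high parts
    b = karatsuba(xl, yl)                # product of the low parts
    c = karatsuba(xh + xl, yh + yl)      # (xh+xl)(yh+yl) = a + b + cross terms
    return a * p * p + (c - a - b) * p + b

print(karatsuba(123456789, 987654321) == 123456789 * 987654321)   # True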

The complexity approach is useful in studying certain hierarchies of classes of general recursive functions (cf. General recursive function), in connection with their logical and algebraic features. For instance, primitive recursive functions coincide with the functions computable by Turing machines in space bounded by a primitive recursive function. The existence of arbitrarily complex functions is established by diagonalization methods. Natural analogues for such lower bounds on computational complexity were found for the validity problem in certain decidable theories and in areas related to mathematical linguistics. However, for many problems of practical importance the problem of obtaining non-trivial lower bounds presents considerable difficulties. This is the case, in particular, for certain combinatorial problems which are formalized (in one variant) as problems of finding, out of all words of a given word length, those which satisfy a predicate computable in polynomial time. One can also mention the complexity of the computations performed by non-deterministic automata and probabilistic automata (cf. Automaton, probabilistic). In the non-deterministic case the computational device can choose both ways when presented with a choice of how to continue the computation, by "splitting" into two independent copies, each of which has the same capabilities as its parent. Its computational complexity is understood to mean the complexity of the "simplest" successful computation trace. In the probabilistic case, the actual course of the computation is determined not only by the program and the argument, but also by a random sequence of numbers, and the result should coincide, to a great degree of probability, with the value of the function being computed. In both cases the computational complexity can sometimes be proved to be less than that of deterministic computations for the same problem.
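A classical illustration of the probabilistic phenomenon mentioned at the end of the preceding paragraph is Freivalds' check of a matrix product: verifying $ AB = C $ by recomputation costs of the order of $ n^{3} $ arithmetic operations, while the randomized check below uses of the order of $ n^{2} $ operations per round and errs with probability at most $ 1/2 $ per round. The sketch is an illustration only.

```python
# Freivalds' probabilistic check of a matrix product.  Checking A(Br) == Cr
# for a random 0/1 vector r costs about n^2 operations and errs with
# probability <= 1/2 per round, so k independent rounds err with
# probability <= 2**-k.

import random

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds(A, B, C, rounds=20):
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False                 # certainly A @ B != C
    return True                          # probably A @ B == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]                 # the true product A @ B
print(freivalds(A, B, C))                # True (with overwhelming probability)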

The quality of algorithms is evaluated not only by their computational complexity, but also by the complexity of description of an algorithm (cf. Algorithm, complexity of description of an). In some publications both approaches are combined.

References

[1] J. Hartmanis, J.E. Hopcroft, "An overview of the theory of computational complexity", J. ACM, 18 (1971), pp. 444–475.

Comments

The area has considerably expanded from the state described in the article above. Two key references [a1], [a5] are listed below.

The results on Turing machines referred to, up to the result on multiplication, hold only for Turing machines in the original version (a single input/work tape, no extra work tapes). In the West, the standard Turing machine is nowadays the (off-line) model with multiple work tapes. Turing machines are useful, because of their simplicity, for obtaining lower bounds on the computational complexity of problems. Upper bounds, obtained by exhibiting actual algorithms, are usually based on the RAM (random access machine), which models the "von Neumann" computer more realistically. The two alternatives can simulate each other with a constant multiplicative factor in space and a polynomial time overhead. The latter property characterizes all "reasonable" models (the First Machine Class) for sequential computations (the Invariance Thesis). The determination of such overheads is a subject of the theory of machine models. For instance, multi-tape Turing machines with one-way input (no backup on the input tape) can be simulated in square time by a Turing machine with one-way input and a single work tape in an obvious way, and it has been shown that less than square time is insufficient [a2], [a3]. One can simulate $ k $ work tapes by 2 work tapes in time of order $ n \mathop{\rm log} n $; it is not known whether this is optimal, but the lower bound is non-linear. In contrast, multicounter machines can be simulated in real time (step-for-step) by a Turing machine with a single work tape [a4]. (A counter is a storage device which can hold any integer and supports unit increments, unit decrements, and a check for zero.)

The Invariance Thesis justifies identifying the class of problems which can be solved deterministically in polynomial time (denoted by $ \textrm{P} $) with the class of "tractable" problems. Adding the feature of non-determinism gives the class $ \textrm{NP} $ (non-deterministic polynomial time), which is known to contain a large variety of problems which one would like to solve in daily life. The best known deterministic algorithms for solving $ \textrm{NP} $-problems require exponential time. On the other hand, it is unknown whether $ \textrm{P} = \textrm{NP} $, i.e. whether these exponential-time algorithms can be replaced by deterministic polynomial-time ones. This question is generally considered the most important open problem in the theory of computation. It is known that the complexity of $ \textrm{NP} $ is exemplified in specific $ \textrm{NP} $-problems: these $ \textrm{NP} $-complete problems have the property that $ \textrm{P} = \textrm{NP} $ if and only if such a problem can be solved in deterministic polynomial time (is in $ \textrm{P} $).

Similar questions arise with respect to the relations between different space complexities, and the relations between time and space. The fundamental complexity classes related to space are PSPACE (NPSPACE), which consist of those problems which can be solved in polynomial space by deterministic (non-deterministic) machines from the First Machine Class. It is known (Savitch's theorem) that $ \textrm{PSPACE} = \textrm{NPSPACE} $. Clearly, $ \textrm{PSPACE} $ contains $ \textrm{NP} $, but it is unknown whether the inclusion is strict.
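The certificate characterization of $ \textrm{NP} $ underlying the above can be illustrated by a small sketch: a proposed satisfying assignment for a Boolean formula in conjunctive normal form can be verified in time polynomial (in fact linear) in the size of the formula, even though no polynomial-time method of finding such an assignment is known. The clause encoding used below (lists of signed integers) is merely a convention of the sketch.

```python
# The defining feature of NP: a proposed solution (certificate) is checkable
# in deterministic polynomial time.  Here a CNF formula is a list of clauses,
# each clause a list of nonzero integers (negative = negated variable);
# this encoding is an assumption of the sketch.

def check_sat_certificate(cnf, assignment):
    """Verify, in time linear in the formula size, that `assignment`
    (a dict variable -> bool) satisfies every clause of `cnf`."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
cnf = [[1, -2], [2, 3], [-1, -3]]
print(check_sat_certificate(cnf, {1: True, 2: True, 3: False}))   # True
print(check_sat_certificate(cnf, {1: True, 2: False, 3: True}))   # False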

There exists a large variety of models for parallel computation. Many of those models (the Second Machine Class) have the property that time on the parallel model is equivalent to space on the sequential model, up to a polynomial overhead (the Parallel Computation Thesis). Each problem in $ \textrm{NP} $ can be solved in deterministic polynomial time on a machine of the Second Machine Class. Other models seem to be even more powerful, albeit totally unrealistic. The power of the parallel models is rooted in the possible activation of at least an exponential amount of hardware in a polynomial number of computation cycles (called "time"). As soon as one enforces realistic bounds (e.g., because of the bounded speed of light) on the amount of hardware which can be activated by a computation in a given timespan, the Parallel Computation Thesis becomes invalid. For many specific problems it has been shown that they can be solved on a parallel device with a polynomial amount of hardware in time of order $ (\mathop{\rm log} n)^{k} $ (the complexity class $ \textrm{NC} $).
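As a schematic illustration of the $ \textrm{NC} $ pattern, the sum of $ n $ numbers can be computed in about $ \mathop{\rm log}_{2} n $ parallel rounds by combining disjoint pairs; the sketch below only counts the rounds such a parallel device would use, it does not itself run in parallel.

```python
# NC-style computation pattern: summing n numbers by pairwise combination
# needs only about log2(n) parallel rounds with a linear number of
# processors.  The sequential loop merely counts those rounds.

def parallel_sum_depth(values):
    rounds = 0
    while len(values) > 1:
        # one parallel round: all disjoint pairs are combined simultaneously
        pairs = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:
            pairs.append(values[-1])     # odd element carried to the next round
        values = pairs
        rounds += 1
    return values[0], rounds

print(parallel_sum_depth(list(range(16))))   # (120, 4): 16 numbers in 4 rounds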

References

[a1] M.R. Garey, D.S. Johnson, "Computers and intractability: a guide to the theory of NP-completeness", Freeman (1979).
[a2] W. Maass, "Combinatorial lower bound arguments for deterministic and nondeterministic Turing machines", Trans. Amer. Math. Soc., 292 (1985), pp. 675–693.
[a3] P.M.B. Vitányi, M. Li, "Tape versus stack and queue: the lower bounds", Information and Computation (to appear).
[a4] P.M.B. Vitányi, "An optimal simulation of counter machines", SIAM J. on Computing, 14 (1985), pp. 1–33.
[a5] K. Wagner, G. Wechsung, "Complexity theory", Reidel (1986).