Complexity theory

The classification of mathematical problems into decidable and undecidable ones is a most fundamental one. It is also of definite practical significance in discouraging attempts to design too-general systems, such as systems for deciding the equivalence of programs or the halting of a program.

However, this classification is too coarse in many respects. A finer classification of decidable problems in terms of their complexity is discussed below. Two problems might both be decidable, and yet one might be enormously more difficult to compute, which in practice might make it resemble an undecidable one. For example, if instances of reasonable size of one problem take only a few seconds to compute, whereas instances of comparable size of the other problem take millions of years (even on the best available computers), it is clear that the latter problem should be considered intractable compared with the former. Hence, having established the decidability of a particular problem, one should definitely study its complexity, that is, how difficult it is to settle specific instances of the problem. Complexity considerations are of crucial importance, for instance, in cryptography: if one is not able to decrypt a message within a certain time limit, one might as well forget the whole thing because, as time passes, the situation might change entirely. Related material is also discussed in the article Algorithm, computational complexity of an.

Thus, typical questions in the theory of computational complexity would be: Is the algorithm $ A _ {1} $ better than the algorithm $ A _ {2} $ (for the same problem) in the sense that it uses fewer resources, such as time or memory space? Does the problem $ P $ have a best algorithm? Is the problem $ P $ more difficult than the problem $ P _ {1} $ in the sense that every algorithm for solving $ P $ is more complex than some fixed reasonable algorithm for solving $ P _ {1} $?

Natural complexity measures are obtained by considering Turing machines (cf. Turing machine): How many steps (in terms of the length of the input) does a computation require? This is a natural formalization of the notion of time resource. Similarly, the number of squares visited during a computation constitutes a natural space measure. (Cf. Formal languages and automata and Computable function.)
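
As a concrete illustration, the following minimal Python sketch simulates a deterministic one-tape Turing machine and reports both measures; the transition-table encoding and the example machine are illustrative choices, not part of the formal definition.

```python
# A minimal deterministic Turing-machine simulator reporting the two
# natural measures discussed above: the number of steps taken (time)
# and the number of tape squares visited (space).

def run(delta, word, blank="_", start="q0", halt="qH"):
    tape = dict(enumerate(word))      # sparse tape: position -> symbol
    pos, state = 0, start
    steps, visited = 0, {0}
    while state != halt:
        state, write, move = delta[(state, tape.get(pos, blank))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        visited.add(pos)
        steps += 1
    return steps, len(visited)

# Example machine: scan right, rewriting every 0 as 1; halt on the blank.
delta = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("qH", "_", "R"),
}
print(run(delta, "0101"))  # (5, 6): 5 steps, 6 squares visited
```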

Such machine-oriented complexity measures are discussed in the second half of this article. The discussion is restricted to the basic Turing-machine model. Variations such as machines with many tapes and many heads, or random-access machines, are important in more detailed and specific complexity considerations. However, from the point of view of the most fundamental issues, the particular Turing-machine model chosen is irrelevant.

In the first half of this article, axiomatic complexity theory is studied. No specific complexity measure, such as time or memory space, will be defined. Instead, "abstract resource" used by an algorithm will be the term employed. The axioms applied are very natural. They also look very weak in the sense that they do not say much. However, quite a remarkable theory can be established on the basis of these axioms. The theory was initiated in [a1].

The third major aspect of complexity theory is not discussed at all: low-level complexity, or the complexity of some specific but practically important algorithms and problems. One is referred to [a5]–[a7] for this topic, as well as for more detailed information on the broad and highly developed area of complexity theory in general.

Consider an enumeration

$$ \tag{a1 } TM _ {1} , \dots, TM _ {i} , \dots $$

of all Turing machines. Enumeration (a1) determines an enumeration

$$ \tag{a2 } \overline{f} _ {1} , \dots, \overline{f} _ {i} , \dots $$

of all partial recursive functions of one variable (cf. Partial recursive function). An enumeration

$$ \tag{a3 } f _ {1} , \dots, f _ {i} , \dots $$

of all partial recursive functions of one variable is termed acceptable if and only if it is possible to go effectively from (a2) to (a3), and vice versa. In other words, given an index for a function in (a2), one can find an index for the same function in (a3), and vice versa.

A complexity measure is a pair $ CM = ( F, \Phi ) $, where $ F $ is an acceptable enumeration (a3) of partial recursive functions and $ \Phi $ is an infinite sequence

$$ \tag{a4 } \phi _ {1} , \dots, \phi _ {i} , \dots $$

of partial recursive functions such that the Blum axioms B1 and B2 are satisfied.

B1. For each $ i \geq 1 $, the domains of $ f _ {i} $ and $ \phi _ {i} $ coincide.

B2. The function $ g ( i, n, m) $ defined by

$$ g ( i, n, m) = \ \left \{ \begin{array}{cl} 1 & \textrm{ if } \phi _ {i} ( n) = m, \\ 0 & \textrm{ otherwise } , \\ \end{array} \right .$$

is recursive.

The function $ \phi _ {i} $ is referred to as the complexity function, or cost function, of $ f _ {i} $.

For instance, if one chooses $ \phi _ {i} ( n) $ to be the number of steps in the computation of a Turing machine for $ f _ {i} $ on the input $ n $, one clearly obtains a complexity measure. Similarly, a complexity measure results if one lets $ \phi _ {i} ( n) $ be the number of squares visited during the computation of the same Turing machine on the input $ n $, provided one considers a variant of Turing machines in which no machine loops using only a finite amount of tape.
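
For the step-counting measure, axiom B2 holds because $ g ( i, n, m) $ can be computed by a bounded simulation: run the $ i $-th machine on the input for at most $ m $ steps and check whether it halts at exactly step $ m $. A minimal Python sketch of this argument, reusing the transition-table encoding of the sketch above (the list `machines`, standing for an effective enumeration of transition tables, is an assumption of the illustration):

```python
# Why axiom B2 holds for the step-counting measure: g(i, n, m) is decided
# by a *bounded* simulation, which terminates even when machine i does not.
# `machines` stands for an assumed effective enumeration of transition
# tables in the encoding of the earlier sketch.

def halts_in_exactly(delta, word, m, blank="_", start="q0", halt="qH"):
    tape, pos, state = dict(enumerate(word)), 0, start
    steps = 0
    while state != halt and steps < m:
        state, write, move = delta[(state, tape.get(pos, blank))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return state == halt and steps == m   # halted at exactly step m

def g(i, word, m, machines):
    """The function g of axiom B2: 1 if phi_i(word) = m, else 0."""
    return 1 if halts_in_exactly(machines[i], word, m) else 0
```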

On the other hand, the choice $ \phi _ {i} = f _ {i} $ does not yield a complexity measure: Axiom B2 is not satisfied. The choice $ \phi _ {i} ( n) = 1 $ for all $ i $ and $ n $ does not yield a complexity measure because axiom B1 is not satisfied. These examples also show that the two axioms are independent.

Clearly, every partial recursive function $ f $ occurs infinitely many times in the sequence (a3). If $ f = f _ {i} $, then the cost function $ \phi _ {i} $ is associated to an algorithm (for instance, a Turing machine) for computing $ f $, rather than to the function $ f $ itself. If $ f = f _ {j} $ as well, then the cost function $ \phi _ {j} $ might have essentially smaller values than $ \phi _ {i} $, showing that the algorithm corresponding to $ f _ {j} $ is essentially better. When and how is such a speedup possible? This question of speedup is one of the most interesting ones in complexity theory.

No matter what complexity measure and amount of speedup one considers, there are recursive functions $ f $ such that an arbitrary algorithm for $ f $ can be sped up by that amount. Suppose $ f = f _ {i} $ and consider the algorithm determined by $ f _ {i} $ (for instance, the Turing machine $ TM _ {i} $) to be a particularly efficient one. This means that one regards the amount of resource defined by $ \phi _ {i} $ to be particularly small, in view of the complexity of the function $ f $. Suppose, further, that $ h ( x) $ is a very rapidly increasing function, for instance,

$$ h ( x) = \ 2 ^ {2 ^ {2 ^ {2 ^ {x} } } } . $$

Then there is another algorithm for $ f $ whose resource usage is smaller by the amount that $ h $ indicates. In other words, one also has $ f = f _ {j} $ and

$$ \phi _ {i} ( x) \geq \ 2 ^ {2 ^ {2 ^ {2 ^ {\phi _ {j} ( x) } } } } . $$

However, this does not necessarily hold for all $ x $, but only almost everywhere, that is, for all but finitely many $ x $. Of course, the same procedure can be repeated for $ f _ {j} $. This gives rise to another speedup, but now $ f _ {j} $ is the algorithm using "much" resource. The speedup can be repeated again for the resulting function, and so on ad infinitum.
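
In precise terms, the speedup theorem of [a1] can be stated as follows: for every total recursive function $ r ( x, y) $ there is a total recursive function $ f $ such that

$$ \forall i ( f = f _ {i} \Rightarrow \exists j ( f = f _ {j} \textrm{ and } r ( x, \phi _ {j} ( x)) \leq \phi _ {i} ( x) \textrm{ for almost all } x ) ) . $$

The example above corresponds to the choice $ r ( x, y) = h ( y) $.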

Thus, the speedup theorem implies that some functions $ f $ have no best algorithms: for every person's favourite algorithm $ f _ {i} $, a speedup is possible. However, this is only true for some functions $ f $ that might be considered "unnatural".

Now machine-oriented complexity theory is considered. Consider a Turing machine $ TM $ that halts on all inputs $ w $. The time-complexity function associated with $ TM $ is defined by

$$ t _ {TM} ( n) = \ \max \{ m : TM \textrm{ halts after } m \textrm{ steps with an input } w \textrm{ such that } | w | = n \} . $$

Thus $ t _ {TM} ( n) $ maps the set of non-negative integers into itself. One says that $ TM $ is polynomially bounded if and only if there is a polynomial $ P ( n) $ such that $ t _ {TM} ( n) \leq P ( n) $ holds for all $ n $. Denote by $ {\mathcal P} $ the family of languages acceptable by polynomially-bounded Turing machines. (So far only deterministic Turing machines have been considered; non-deterministic ones are introduced later.)

Although $ {\mathcal P} $ is defined as a family of languages, it can be visualized as the collection of problems for which there exists an algorithm operating in polynomial time. One can always identify a decision problem with the membership problem for the language of "positive instances".
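
As a minimal illustration (the choice of problem is arbitrary), the following Python sketch identifies the decision problem "is $ w $ a palindrome?" with the membership problem for the language of words equal to their reversal, a language in $ {\mathcal P} $:

```python
# A decision problem identified with a membership problem: "is w a
# palindrome?" becomes membership of w in L = { w : w = reverse(w) },
# a language in P, decided here in time linear in |w|.

def in_language(w):
    return w == w[::-1]

print(in_language("abba"))  # True:  "abba" is in L
print(in_language("abc"))   # False: "abc" is not in L
```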

The family $ {\mathcal P} $ is very natural from a mathematical point of view. This is seen from the fact that it is highly invariant with respect to the underlying model of computation. For instance, Turing machines $ TM _ {1} $ with several tapes are faster than ordinary Turing machines — that is, their time-complexity function assumes smaller values. However, if the time-complexity function of such a $ TM _ {1} $ is bounded from above by a polynomial $ P _ {1} ( n) $, one can (effectively) construct an ordinary Turing machine $ TM $ with a polynomial bound $ P ( n) $ accepting the same language as $ TM _ {1} $. (In general, $ P ( n) $ assumes greater values than $ P _ {1} ( n) $ but is still a polynomial.) Similarly, every language that is polynomially bounded with respect to any reasonable model of computation belongs to the family $ {\mathcal P} $, as defined above with respect to the ordinary Turing machine.

The family $ {\mathcal P} $ is also of crucial importance because languages outside $ {\mathcal P} $ can be visualized as being impossible to compute in practice. In fact, one says that a recursive language is intractable if it does not belong to $ {\mathcal P} $.

Clearly, languages outside $ {\mathcal P} $ are intractable from the practical point of view. The same can be said about such languages in $ {\mathcal P} $ for which the polynomial bound is a huge one. However, it would not be natural to draw the borderline between tractability and intractability somewhere inside $ {\mathcal P} $. Such a definition would also be time-varying: Drastic developments in computers could change it. On the other hand, the family $ {\mathcal P} $ provides a very natural characterization of tractability.

Now non-deterministic Turing machines are considered: when scanning a specific symbol in a specific state, the machine may have several possibilities for its behaviour. In other respects, a non-deterministic machine is defined like a deterministic one. A word $ w $ is accepted if and only if it gives rise to an accepting computation, independently of the fact that it might also give rise to computations leading to failure. Thus, in connection with non-deterministic machines in general, all roads to failure are disregarded if there is one possible road to success.

The time required by a non-deterministic Turing machine $ TM $ to accept a word $ w \in L ( TM) $ is defined to be the number of steps in the shortest computation of $ TM $ accepting $ w $. The time-complexity function of $ TM $ is now defined by

$$ t _ {TM} ( n) = \ \max \{ 1, m : TM \textrm{ accepts in time } m \textrm{ some } w \textrm{ with } | w | = n \} . $$

Thus, only accepting computations enter the definition of $ t _ {TM} ( n) $. If no words of length $ n $ are accepted, $ t _ {TM} ( n) = 1 $.

Further formal details of the definition of a non-deterministic Turing machine are omitted.

Having defined the time-complexity function, the notion of a polynomially-bounded non-deterministic Turing machine is defined exactly as before. Denote by $ {\mathcal N} {\mathcal P} $ the family of languages acceptable by non-deterministic polynomially-bounded Turing machines.

Problems in $ {\mathcal P} $ are tractable, whereas problems in $ {\mathcal N} {\mathcal P} $ have the property that it is tractable to check whether or not a good guess for the solution of the problem is correct. A non-deterministic Turing machine may be visualized as a device that checks whether or not a guess is correct: It makes a guess (or several guesses) at some stage during its computation, and the final outcome is acceptance only in case the guess was (or the guesses were) correct. Thus, a time bound for a non-deterministic Turing machine is, in fact, a time bound for checking whether or not a guess for the solution is correct.
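
A minimal Python sketch of this guess-and-check view, for the Hamiltonian circuit problem (one of the $ {\mathcal N} {\mathcal P} $-complete problems listed below); the adjacency-set encoding of graphs is an illustrative choice:

```python
# Guess-and-check for the Hamiltonian circuit problem: the non-deterministic
# "guess" is an ordering of the vertices; checking a guess takes only
# polynomial time, although no polynomial-time way to *find* a correct
# guess is known.

def check_hamiltonian_circuit(graph, order):
    """Does `order` list every vertex exactly once, with consecutive
    vertices (cyclically) joined by an edge?"""
    if len(order) != len(graph) or set(order) != set(graph):
        return False
    return all(order[(k + 1) % len(order)] in graph[order[k]]
               for k in range(len(order)))

# A 4-cycle: the guess (0, 1, 2, 3) is correct, the guess (0, 2, 1, 3) is not.
graph = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(check_hamiltonian_circuit(graph, (0, 1, 2, 3)))  # True
print(check_hamiltonian_circuit(graph, (0, 2, 1, 3)))  # False: 0-2 not an edge
```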

Clearly, $ {\mathcal P} $ is contained in $ {\mathcal N} {\mathcal P} $. However, it is not known whether or not the containment is proper. The problem of whether or not $ {\mathcal P} $ equals $ {\mathcal N} {\mathcal P} $ (the " $ {\mathcal P} = {\mathcal N} {\mathcal P} $?" problem) can justly be called the most celebrated open problem in the theory of computation. The significance of this question is due to the fact that many practically important problems are known to be in $ {\mathcal N} {\mathcal P} $, whereas it is not known whether or not they are in $ {\mathcal P} $. In fact, all known deterministic algorithms for these problems are exponential as far as time is concerned. Thus, a proof of $ {\mathcal P} = {\mathcal N} {\mathcal P} $ would make all of these problems tractable.

Non-deterministic Turing machines and guessing are not, as such, intended to model computation. Non-determinism is merely an auxiliary notion and, as will be seen, a very useful one. Indeed, if one wants to settle the question of whether or not $ {\mathcal P} = {\mathcal N} {\mathcal P} $, the following definitions and results show that it suffices to consider one particular language (it might be a favourite one!) and determine whether or not it is in $ {\mathcal P} $. There is a great variety of such so-called $ {\mathcal N} {\mathcal P} $-complete languages (see below), resulting from practically all areas of mathematics.

A language $ L _ {1} \subseteq \Sigma _ {1} ^ {*} $ is polynomially reducible to a language $ L _ {2} \subseteq \Sigma _ {2} ^ {*} $, in symbols $ L _ {1} \leq _ {p} L _ {2} $, if and only if there is a deterministic polynomially-bounded Turing machine $ TM $ translating words $ w _ {1} $ over $ \Sigma _ {1} $ into words $ w _ {2} $ over $ \Sigma _ {2} $ in such a way that $ w _ {1} $ being in $ L _ {1} $ is equivalent to its image $ w _ {2} $ being in $ L _ {2} $.

Observe that the Turing machine $ TM $ introduced in the previous definition must halt on all inputs; this is a consequence of $ TM $ being deterministic and polynomially bounded.

Clearly, whenever $ L _ {1} \leq _ {p} L _ {2} $ and $ L _ {2} $ is in $ {\mathcal P} $, then $ L _ {1} $ is also in $ {\mathcal P} $.
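
As an illustration of $ \leq _ {p} $, the following Python sketch reduces the vertex cover problem to the independent set problem (both are $ {\mathcal N} {\mathcal P} $-complete, cf. [a6]): a graph $ G $ with $ n $ vertices has a vertex cover of size $ k $ if and only if it has an independent set of size $ n - k $, so the translation $ ( G, k) \mapsto ( G, n - k) $ serves as the polynomially-bounded mapping. The brute-force deciders are exponential and serve only to test the reduction.

```python
# A classical polynomial reduction: VERTEX COVER <=_p INDEPENDENT SET.
# C is a vertex cover of G exactly when V \ C is an independent set,
# so (G, k) is mapped to (G, |V| - k); the map runs in polynomial time.
from itertools import combinations

def reduce_vc_to_is(graph, k):
    return graph, len(graph) - k

def has_vertex_cover(graph, k):          # exponential brute force
    return any(all(u in c or v in c for u in graph for v in graph[u])
               for c in combinations(graph, k))

def has_independent_set(graph, size):    # exponential brute force
    return any(all(v not in graph[u] for u, v in combinations(s, 2))
               for s in combinations(graph, size))

graph = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # the 4-cycle again
g, size = reduce_vc_to_is(graph, 2)
print(has_vertex_cover(graph, 2) == has_independent_set(g, size))  # True
```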

A language $ L $ is called $ {\mathcal N} {\mathcal P} $-hard if $ L ^ \prime \leq _ {p} L $ holds for every language $ L ^ \prime $ in $ {\mathcal N} {\mathcal P} $. $ L $ is termed $ {\mathcal N} {\mathcal P} $-complete if it is $ {\mathcal N} {\mathcal P} $-hard and, in addition, belongs to $ {\mathcal N} {\mathcal P} $.

$ {\mathcal N} {\mathcal P} $-complete languages can be visualized as representing the hardest problems in $ {\mathcal N} {\mathcal P} $. Moreover, to settle the question of whether or not $ {\mathcal P} = {\mathcal N} {\mathcal P} $, it suffices to decide whether or not an arbitrary $ {\mathcal N} {\mathcal P} $-complete language $ L $ is in $ {\mathcal P} $. Indeed, consider such a language $ L $. If $ L $ is not in $ {\mathcal P} $, then clearly $ {\mathcal P} \neq {\mathcal N} {\mathcal P} $. If $ L $ is in $ {\mathcal P} $, then the definition of $ {\mathcal N} {\mathcal P} $-completeness shows that every language in $ {\mathcal N} {\mathcal P} $ is also in $ {\mathcal P} $. But this means that $ {\mathcal P} = {\mathcal N} {\mathcal P} $.

The study of $ {\mathcal N} {\mathcal P} $-completeness was initiated in [a2] and [a4], whereas [a3] is a pioneering paper on complexity theory as a whole. One is referred to [a6] for a basic list of $ {\mathcal N} {\mathcal P} $-complete problems. Typical ones are: the traveling salesperson problem, the satisfiability problem for propositional calculus, the Hamiltonian circuit problem for graphs, the knapsack problem, the scheduling problem, and the membership problem for TOL languages (cf. $ L $-systems).

Even though a problem is decidable, it might still be intractable in the sense that any algorithm for solving it must use unmanageable amounts of computational resources. Intractability is defined formally by the condition of being outside $ {\mathcal P} $.

It has also been observed that, for very many important problems, all algorithms presently known operate in exponential time, whereas it has not been shown that these problems actually are intractable. All $ {\mathcal N} {\mathcal P} $-complete problems belong to this class. Indeed, establishing lower bounds is, in general, a difficult task in complexity theory. The complexity of a particular algorithm can usually be estimated, but it is harder to say something general about all algorithms for a particular problem, for instance, to give a lower bound for their complexity. Results such as the speedup theorem show that this might even be impossible in some cases. According to the theory of abstract complexity measures, one cannot expect to prove that any specific problem is intrinsically difficult in all measures. However, there are also a number of problems known to be intractable.

Space bounds are defined for Turing machines analogously to time bounds. The notations $ {\mathcal P} $-SPACE and $ {\mathcal N} {\mathcal P} $-SPACE stand for the families of languages acceptable by deterministic and non-deterministic Turing machines in polynomial space, respectively. The following relations hold:

$$ {\mathcal P} \subseteq {\mathcal N} {\mathcal P} \subseteq {\mathcal P} \textrm{-SPACE} = {\mathcal N} {\mathcal P} \textrm{-SPACE} . $$

It is a celebrated open problem whether or not the inclusions are strict.

References

[a1] M. Blum, "A machine-independent theory of the complexity of recursive functions" J. ACM , 14 (1967) pp. 322–336
[a2] S.A. Cook, "The complexity of theorem-proving procedures" , Proc. 3-rd Annual ACM Symp. Theory of Computing , ACM (1971) pp. 151–158
[a3] J. Hartmanis, R.E. Stearns, "On the computational complexity of algorithms" Trans. Amer. Math. Soc. , 117 (1965) pp. 285–306
[a4] R.M. Karp, "Reducibility among combinatorial problems" , R.E. Miller (ed.), J.W. Thatcher (ed.) , Complexity of Computer Computations , Plenum (1972) pp. 85–103
[a5] A.V. Aho, J.E. Hopcroft, J.D. Ullman, "The design and analysis of computer algorithms" , Addison-Wesley (1974)
[a6] M.R. Garey, D.S. Johnson, "Computers and intractability: a guide to the theory of NP-completeness" , Freeman (1979)
[a7] M. Machtey, P. Young, "An introduction to the general theory of algorithms" , North-Holland (1978)
[a8] A. Salomaa, "Computation and automata" , Cambridge Univ. Press (1986)
This article was adapted from an original article by G. Rozenberg and A. Salomaa (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.