Algorithm, computational complexity of an

From Encyclopedia of Mathematics


A function giving a numerical estimate of the difficulty (amount of time, storage, etc.) of applying an algorithm to its inputs. More precisely, the computational complexity of an algorithm is formalized by the concept of a cost function (step-counting function): a decidable relation between the objects to which the algorithm is applicable and the natural numbers, whose domain of definition coincides with the domain of applicability of the algorithm.

Usually, the time and space characteristics of algorithmic processes are considered. Thus, for a Turing machine $ M $ the time cost function (running time) $ T _ {M} (P) $ is the number $ t $ of work cycles of $ M $ required to convert the initial configuration determined by the input $ P $ into a final configuration; the memory cost function (or space function) $ S _ {M} (P) $ is the number of tape cells scanned by the head of the machine during this computation. Time and storage costs for normal algorithms (cf. Normal algorithm), iterative arrays, multi-head and multi-tape Turing machines, etc., are defined in a similar manner.
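
For a concrete picture of these two cost functions, here is a minimal sketch, in Python, of a one-tape Turing machine simulator that returns both $ T _ {M} (P) $ (number of work cycles) and $ S _ {M} (P) $ (number of cells scanned); the particular transition table, which appends a symbol to a unary word, is an assumption of the sketch and not part of the article.

```python
# Minimal one-tape Turing machine simulator (illustrative sketch).
# delta maps (state, symbol) -> (new_state, new_symbol, move), move in {-1, 0, +1}.

def run(delta, start, finals, tape_input, blank="_", max_steps=10_000):
    tape = dict(enumerate(tape_input))      # sparse tape
    head, state = 0, start
    visited = {0}                           # cells scanned by the head
    steps = 0
    while state not in finals:
        if steps >= max_steps:
            raise RuntimeError("step budget exhausted; machine may not halt")
        symbol = tape.get(head, blank)
        state, tape[head], move = delta[(state, symbol)]
        head += move
        visited.add(head)
        steps += 1
    return steps, len(visited)              # T_M(P), S_M(P)

# Hypothetical toy machine: append one '1' to a unary word.
delta = {
    ("q0", "1"): ("q0", "1", +1),           # walk right over the input
    ("q0", "_"): ("qf", "1", 0),            # write an extra '1' and halt
}
print(run(delta, "q0", {"qf"}, "111"))      # -> (4, 4): 4 cycles, 4 cells scanned
```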

The common feature of such cost functions is that there is an effective procedure which decides, for any algorithm $ \alpha $ (in particular, for a Turing machine, or rather its program), any input $ x $ and any non-negative integer $ t $, whether or not the process of applying $ \alpha $ to $ x $ terminates with complexity $ t $. This observation leads to the abstract theory of computational complexity [1]. An effective procedure $ r $ is called a computational measure if: 1) applied to any triple of the form $ \langle \textrm{ algorithm, input, natural number } \rangle $ it always terminates with result 0 or 1; and 2) for any algorithm $ \alpha $ and any input $ x $ the equality $ r ( \alpha , x, t) = 1 $ holds for at most one natural number $ t $, and such a $ t $ exists if and only if the process of applying $ \alpha $ to $ x $ eventually terminates. The cost function $ R _ \alpha $ of $ \alpha $ with respect to the measure $ r $ is then defined by: $ R _ \alpha (x) = t $ if and only if $ r ( \alpha , x, t) = 1 $.

The latter equality expresses the statement "the computational complexity of $ \alpha $ on $ x $ is $ t $ (in the measure $ r $)".
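
A step-counting measure can itself be pictured as a total 0/1-valued test on triples. In the sketch below, "algorithms" are modelled as Python generator functions that yield once per work cycle; the names measure, cost and succ_alg are assumptions of the sketch.

```python
# Illustrative step-counting measure r(alpha, x, t): result 1 exactly when
# algorithm alpha, applied to input x, halts after precisely t work cycles.
# "Algorithms" are modelled as generator functions that yield once per
# work cycle and then return their result.

def measure(alpha, x, t):
    gen = alpha(x)
    cycles = 0
    while cycles <= t:
        try:
            next(gen)                  # one more work cycle
            cycles += 1
        except StopIteration:
            return 1 if cycles == t else 0
    return 0                           # still running after t cycles

def cost(alpha, x, t_max=10_000):
    """Cost function R_alpha(x): the unique t with measure(alpha, x, t) = 1."""
    for t in range(t_max + 1):
        if measure(alpha, x, t) == 1:
            return t
    raise RuntimeError("no halt within t_max cycles")

def succ_alg(x):                       # hypothetical algorithm: successor
    for _ in range(x):                 # spends x work cycles counting
        yield
    return x + 1

print(cost(succ_alg, 5))               # -> 5
```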

Given some computational measure, one can consider the complexity of computing a given function $ f $, for example the problem of finding an algorithm $ \alpha $ which computes $ f $ "better than other algorithms". However, as the speed-up theorem below shows, such a formulation is not always well-posed. A more meaningful problem is to describe the rate of growth of the cost function $ R _ \alpha $, where $ \alpha $ computes $ f $: for example, to find upper and lower bounds for the computational complexity of $ f $, i.e. two functions $ G(x) $ and $ g(x) $ such that there exists an algorithm $ \alpha $ computing $ f $ for which $ R _ \alpha (x) \leq G(x) $, and such that for any algorithm $ \beta $ computing $ f $ the function $ R _ \beta $ in a certain sense majorizes $ g $.

Another important branch of the theory of computational complexity is the study of complexity classes: sets of functions for which there exist computations whose complexity does not exceed some bound from the set of bounds defining the class. This branch also includes questions of comparative computational complexity for various types of algorithms (automata): passing to another algorithmic system and another measure of complexity usually amounts to considering a new measure for the first algorithmic system.

Below a few fundamental results are given which are independent of the choice of measure of complexity (and of the formalization), as long as there exists, for example, an effective mutual simulation of the algorithms of the type under consideration and ordinary Turing machines. (For the sake of simplicity, it is assumed that the subject is the computation time of Turing machines computing functions from non-negative integers to non-negative integers.) Let $ T $ and $ h $ be computable integer-valued functions (cf. Computable function) on the domain of the algorithms, and let $ f $ be a function defined on this domain which can assume two values only — say, 0 and 1.

First, there exist functions whose computation may be as complex as one pleases. More exactly, for any function $ T $ there exists a computable function $ f $ such that for any algorithm $ \alpha $ which computes $ f $, the inequality

$$ R _ \alpha ( x ) > T ( x ) $$

holds for all but finitely many $ x $.
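
One standard way of obtaining such an $ f $ is by diagonalization over an effective enumeration $ \alpha _ {0} , \alpha _ {1} , \dots $ of all algorithms; the following formula is a sketch of one such construction (the enumeration and the bookkeeping of "cancelled" indices are assumptions of the sketch). At the argument $ x $ one puts

$$ f ( x ) = \begin{cases} 1 & \textrm{ if } \alpha _ {e} ( x ) = 0 , \textrm{ where } e = \min \{ i \leq x : i \textrm{ not yet cancelled and } R _ {\alpha _ {i} } ( x ) \leq T ( x ) \} , \\ 0 & \textrm{ otherwise } , \end{cases} $$

and the index $ e $, once used in this way, is cancelled and never used again (checking $ R _ {\alpha _ {i} } ( x ) \leq T ( x ) $ is decidable by means of the measure $ r $). If some algorithm with index $ e $ computed $ f $ with cost at most $ T(x) $ for infinitely many $ x $, then $ e $ would eventually be the least uncancelled index and would be cancelled at some such $ x $, forcing $ f ( x ) \neq \alpha _ {e} ( x ) $; hence every algorithm for $ f $ exceeds $ T(x) $ for all but finitely many $ x $.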

Secondly, there exist functions any computation of which can, in principle, be improved as much as one pleases. More precisely, the speed-up theorem holds: For any function $ h $ (e.g. $ h(t) = 2 ^ {t} $) it is possible to find a computable function $ f $ such that for any algorithm $ \alpha $ which computes $ f $ one can find (no effective procedure for doing so is assumed) an algorithm $ \beta $ which also computes $ f $ and such that for all $ x $ (except possibly for a finite set)

$$ h ( R _ \beta ( x ) ) < R _ \alpha ( x ) . $$

In case $ h(t) = 2 ^ {t} $ this yields

$$ R _ \beta ( x ) < \mathop{\rm log} _ {2} R _ \alpha ( x ) $$

for almost all $ x $.

For the computational measures of time and space (on Turing machines), a speed-up of this kind holds for most computations in the following form: if $ f $ is computable with complexity $ R(n) $, it is also computable with complexity $ R(n)/2 $ (one says that $ f $ is computable with complexity $ R(n) $ if, for some $ \alpha $ which computes it, $ R _ \alpha (P) \leq R(n) $ for all words $ P $ of length $ n $ in the alphabet under consideration). For this reason the time and space complexities of concrete functions are usually stated only up to order of magnitude; the idea behind this constant-factor speed-up is sketched below, and some specific results are quoted after the sketch.
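
The constant-factor speed-up is usually obtained by compressing the tape alphabet: one symbol of a larger alphabet encodes a block of old symbols, so that one work cycle of the new machine does the work of several cycles of the old one. The following minimal Python sketch, whose helper name and block size are assumptions, shows only this recoding step, not the full machine construction.

```python
# Illustrative tape compression: pack blocks of k symbols into single
# "super-symbols", so that scanning the recoded word takes ceil(n/k) head
# movements instead of n.

def compress(word, k=2, blank="_"):
    padded = word + blank * (-len(word) % k)           # pad to a multiple of k
    return [tuple(padded[i:i + k]) for i in range(0, len(padded), k)]

word = "1011010"
print(len(word), len(compress(word, 2)))               # -> 7 4  (7 cells vs. 4 super-cells)
```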

The time needed to decide the equality of words (of length $ n $) is of order $ n ^ {2} $ for Turing machines and normal algorithms; any property of words which can be decided on a Turing machine in time asymptotically less than $ n \mathop{\rm log} _ {2} n $ can also be decided in time $ n $; a Turing-machine computation of time $ T(n) $ (with $ T(n) $ of order at least $ n ^ {2} $) can be simulated by a Turing machine with space of order $ \sqrt {T(n) } $. Furthermore, the multiplication of two $ n $-digit numbers on multi-tape Turing machines can be performed in time $ n ^ {1 + \epsilon } $, but not in time of order $ n $; iterative arrays, however, can achieve this in real time, i.e. the $ i $-th digit of the product is produced before the $ (i+1) $-th digits of the factors are read in. When algorithmic processes of one type are modelled by processes of another type, the computational complexity usually changes. Thus, a reduction in the dimension of an iterative array results in longer computation time. Multi-head Turing machines and multi-tape Turing machines can simulate each other in real time. Many types of algorithms can be simulated on Turing machines in such a way that the computation time increases polynomially (usually quadratically) and the space complexity increases insignificantly (usually by a constant factor).
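
The multiplication bounds above refer to Turing-machine constructions. As a model-independent illustration of how $ n $-digit multiplication can be done with substantially fewer than $ n ^ {2} $ digit operations, here is a sketch of Karatsuba's divide-and-conquer method (roughly $ n ^ {1.585} $ digit operations); it is offered only as an example of a subquadratic scheme and is not the construction referred to above.

```python
# Karatsuba multiplication: split each factor in half and replace the four
# half-size products of the schoolbook method by three, giving roughly
# n**1.585 digit operations instead of n**2.

def karatsuba(x, y):
    if x < 10 or y < 10:                        # base case: single digits
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(12345678, 87654321) == 12345678 * 87654321)   # True
```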

The complexity approach is useful in studying certain hierarchies of classes of general recursive functions (cf. General recursive function), in connection with their logical and algebraic features. For instance, the primitive recursive functions coincide with the functions computable by Turing machines in space bounded by a primitive recursive function. The existence of arbitrarily complex functions is established by diagonalization methods. Natural analogues of such lower bounds on computational complexity have been found for the validity problem in certain decidable theories and in areas related to mathematical linguistics. However, for many problems of practical importance, obtaining non-trivial lower bounds presents considerable difficulties. This is the case, in particular, for certain combinatorial problems which are formalized (in one variant) as problems of finding, among all words of a given length, those which satisfy a predicate computable in polynomial time. One can also mention the complexity of computations performed by non-deterministic automata and probabilistic automata (cf. Automaton, probabilistic). In the non-deterministic case the computational device, when presented with a choice of how to continue the computation, may pursue both possibilities by "splitting" into two independent copies, each of which has the same capabilities as its parent; its computational complexity is understood to be the complexity of the "simplest" successful computation trace. In the probabilistic case, the course of the computation is determined not only by the program and the argument, but also by a random sequence of numbers, and the result is required to coincide, with high probability, with the value of the function being computed. In both cases the computational complexity can sometimes be proved to be smaller than that of deterministic computations for the same problem.
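
The combinatorial formulation just mentioned, searching all words of a given length for those satisfying a polynomial-time-computable predicate, can be pictured as follows; the subset-sum predicate used here is only an assumed example.

```python
# Searching all words of length n for those satisfying a polynomial-time
# predicate. The predicate (does the 0/1 word select a subset of `weights`
# summing to `target`?) is checkable in polynomial time, but the search
# itself runs through all 2**n candidate words.

from itertools import product

def predicate(word, weights, target):
    """Polynomial-time check of a single candidate word."""
    return sum(w for bit, w in zip(word, weights) if bit == 1) == target

def search(weights, target):
    n = len(weights)
    return [word for word in product((0, 1), repeat=n)      # 2**n words
            if predicate(word, weights, target)]

print(search([3, 5, 7, 11], 12))   # -> [(0, 1, 1, 0)]  (5 + 7 = 12)
```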

The quality of algorithms is evaluated not only by their computational complexity, but also by the complexity of description of an algorithm (cf. Algorithm, complexity of description of an). In some publications, both approaches are combined.

References

[1] J. Hartmanis, J.E. Hopcroft, "An overview of the theory of computational complexity" J. ACM , 18 (1971) pp. 444–475

Comments

The area has expanded considerably beyond the state described in the article above. Two key references, [a1] and [a5], are listed below.

The results on Turing machines referred to above, up to the result on multiplication, hold only for Turing machines in the original version (a single combined input/work tape, no extra work tapes). In the West, the standard Turing machine is nowadays the (off-line) model with multiple work tapes. Because of their simplicity, Turing machines are useful for obtaining lower bounds on the computational complexity of problems. Upper bounds, obtained by exhibiting actual algorithms, are usually based on the RAM (random access machine), which models the "von Neumann" computer more realistically. The two models can simulate each other with a constant multiplicative factor in space and a polynomial overhead in time. The latter property characterizes all "reasonable" models for sequential computation (the First Machine Class), a statement known as the Invariance Thesis. The determination of such overheads is the subject of the theory of machine models. For instance, multi-tape Turing machines with one-way input (no backing up on the input tape) can be simulated in square time by a Turing machine with one-way input and a single work tape in an obvious way, and it has been shown that less than square time does not suffice [a2], [a3]. One can simulate $ k $ work tapes by 2 work tapes in time of order $ n \mathop{\rm log} n $; it is not known whether this is optimal, but the lower bound is non-linear. In contrast, multicounter machines can be simulated in real time (step for step) by a Turing machine with a single work tape [a4]. (A counter is a storage device which can hold an arbitrary integer and supports unit increment, unit decrement and test for zero.)

The Invariance Thesis justifies identifying the class of problems which can be solved deterministically in polynomial time (denoted by $ \textrm{ P } $) with the class of "tractable" problems. Adding the feature of non-determinism gives the class $ \textrm{ NP } $ (non-deterministic polynomial time), which is known to contain a large variety of problems one would like to solve in daily life. The best known deterministic algorithms for solving $ \textrm{ NP } $-problems require exponential time, and it is unknown whether $ \textrm{ P } = \textrm{ NP } $, i.e. whether these exponential-time algorithms can be replaced by deterministic polynomial-time ones. This question is generally considered the most important open problem in the theory of computation. It is known that the complexity of $ \textrm{ NP } $ is exemplified by specific problems in $ \textrm{ NP } $: the $ \textrm{ NP } $-complete problems have the property that $ \textrm{ P } = \textrm{ NP } $ if and only if such a problem can be solved in deterministic polynomial time (i.e. lies in $ \textrm{ P } $).

Similar questions arise with respect to the relations between different space complexities, and between time and space. The fundamental complexity classes related to space are PSPACE and NPSPACE, which consist of the problems solvable in polynomial space by deterministic, respectively non-deterministic, machines from the First Machine Class. It is known (Savitch's theorem) that $ \textrm{ PSPACE } = \textrm{ NPSPACE } $. Clearly $ \textrm{ PSPACE } $ contains $ \textrm{ NP } $, but it is unknown whether the inclusion is strict.
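
The idea behind Savitch's theorem can be sketched as a recursive reachability test on the configuration graph of a machine: whether $ u $ can reach $ v $ in at most $ 2 ^ {k} $ steps is decided by trying every midpoint $ w $ and solving two half-length problems in the same work space. In the following minimal sketch the explicit graph and all names are assumptions standing in for such a configuration graph.

```python
# Savitch-style reachability: reach(u, v, k) decides whether u can reach v
# in at most 2**k steps by trying every midpoint w and recursing on both
# halves. The recursion depth is k, so only O(k * log |V|) work space is
# needed; this is the idea behind PSPACE = NPSPACE.

import math

def reach(graph, u, v, k):
    if u == v:
        return True
    if k == 0:
        return v in graph.get(u, ())                  # single step
    return any(reach(graph, u, w, k - 1) and reach(graph, w, v, k - 1)
               for w in graph)                        # try every midpoint

# Hypothetical configuration graph with four configurations.
graph = {"a": ("b",), "b": ("c",), "c": ("d",), "d": ()}
k = math.ceil(math.log2(len(graph)))                  # 2**k >= number of nodes
print(reach(graph, "a", "d", k))                      # True:  a -> b -> c -> d
print(reach(graph, "d", "a", k))                      # False: no path back
```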

There exists a large variety of models for parallel computation. Many of these models (the Second Machine Class) have the property that time on the parallel model is equivalent, up to a polynomial overhead, to space on a sequential model (the Parallel Computation Thesis). Each problem in $ \textrm{ NP } $ can be solved in deterministic polynomial time on a machine of the Second Machine Class. Other models appear to be even more powerful, albeit totally unrealistic. The power of the parallel models is rooted in the possibility of activating at least an exponential amount of hardware within a polynomial number of computation cycles (called "time"). As soon as one imposes realistic bounds (e.g., because of the bounded speed of light) on the amount of hardware that can be activated by a computation in a given time span, the Parallel Computation Thesis becomes invalid. For many specific problems it has been shown that they can be solved on a parallel device with a polynomial amount of hardware in time of order $ \mathop{\rm log} ^ {k} n $ (the complexity class $ \textrm{ NC } $).
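
As a small illustration of polylogarithmic parallel time with a polynomial amount of hardware, the following sketch computes all prefix sums of $ n $ numbers in $ \lceil \mathop{\rm log} _ {2} n \rceil $ synchronous rounds; with one processor per position each round would take constant parallel time, and the sequential Python list comprehension merely simulates those processors.

```python
# Parallel prefix sums ("pointer doubling"): after round j, position i holds
# the sum of the 2**j entries ending at i. With one processor per position
# this takes ceil(log2(n)) rounds, the kind of log^k(n) parallel time bound
# behind the class NC.

import math

def parallel_prefix_sums(values):
    x = list(values)
    n = len(x)
    for j in range(math.ceil(math.log2(n))):   # O(log n) synchronous rounds
        shift = 2 ** j
        # one round: every position updated "simultaneously"
        x = [x[i] + (x[i - shift] if i >= shift else 0) for i in range(n)]
    return x

print(parallel_prefix_sums([1, 2, 3, 4, 5]))   # -> [1, 3, 6, 10, 15]
```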

References

[a1] M.R. Garey, D.S. Johnson, "Computers and intractability: a guide to the theory of NP-completeness" , Freeman (1979)
[a2] W. Maass, "Combinatorial lower bound arguments for deterministic and nondeterministic Turing machines" Trans. Amer. Math. Soc. , 292 (1985) pp. 675–693
[a3] P.M.B. Vitányi, M. Li, "Tape versus stack and queue: the lower bounds" Information and Computation (To appear)
[a4] P.M.B. Vitányi, "An optimal simulation of counter machines" SIAM J. on Computing , 14 (1985) pp. 1–33
[a5] K. Wagner, G. Wechsung, "Complexity theory" , Reidel (1986)
This article was adapted from an original article by N.V. Petri (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.