Computer, abstract
abstract machine
A mathematical concept modelling a computer while abstracting from the bounded capacities of storage registers and other technological parameters of electronic computers. Unlike real computers, abstract computers can compute functions defined on an infinite domain of constructive objects (e.g. integers, words over a finite alphabet, finite graphs, infinite trees, etc., cf. Constructive object). Abstract machines serve as a concept which makes the intuitive idea of an algorithm more rigorous. This concept is used in the study of problems on the existence of an algorithm (i.e. the solvability of algorithmic problems, cf. Algorithmic problem) and in investigations of the quality of an algorithm (i.e. estimates of the complexity of the algorithm). It is also used in the formalization of the semantics of algorithmic languages (cf. Algorithmic language) and in the modelling of certain classes of abstract computers by other classes (e.g. in order to simulate parallel algorithms by sequential ones).
Abstract computers may be classified by the complexity of their structure and also by the type of information processed by the algorithms. The main classes of abstract computers are listed below.
1) Abstract computers which process words over a finite alphabet. Their structure is kept as simple as possible, in order to meet the special needs of the theory. Typical representatives are the Turing machine, the Minsky machine, the normal algorithms of Markov (cf. Normal algorithm), and the Post canonical system. Such abstract computers are often used to demonstrate the unsolvability of algorithmic problems and in estimates of the complexity of algorithms (cf. Algorithm, computational complexity of an); a minimal Turing machine sketch is given after this list.
2) Abstract computers with the same complexity of structure as those in class 1), but which process matrices, finite graphs and arbitrary complexes. Typical representatives include the algorithms of Kolmogorov, von Neumann's cellular automata and growing automata. While in the algorithms of Kolmogorov information is processed locally, in von Neumann's cellular automata and in growing automata it is processed in parallel (a toy cellular-automaton sketch also follows the list). Universal abstract computers have been built; they have satisfactory characteristics and are able to model arbitrary abstract computers of the same type. Maximum-speed (real-time) computations have been studied on certain classes of abstract computers.
3) Abstract computers which usually process words or integers and have an elaborate structure determined by practical requirements, having much in common with that of real computers. Typical representatives are the so-called operator algorithms [3] and machines with random access to the memory. Such abstract computers are used to estimate the complexity of algorithms employed in practice, and to model other abstract computers.
4) Abstract computers with the same complexity of structure as those in class 3), but which process arbitrary graphs (most often infinite trees). They arose as a development of the abstract computers of class 3), adapted to the special features of algorithmic languages. Such abstract computers are used in the formalization of the semantics of algorithmic languages, and to demonstrate the correctness of programs. Typical representatives include abstract computers employed in the specification of the semantics of Algol-68 and PL/I.
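As a concrete illustration of the word-processing machines of class 1), the following is a minimal sketch, assuming a standard single-tape Turing machine with a transition table given as a Python dictionary. The function name, the alphabet and the example machine (a unary successor machine) are illustrative assumptions, not taken from the article or from any of the cited formalisms.

```python
# Minimal single-tape Turing machine sketch (illustrative only).
# The transition table maps (state, symbol) -> (new state, symbol to write, head move).

def run_turing_machine(transitions, tape, state="q0", blank="_", max_steps=10_000):
    """Run the machine until it reaches the state 'halt' or max_steps is exceeded."""
    tape = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return state, "".join(cells).strip(blank)

# Example: append one '1' to a unary numeral (successor on unary integers).
successor = {
    ("q0", "1"): ("q0", "1", "R"),    # scan right over the input
    ("q0", "_"): ("halt", "1", "R"),  # write one extra '1' at the end and halt
}

print(run_turing_machine(successor, "111"))   # ('halt', '1111')
```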
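The parallel local processing mentioned for class 2) can be sketched with a one-dimensional cellular automaton: every cell is updated simultaneously, and each new value depends only on the cell and its two neighbours. The particular update rule (elementary rule 110 in Wolfram's numbering) and the zero-boundary convention are chosen purely for illustration.

```python
# One-dimensional cellular automaton sketch: a synchronous, local, parallel update.

def ca_step(cells, rule=110):
    """Apply one synchronous update of an elementary cellular automaton.

    `rule` is the usual Wolfram rule number; cells outside the array are taken as 0.
    """
    padded = [0] + list(cells) + [0]
    new = []
    for i in range(1, len(padded) - 1):
        neighbourhood = padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1]
        new.append((rule >> neighbourhood) & 1)
    return new

# A single '1' in the middle, evolved for a few generations.
row = [0] * 7 + [1] + [0] * 7
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = ca_step(row)
```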
References
[1] Ya.M. Barzdin', "Universality theorems in growing automata theory" Dokl. Akad. Nauk SSSR , 157 : 3 (1964) pp. 542–545 (In Russian)
[2] A. van Wijngaarden, et al., "Revised report on the algorithmic language ALGOL 68" Acta Inform. , 5 : 1–3 (1975)
[3] A.P. Ershov, "Operator algorithms I. Basic notions" Problemy Kibernet. , 3 (1960) pp. 5–48 (In Russian)
[4] A.N. Kolmogorov, V.A. Uspenskii, "On the definition of an algorithm" Uspekhi Mat. Nauk , 13 : 4 (1958) pp. 3–28 (In Russian)
[5] M. Minsky, "Computation: finite and infinite machines" , Prentice-Hall (1967)
[6] B.A. Trachtenbrot, "Algorithmes et machines à calculer" , Dunod (1963) (Translated from Russian)
[7] J. Hartmanis, "Computational complexity of random access stored program machines" Math. Systems Theory , 5 (1971) pp. 232–245
[8] P. Wegner, "The Vienna definition language" Comput. Surveys , 4 : 1 (1972) pp. 5–63
Comments
The notion of an abstract computer as elaborated above includes not only the machine-based models which are in use nowadays but also more calculus-oriented formalisms. For example, the "agent" performing the reduction of terms in the $ \lambda $-calculus is an abstract computer in the sense of the above notion.
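As a hedged sketch of such a reduction "agent", the fragment below performs one leftmost-outermost $ \beta $-reduction step on $ \lambda $-terms encoded as nested Python tuples. The term encoding and helper names are illustrative assumptions, and capture-avoiding renaming is deliberately omitted to keep the sketch short.

```python
# λ-terms as nested tuples (illustrative encoding):
#   variable:    ("var", name)
#   abstraction: ("lam", name, body)
#   application: ("app", fun, arg)

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`.
    (Capture-avoiding renaming is omitted for brevity; the example below needs none.)"""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        _, bound, body = term
        return term if bound == name else ("lam", bound, substitute(body, name, value))
    _, fun, arg = term
    return ("app", substitute(fun, name, value), substitute(arg, name, value))

def beta_step(term):
    """One leftmost-outermost β-reduction step; return None if the term is in normal form."""
    kind = term[0]
    if kind == "app":
        fun, arg = term[1], term[2]
        if fun[0] == "lam":                       # a redex: (λx. body) arg
            return substitute(fun[2], fun[1], arg)
        reduced = beta_step(fun)
        if reduced is not None:
            return ("app", reduced, arg)
        reduced = beta_step(arg)
        return None if reduced is None else ("app", fun, reduced)
    if kind == "lam":
        reduced = beta_step(term[2])
        return None if reduced is None else ("lam", term[1], reduced)
    return None

# (λx. x x) y  reduces in one step to  y y
term = ("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))), ("var", "y"))
print(beta_step(term))   # ('app', ('var', 'y'), ('var', 'y'))
```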
There are some important features of classification which are considered today and which are not mentioned above.
Uniformity versus non-uniformity: in the study of formula complexity or network complexity one considers, for a given function, its restriction to inputs of length $ k $ for a given integer $ k $, and then investigates the size or depth of a formula or network which computes this restriction. It is conceivable that there exists no effective procedure which produces, for an arbitrary length $ k $, a description of a formula or network of minimal complexity for this restriction; these minimal devices therefore represent a non-uniform description of the original function. A given Turing machine algorithm, on the other hand, presents a uniform description of the entire function.
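A small worked example may help, taking the parity function as an assumed running example: a separate formula built for each input length $ k $ is a non-uniform description, while a single program that works for every length is a uniform one. The construction below only illustrates the distinction; it says nothing about minimality.

```python
# Non-uniform description: for each fixed input length k, a separate formula
# (a balanced tree of XOR gates) computing parity on exactly k bits x0..x{k-1}.
def parity_formula(k):
    if k == 1:
        return ("x", 0)
    half = k // 2
    left = parity_formula(half)
    def shift(f, d):   # shift the input indices of a subformula by d
        return ("x", f[1] + d) if f[0] == "x" else ("xor", shift(f[1], d), shift(f[2], d))
    right = shift(parity_formula(k - half), half)
    return ("xor", left, right)

def eval_formula(f, bits):
    if f[0] == "x":
        return bits[f[1]]
    return eval_formula(f[1], bits) ^ eval_formula(f[2], bits)

# Uniform description: one algorithm valid for every input length.
def parity(bits):
    result = 0
    for b in bits:
        result ^= b
    return result

bits = [1, 0, 1, 1, 0]
assert eval_formula(parity_formula(len(bits)), bits) == parity(bits) == 1
```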
Parallel versus non-parallel computation: in an abstract sequential computer there is a single processor which operates, during a single transition, on a finite number of parts of a configuration which are considered to be atomic (but, as observed by the author of the above section, such a part can still represent a truly atomic object like a character, or a structured object like a number, a word or a graph). In a parallel computer there is a potentially unbounded collection of processors which operate simultaneously on possibly disjoint data. Within the realm of parallel models one makes the further distinction between synchronous and asynchronous parallelism, depending on whether the processors operate in lock-step or each operates on its own private time scale. Another distinction concerns the instructions performed by the different processors. If all processors during a single operation cycle perform the same operation on different data, the machine is called a Single Instruction Multiple Data (SIMD) machine. If the different processors can perform arbitrary instructions on different data, one has a Multiple Instruction Multiple Data (MIMD) machine. Parallel computers can operate on a common memory or, alternatively, each processor has a private memory and different processors exchange information by means of messages which are sent and received over channels connecting specific pairs of processors.
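The SIMD/MIMD distinction can be sketched in a few lines, assuming a toy simulation in which the "processors" are simply list positions updated in one synchronous cycle; real parallel hardware, shared memory and message passing are of course not captured by this.

```python
# Toy illustration of the SIMD / MIMD distinction (no real parallel hardware involved).

data = [3, 1, 4, 1, 5, 9, 2, 6]

# SIMD: in one cycle every processor applies the *same* instruction to its own datum.
simd_result = [x * x for x in data]           # all processors execute "square"

# MIMD: in one cycle each processor may execute a *different* instruction.
programs = [lambda x: x + 1, lambda x: x * 2, lambda x: -x, lambda x: x * x] * 2
mimd_result = [prog(x) for prog, x in zip(programs, data)]

print(simd_result)   # [9, 1, 16, 1, 25, 81, 4, 36]
print(mimd_result)   # [4, 2, -4, 1, 6, 18, -2, 36]
```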