
Logic programming



Logic programming came into existence in the early 1970s. Its ideal is to write programs in the language of logic, thereby aiming at an immediately obvious intuitive meaning, whereas execution can be left to an interpreter.

This ideal has been realized to a great extent in the case of positive logic programming, which forms the common core of systems of logic programming. The logical language employed here is that of Horn-clause (first-order) logic. Programs can be executed by linear resolution, the special case of the resolution procedure that is capable not only of refuting but of computing at the same time.

Herbrand universa.

Suppose that $F$ is a finite set of function symbols. Every symbol in $F$ has an arity: a non-negative integer. Assume that $F$ contains at least one $0$-ary symbol (a constant symbol). The set $F$ determines a corresponding Herbrand universe $\mathrm{HU}_F$ of closed $F$-terms, which is the least set of formal expressions (built using the symbols from $F$, brackets, and the comma) such that if $t_1, \dots, t_n \in \mathrm{HU}_F$ and $f \in F$ is $n$-ary, then $f(t_1, \dots, t_n) \in \mathrm{HU}_F$. $\mathrm{HU}_F$ constitutes the free $F$-algebra in which a function symbol $f$ is interpreted as the function that assigns the value $f(t_1, \dots, t_n)$ to arguments $t_1, \dots, t_n$.

Most data types can be viewed as Herbrand universa. For instance, the non-negative integers can be identified with the terms $0, s(0), s(s(0)), \dots$, generated from one constant symbol $0$ with the help of one unary function symbol $s$ (the successor symbol). Similarly, one can view lists, trees, etc. as generated by means of suitable function symbols.
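For instance (anticipating the rule notation introduced below), here is a minimal Prolog-style sketch; the predicate name nat is chosen purely for illustration:

    % Non-negative integers as closed terms over 0 and s.
    nat(0).               % 0 is a number
    nat(s(X)) :- nat(X).  % if X is a number, then so is s(X)

A query such as ?- nat(s(s(0))). succeeds, confirming that the term s(s(0)) (representing the number 2) belongs to this universe.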

Horn-clause logic.

The set of $F$-terms is generated from $F$ in the same way as is $\mathrm{HU}_F$, but this time allowing variable symbols as terms as well.

Let $R$ be a finite set of relation symbols (with arities). An $R$-atom is an expression of the form $r(t_1, \dots, t_n)$, where $r \in R$ is an $n$-ary relation symbol and $t_1, \dots, t_n$ are terms.

A positive literal is the same as an atom $A$. A negative literal is the negation $\neg A$ of an atom $A$.

A clause is a finite sequence of (positive and/or negative) literals $L_1, \dots, L_n$. Logically, a clause is identified with the disjunctive formula $L_1 \lor \dots \lor L_n$, where variables are read as if they were quantified universally.

A rule or Horn clause is a clause in which exactly one literal is positive. Finally, a (logic) program is a finite set of rules.

Note that a Horn clause like $A, \neg B_1, \dots, \neg B_k$ is identified with the formula $A \lor \neg B_1 \lor \dots \lor \neg B_k$, which in turn is logically equivalent with the implication $B_1 \land \dots \land B_k \rightarrow A$. It is logic programming custom to write this as

$$A \leftarrow B_1, \dots, B_k,$$

suggesting a ("procedural") interpretation of rules as something to be executed ("in order to execute $A$, execute $B_1, \dots, B_k$").
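In Prolog syntax the arrow is written :-. As a hypothetical illustration of this procedural reading (the predicate names are invented for the example):

    % "In order to execute grandparent(X, Z),
    %  execute parent(X, Y) and then parent(Y, Z)."
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).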

Meaning of programs.

Logic programming started with the observation that many operations over a Herbrand universe can be described using a program. For instance, consider the simple case of addition over the non-negative integers. This operation is determined by the following two recursion equations:

$$x + 0 = x, \qquad x + s(y) = s(x + y),$$

which enable a unique evaluation of every sum of two numbers expressed in terms of $0$ and $s$. (E.g., $s(0) + s(s(0)) = s(s(0) + s(0)) = s(s(s(0) + 0)) = s(s(s(0)))$.) The recursion equations transform into the following program involving a $3$-ary relation symbol $\mathrm{sum}$:

$$\mathrm{sum}(x, 0, x) \leftarrow$$
$$\mathrm{sum}(x, s(y), s(z)) \leftarrow \mathrm{sum}(x, y, z)$$
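In concrete Prolog syntax the same program reads as follows (a direct transcription; Prolog requires variables to be capitalized):

    % sum(X, Y, Z) is intended to mean: X + Y = Z.
    sum(X, 0, X).                        % x + 0 = x
    sum(X, s(Y), s(Z)) :- sum(X, Y, Z).  % x + s(y) = s(x + y)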

In what way do these rules express addition? First, they are clearly satisfied by the Herbrand model that consists of the Herbrand universe $\mathrm{HU}_{\{0,s\}}$ (the free algebra over $0$ and $s$) supplied with the graph of the addition operation as a meaning for the symbol $\mathrm{sum}$. However, there are many more relations that similarly make $\mathrm{HU}_{\{0,s\}}$ into a Herbrand model of this program. Nevertheless, addition here stands out: it happens to make $\mathrm{HU}_{\{0,s\}}$ into the least Herbrand model (having fewest true atoms) of this program.

The following computational completeness theorem holds: For every computable operation over a Herbrand universe one can construct a program such that in its least Herbrand model the graph of the operation occurs as the meaning of one of the relation symbols.

Execution of programs.

The above explains that least Herbrand models can be viewed as meanings of programs and that every computable operation surfaces in such a meaning. However, programs are meant to be executed. This is where the notion of linear resolution comes in.

A substitution $\sigma$ is a finite function that maps variables to terms. If $E$ is an expression and $\sigma$ a substitution, then $E\sigma$ is the expression obtained from $E$ by replacing every variable from the domain of $\sigma$ occurring in $E$ by its $\sigma$-image.

The substitution $\sigma$ unifies the expressions $E$ and $E'$ if $E\sigma = E'\sigma$. It is a most general unifier (mgu) of $E$ and $E'$ if it unifies them and for every other unifier $\tau$ there exists a substitution $\rho$ such that $\tau = \sigma\rho$.

The following unification theorem holds: If two expressions have a unifier, then they have an mgu as well. Both unifiability and an mgu can be determined algorithmically.
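In Prolog, unification is directly available through the built-in =/2 (which, as discussed below, omits the occur check). A small illustration:

    ?- f(X, g(Y)) = f(a, Z).
    % succeeds with the mgu  X = a, Z = g(Y);
    % Y stays a variable: the answer is as general as possible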

A query is a finite sequence of atoms. In particular, the empty sequence is a query. The intuitive meaning of a query is the conjunction of its atoms (variables conceived of as universally quantified). Therefore, the empty query stands for the logically true formula.

A query represents a condition to be solved. Linear resolution aims at finding such solutions.

The notation $Q \Rightarrow_{\sigma} Q'$ (query $Q$ resolves to $Q'$ via mgu $\sigma$) is used to describe the circumstance that for some atom $A$ of $Q$ and some rule $B \leftarrow B_1, \dots, B_k$ of the program under consideration:

a) $\sigma$ is an mgu of $A$ and $B$;

b) $Q'$ is obtained from $Q\sigma$ by replacing the atom $A\sigma$ by the sequence $B_1\sigma, \dots, B_k\sigma$.

A derivation of the query $Q$ relative to a program is a finite sequence

$$Q = Q_0 \Rightarrow_{\sigma_1} Q_1 \Rightarrow_{\sigma_2} \cdots \Rightarrow_{\sigma_n} Q_n,$$

where every step employs a variant of a rule of the program in which all variables are fresh (i.e., not occurring in earlier steps).

Such a derivation is successful if its last query is empty. As an example, here is a successful derivation of the atomic query $\mathrm{sum}(s(0), s(s(0)), z)$ (which "asks" for the sum of $1$ and $2$) relative to the program of rules given earlier. Numbers are identified with suitable terms, and $z/t$ indicates substituting the term $t$ for the variable $z$.

$$\mathrm{sum}(s(0), s(s(0)), z) \Rightarrow_{z/s(z_1)} \mathrm{sum}(s(0), s(0), z_1) \Rightarrow_{z_1/s(z_2)} \mathrm{sum}(s(0), 0, z_2) \Rightarrow_{z_2/s(0)} \square$$

The first two steps of this derivation are executed using the second rule, whereas the last step employs the first one; $\square$ denotes the empty query. Only the relevant portions of the mgu's are indicated.

Composing the three mgu's in this example results in the computed answer substitution of the derivation that sets $z$ to $s(s(s(0)))$, i.e., to $3$. The derivation is nothing but a transform of the earlier computation showing that $1$ and $2$ add up to $3$.
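In the Prolog transcription given earlier, this derivation corresponds to the query:

    ?- sum(s(0), s(s(0)), Z).
    % computed answer: Z = s(s(s(0))), i.e., 1 + 2 = 3

The construction of such derivations can itself be expressed as a short program. The following "vanilla" meta-interpreter is a standard sketch (not part of the formal development above): solve(Q) searches for a successful derivation of Q, always resolving the left-most atom; the built-in clause/2 supplies a fresh variant of a program rule, matching the freshness condition on derivations. (On some systems the program's predicates must be declared dynamic for clause/2 to inspect them.)

    solve(true).          % the empty query is successful
    solve((A, B)) :-      % compound query: resolve the
        solve(A),         % left-most atom first,
        solve(B).         % then the remainder
    solve(A) :-           % atomic query: pick a fresh variant
        clause(A, Body),  % of a program rule  A :- Body
        solve(Body).      % and resolve the resulting query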

Correspondence.

Meanings of programs and executions (computations) correspond in the following way (cf. the example derivation), thereby witnessing the successful accomplishment of the logic programming ideal in this context.

The soundness theorem: If $\sigma$ is the computed answer substitution of a successful derivation of the query $Q$ via a program, then $Q\sigma$ follows logically from that program.

The converse is almost true.

The completeness theorem: If $Q\tau$ follows logically from a program, then there exists a computed answer substitution $\sigma$ of a successful derivation of $Q$ via that program such that for some substitution $\rho$, $Q\tau = Q\sigma\rho$.

(The reason that $\tau$ itself may not be computed is that resolution produces answers that are "as general as possible".)
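For instance, with the Prolog transcription of the addition program:

    ?- sum(X, 0, Y).
    % computed answer: Y = X  (the most general solution);
    % specific logical consequences such as sum(0, 0, 0)
    % are instances of this general answer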

Prolog.

Linear resolution is the core of the Prolog system, which was developed more or less independently from, but at the same time as, logic programming theory. Thus, logic programming can be viewed as the theoretical basis underlying Prolog. However, many divergences exist between logic programming theory and Prolog practice.

Occur check.

A relatively small difference is the fact that the Prolog implementation of the mgu algorithm is incorrect: for reasons of efficiency it omits the so-called occur check. In most practical situations, though, this incorrectness does not surface.
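The difference is easy to observe. In most implementations, plain unification "solves" the equation X = f(X) by constructing a cyclic term, whereas the ISO built-in unify_with_occurs_check/2 correctly reports failure:

    ?- X = f(X).
    % succeeds in most Prolog systems, binding X to a cyclic term

    ?- unify_with_occurs_check(X, f(X)).
    % fails: no finite term X satisfies X = f(X)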

Search strategy.

At every stage in the construction of a derivation, two choices have to be made:

i) which of the atoms in the given query should be resolved;

ii) which of the rules of the program should be employed to accomplish this. (The example derivation does not illustrate these choices.)

It can be shown that the choice in i) has no influence on the final outcome: if a successful derivation exists, one with the same computed answer substitution will be found no matter how i) is dealt with. (Usual Prolog implementations always resolve the left-most atom.) However, the choice in ii) is of the utmost importance. Given a selection rule that handles i), all derivations for a query via a given program form a tree. The tree splits if more than one rule is applicable to the selected atom. A branch through the tree can be successful (ending at the empty query), unsuccessful (ending at a non-empty query whose selected atom is not resolvable using any rule), or infinite.

Prolog's search of this tree is "depth-first" (cf. also Depth-first search), employing at every step the first rule in the listing of the program that is applicable to the selected (i.e., left-most) atom. At the end of an unsuccessful branch it backtracks to the last choice and tries the next rule. Consequently, Prolog may follow an infinite branch all the way and, so doing, can miss a successful branch. This raises the important question of which programs terminate on which queries.
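A deliberately bad rule ordering illustrates the danger (a hypothetical variant of the numbers sketch from above):

    nat(s(X)) :- nat(X).  % recursive rule listed first
    nat(0).               % base case: now unreachable

    % ?- nat(N).
    % Depth-first search keeps applying the first rule and
    % follows an infinite branch, although successful
    % branches (N = 0, N = s(0), ...) exist in the tree.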

Negation.

Although the logic programming model as described above is computationally complete, in many cases the need arises for more expressive power. In particular, it is sometimes desirable to allow rules and queries where one or more of the literals carry a negation symbol. Prolog does offer this possibility.

A main problem is that this move destroys the logic programming ideal: the intuitive meaning of such rules is no longer evident, although several proposals (some employing many-valued logic) have tried to remedy this deficiency. The second problem, that of execution, is dealt with by Prolog in a practical way: a negated atom is resolved successfully only if the tree of derivations for the atom has no infinite branch and no successful derivation.
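In Prolog this mechanism ("negation as failure") is written \+. A small hypothetical example (the predicate names are invented for illustration):

    likes_spam(alice).                   % a fact
    avoids_spam(X) :- \+ likes_spam(X).  % succeeds if the attempt to prove
                                         % likes_spam(X) fails finitely

    % ?- avoids_spam(bob).    succeeds: likes_spam(bob) is not provable
    % ?- avoids_spam(alice).  fails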

Although this way of executing negations has many useful applications, its general meaning remains problematic.

Prolog has many extra features (such as the cut, of which negation is but one application), most of which are equally cooperative in destroying the logic programming ideal described above. The problem of closing the gap between logic programming theory and Prolog practice is a persistent one.

Where to look.

Logic programming has its own periodical: the Journal of Logic Programming. A few introductory texts are [a1], [a5] and [a3]. The problem of negation in logic programming is reviewed in [a2]. One of the many texts on Prolog is [a7]. The history of logic programming theory can be traced to [a4]. The general notion of resolution, in particular that of most general unification, is due to [a6].

References

[a1] K.R. Apt, "Logic programming" J. van Leeuwen (ed.) , Handbook of Theoretical Computer Science , Elsevier (1990) pp. 493–574
[a2] K.R. Apt, R. Bol, "Logic programming and negation: a survey" J. Logic Programming , 19–20 (1994) pp. 9–71
[a3] K. Doets, "From logic to logic programming" , MIT (1994)
[a4] R.A. Kowalski, "Predicate logic as a programming language" , Proceedings IFIP'74 , North-Holland (1974) pp. 569–574
[a5] J.W. Lloyd, "Foundations of logic programming" , Springer (1987) (Edition: Second, extended)
[a6] J.A. Robinson, "A machine-oriented logic based on the resolution principle" J. ACM , 12 (1965) pp. 23–41
[a7] L. Sterling, E. Shapiro, "The art of Prolog" , MIT (1994) (Edition: Second)