**Xavier:** What is logic programming?

**Yorick:** Robert Kowalski defines it as a certain approach to knowledge representation, namely:

> Logic programming shares with mechanical theorem proving the use of logic to represent knowledge and the use of deduction to solve problems by deriving logical consequences. [1]

**X:** Ah, I see—a kind of Artificial Intelligence, which I am not so much interested in. Yet, I find a lot of interesting stuff in *Logic Programming*, the journal, in its first decade, and in several books of that time, such as Sterling and Shapiro’s *The Art of Prolog* and O’Keefe’s *The Craft of Prolog*. Why listen to Kowalski anyway?

**Y:** He invented “Logic Programming” as a term and substantiated his definition with his book [2]. Moreover, his “Predicate logic as a programming language” [3] established him as a co-inventor, with Colmerauer, of pure Prolog.

**X:** It seems that the term “Logic Programming” has been hijacked by people interested in what makes Prolog different from other programming languages and in how logic can be used as starting point in the design of programming languages.

**Y:** One of the ways in which Prolog is different from other programming languages is that it avoids the correctness problem.

**X:** Oh? How so?

**Y:** A program in pure Prolog is a sentence of logic. The results of running it are logical implications of that sentence. So, if you write the program as a specification, then the correctness problem is avoided because you are running the specification.

**X:** Not even Kowalski believes that. In “Algorithm = Logic + Control” [4] he uses as example the sorting of a list. He begins with the definition saying that the result of sorting is an ordered permutation of the input list. He refers to his 1974 paper [3], the one that launched (what later came to be called) logic programming. In [4] he points out that, though different controls give different algorithms, none is efficient enough to be acceptable. Then he presents a definition that reads just like, say, the C program for quicksort in Kernighan and Ritchie [13], and calls it “the *logic* of quicksort” (I have taken the liberty to add the emphasis).

**Y:** The fact remains that the pure specification of sortedness is executable.

**X:** In case you mean the sorting program in [3], here it is, with the minimal changes to make it run under SWI Prolog anno 2017:

```prolog
pSort(X,Y) :- perm(X,Y), ord(Y).

perm([],[]).
perm(Z,[X|Y]) :- perm(Z1,Y), del(X,Z,Z1).

del(X,[X|Y],Y).
del(X,[Y|Z],[Y|Z1]) :- del(X,Z,Z1).

ord([]).
ord([X]).
ord([X,Y|Z]) :- le(X,Y), ord([Y|Z]).

le(1,2). le(2,3). le(1,3). le(X,X).
```

Sure, the definition of pSort is acceptable as a specification, but look at the definition of permutation! It is an *algorithm* for checking permutedness: keep removing elements from the one list and delete each from the other list. If all these deletions succeed and if the other list ends up empty, then the two lists you started out with are permutations of each other.

**Y:** What’s wrong with that?

**X:** A specification of sortedness would have to refer to a *specification* of permutedness and I see an algorithm instead. To be algorithmically neutral you would have to take the mathematical definition, namely “a permutation of a finite set is an invertible mapping onto itself”. Don’t ask me how you say that in logic.

**Y:** OK, supposing that the example of sorting shoots down the general notion of logic programs as runnable specifications. Is there any way in which logic programming can alleviate the correctness problem?

**X:** Funny I should be asked, but actually, there is. Not by logic programming primarily, but by logic generally, yes.

**Y:** Oh? How so?

**X:** For that we need to go back in the programming literature, to before Colmerauer’s collaboration with Kowalski, namely to Peter Naur’s “snapshots” [5] and Robert Floyd’s “verification conditions” [6]. Floyd considered properties of program fragments expressed as {P}S{Q} (in current notation), where S is the program fragment. The fragment can comprise almost all of a procedure body’s code or as little as a single assignment statement. P is the precondition and Q is the postcondition. These are conditions (also called “assertions”) in the form of a logic formula expressing a relation between values of variables occurring in S.

The meaning of {P}S{Q} is: if P is true at the start of an execution of S and if this execution terminates, then Q holds upon termination. The notation “{P}S{Q}” is the currently used variant of one introduced by C.A.R. Hoare [7]. We call such an expression a “(Hoare) triple”.

Floyd showed how to combine statements in logic about program fragments into a statement about the larger program fragment that results from combining the fragments.
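To make the combination rule concrete, here is a toy instance of my own (not one of Floyd’s examples), with the assertions checked at run time via `assert`:

```cpp
#include <cassert>

// A toy instance of combining triples (my own example, not Floyd's):
//   {x >= 0}  y = x + 1;  {y > 0}
//   {y > 0}   z = 2 * y;  {z > 1}
// The postcondition of the first triple matches the precondition of
// the second, so the two fragments combine into
//   {x >= 0}  y = x + 1; z = 2 * y;  {z > 1}
// Each assertion is checked at run time with assert().
int combined(int x) {
  assert(x >= 0);  // precondition of the combined fragment
  int y = x + 1;
  assert(y > 0);   // intermediate assertion
  int z = 2 * y;
  assert(z > 1);   // postcondition of the combined fragment
  return z;
}
```

Runtime checking is of course weaker than a proof: it only confirms the assertions for the inputs actually tried.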

**Y:** I trust it works for the examples given by Naur and Floyd, but do you know any other ones?

**X:** I know lots of other examples. My current favourite is the clever algorithm (due to Josef Stein [10]) for computing in logarithmic time the gcd (greatest common divisor) of two numbers. Here it is in C:

```c
// Program 1
void s(int x, int y, int& z)
{ // precondition s: x == x0 && y == y0 && x0 > 0 && y0 > 0
  // purpose: return z == gcd(x0, y0)
  int fac = 1;
inv:
  if (x%2 == 0) goto a; else goto b;
a:
  if (y%2 == 0) { x /= 2; y /= 2; fac *= 2; goto inv;}
  else { x /= 2; goto d; }
b:
  if (y%2 == 0) { y /= 2; goto b; }
  else goto c;
c:
  if (x == y) { z = x*fac; return; }
  if (x < y) { y = (y-x)/2; goto b; }
  else { x = (x-y)/2; goto d; }
d:
  if (x%2 == 0) { x /= 2; goto d; }
  else goto c;
}
```

**Y:** I see C code, I see labels, but where are the assertions?

**X:** Oops. Here it is with assertions inserted.

```c
// Program 1a
void s(int x, int y, int& z)
{ // precondition s: x == x0 && y == y0 && x0 > 0 && y0 > 0
  // purpose: return z == gcd(x0, y0)
  // s: x > 0 && y > 0 && x == x0 && y == y0
  int fac = 1;
inv: // x > 0 && y > 0 && gcd(x0, y0) == fac * gcd(x,y)
  if (x%2 == 0) goto a; else goto b;
a: // inv && even(x)
  if (y%2 == 0) { x /= 2; y /= 2; fac *= 2; goto inv;}
  else { x /= 2; goto d; }
b: // inv && odd(x)
  if (y%2 == 0) { y /= 2; goto b; }
  else goto c;
c: // inv && odd(x) && odd(y)
  if (x == y) { z = x*fac; return; }
  if (x < y) { y = (y-x)/2; goto b; }
  else { x = (x-y)/2; goto d; }
d: // inv && odd(y)
  if (x%2 == 0) { x /= 2; goto d; }
  else goto c;
}
```

Here are examples of the triples hiding here:

```
{a && even(y)} x /= 2; y /= 2; fac *= 2; {inv}
{a && odd(y)}  x /= 2;                   {d}
```

Floyd’s theorem applied to this example says that if the atomic triples are true, then the one for the whole function (which says that it returns the gcd) is true.
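Short of a proof, one can gain confidence in such triples by checking the assertions at run time. The following sketch instruments Program 1a with `assert`; the names `s_checked` and `euclid` are mine, with Euclid’s algorithm serving as an independent reference for gcd:

```cpp
#include <cassert>

// Runtime check of the assertions in Program 1a (a sketch, not a proof).
// euclid is an independent reference implementation of gcd.
int euclid(int a, int b) { return b == 0 ? a : euclid(b, a % b); }

void s_checked(int x, int y, int& z) {
  const int x0 = x, y0 = y;
  assert(x0 > 0 && y0 > 0);                        // precondition s
  int fac = 1;
inv:
  // inv: x > 0 && y > 0 && gcd(x0,y0) == fac * gcd(x,y)
  assert(x > 0 && y > 0 && euclid(x0, y0) == fac * euclid(x, y));
  if (x % 2 == 0) goto a; else goto b;
a:
  assert(x % 2 == 0);                              // a: inv && even(x)
  if (y % 2 == 0) { x /= 2; y /= 2; fac *= 2; goto inv; }
  else { x /= 2; goto d; }
b:
  assert(x % 2 == 1);                              // b: inv && odd(x)
  if (y % 2 == 0) { y /= 2; goto b; } else goto c;
c:
  assert(x % 2 == 1 && y % 2 == 1);                // c: inv && odd(x) && odd(y)
  if (x == y) {
    z = x * fac;
    assert(z == euclid(x0, y0));                   // postcondition h
    return;
  }
  if (x < y) { y = (y - x) / 2; goto b; }
  else { x = (x - y) / 2; goto d; }
d:
  assert(y % 2 == 1);                              // d: inv && odd(y)
  if (x % 2 == 0) { x /= 2; goto d; } else goto c;
}
```

Note that the full invariant is asserted only at the label inv; at the other labels only the parity conjunct is checked.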

**Y:** I think I see that your example triples are justified. I suppose that, once the assertions are there, one could check all the triples. But how can I find the necessary assertions? To do that I would need to understand the utterly obscure code that you first presented. That is, I would need to already see that the function is correct before I can even start thinking about assertions.

**X:** I agree that it’s hard to find useful assertions in existing code. The difficulty was acknowledged by Edsger Dijkstra:

> When concern for correctness comes as an afterthought, so that correctness proofs have to be given once the program is already completed, then the programmer can indeed expect severe troubles. If, however, he adheres to the discipline of producing correctness proofs as he writes his program, he will produce program and proof with less effort than just the programming alone would have taken. [8]

**Y:** Discipline indeed! More like black magic: “… producing correctness proofs *as he writes his program*”. I would like to see examples of that.

**X:** He doesn’t give any in [8]. I suppose that his subsequent books [11, 12] illustrate his idea of “… producing correctness proofs as he writes his program”.

**Y:** I’d like to go back to that gcd program. You gave two triples as examples. There should be more. I’d like to see them all.

**X:** The program has access to variables x,y,z, and fac. On entry the values of x and y are x0 and y0. On exit we want to have z == gcd(x0,y0). First a list of the assertions with their labels:

```
s:   x == x0 && y == y0 && x0 > 0 && y0 > 0
inv: x > 0 && y > 0 && gcd(x0, y0) == fac * gcd(x,y)
a:   inv && even(x)
b:   inv && odd(x)
c:   inv && odd(x) && odd(y)
d:   inv && odd(y)
h:   z == gcd(x0,y0)
```

Note that all the assertions except s and h imply inv. Next, the triples:

```
{s}              fac = 1                   {inv}
{inv && even(x)}                           {a}
{inv && odd(x)}                            {b}
{a && even(y)}   x /= 2; y /= 2; fac *= 2; {inv}
{a && odd(y)}    x /= 2;                   {d}
{b && even(y)}   y /= 2;                   {b}
{b && odd(y)}                              {c}
{c && x == y}    z = fac*x;                {h}
{c && x < y}     y = (y-x)/2;              {b}
{c && x > y}     x = (x-y)/2;              {d}
{d && even(x)}   x /= 2;                   {d}
{d && odd(x)}                              {c}
```

**Y:** I count twelve. Makes me think of one of the quotes from Alan Perlis: “If you have a procedure with ten parameters, you probably missed some.” How do you know that you have all triples?

**X:** There is a progression in the order they are written. In the beginning new assertions are needed. Then you have to add triples starting from the new assertions. But after the last four triples were added, no new assertions have appeared.

You don’t find a label s in the code because s is not in any postcondition. You don’t find a label h in the code because h is not in any precondition.

**Y:** You say that in {P}S{Q} P is the precondition, Q is the postcondition and S is the code. What does {P}S{Q} say as a formula of logic?

**X:** If P is true in state T and S transforms T to U, then Q is true in U. It connects to code by associating P and Q with locations in the code. Such a statement is true or false. Of course to be of any use, it has to be true. I wrote them down with a progression in mind, but once they are there, their order does not matter. I like to see them as independent “snippets of truth”.
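In symbols, reading S as a relation between the state before and the state after execution (a notational sketch only; the rest of this conversation stays informal):

```latex
\{P\}\,S\,\{Q\} \;\equiv\; \forall T\,\forall U\colon \bigl(P(T) \wedge S(T,U)\bigr) \rightarrow Q(U)
```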

**Y:** I like that: “programming by gathering snippets of truth”. I would like to make the generic snippet {P}S{Q} more precise. How about the following.

Q is true in a state T iff there is a satisfactorily terminating computation starting from location Q in state T.

And similarly for P. Then I could translate {P}S{Q} to the Prolog dialect of logic as:

```prolog
p(X,Y,Fac,Z) :- ..., q(X1,Y1,Fac1,Z1).
```

Here the … stands for Prolog code that describes the code of a triple from state (X,Y,Fac,Z) to state (X1,Y1,Fac1,Z1).

**X:** It looks backward; the opposite of the reading of {P}S{Q} in Floyd and Hoare, who reason from P to Q. In your way of making it precise, this reads

P is true in a state T iff there is a computation starting from the start location and ending in location P in state T.

And similarly for Q. Then I could translate {P}S{Q} to the Prolog dialect of logic as:

```prolog
q(X1,Y1,Fac1,Z1) :- p(X,Y,Fac,Z), ... .
```

**Y:** Right. There are apparently two translations, a forward one like yours, and a backward one like mine. As one can’t do both at the same time, I’ll pick the backward one. The following is the snippet-by-snippet translation.

```prolog
% Program 2
s(X, Y, Z) :- inv(X, Y, 1, Z).
inv(X, Y, Fac, Z) :- even(X), !, a(X, Y, Fac, Z).
inv(X, Y, Fac, Z) :- odd(X), b(X, Y, Fac, Z).
a(X, Y, Fac, Z) :- even(Y), !, X1 is X/2, Y1 is Y/2, Fac1 is 2*Fac, inv(X1, Y1, Fac1, Z).
a(X, Y, Fac, Z) :- odd(Y), X1 is X/2, d(X1, Y, Fac, Z).
b(X, Y, Fac, Z) :- even(Y), !, Y1 is Y/2, b(X, Y1, Fac, Z).
b(X, Y, Fac, Z) :- odd(Y), c(X, Y, Fac, Z).
c(X, Y, Fac, Z) :- X =:= Y, !, Z is X*Fac.
c(X, Y, Fac, Z) :- X < Y, !, Y1 is (Y-X)/2, b(X, Y1, Fac, Z).
c(X, Y, Fac, Z) :- X > Y, X1 is (X-Y)/2, d(X1, Y, Fac, Z).
d(X, Y, Fac, Z) :- even(X), !, X1 is X/2, d(X1, Y, Fac, Z).
d(X, Y, Fac, Z) :- odd(X), c(X, Y, Fac, Z).
```

I’ve taken the liberty of smoothing over the clumsy Prolog arithmetic by making use of two helper predicates, which I add here:

```prolog
even(X) :- X mod 2 =:= 0.
odd(X) :- X mod 2 =:= 1.
```

The result is a Prolog program, ready to run, as in

```
?- X is 123*4567, Y is 123*5678, s(X,Y,Z).
X = 561741,
Y = 698394,
Z = 123 .
```

**X:** I see what you did: you just translated every snippet to a procedure.

**Y:** Not a procedure in the conventional sense. For example, you have to somehow bundle the two snippets for inv into one conventional procedure. In Prolog you don’t have to combine any snippets; you translate each of them as is.

**X:** If you are willing to overlook this trivial difference, then I can do in C [14] what you just did in Prolog.

```c
// Program 3

// the function declarations (not always necessary)
void s(int x, int y, int& z);
void inv(int x, int y, int fac, int& z);
void a(int x, int y, int fac, int& z);
void b(int x, int y, int fac, int& z);
void c(int x, int y, int fac, int& z);
void d(int x, int y, int fac, int& z);

// the function definitions
void s(int x, int y, int& z)
{ // precondition s: x == x0 && y == y0 && x0 > 0 && y0 > 0
  // purpose: return z == gcd(x0, y0)
  inv(x, y, 1, z);
}
void inv(int x, int y, int fac, int& z)
{ // precondition inv: x > 0 && y > 0 && fac*gcd(x,y) == gcd(x0,y0)
  if (x%2 == 0) a(x,y,fac,z); else b(x,y,fac,z);
}
void a(int x, int y, int fac, int& z)
{ // precondition a: inv && even(x)
  if (y%2 == 0) inv(x/2,y/2,fac*2,z); else d(x/2,y,fac,z);
}
void b(int x, int y, int fac, int& z)
{ // precondition b: inv && odd(x)
  if (y%2 == 0) b(x,y/2,fac,z); else c(x,y,fac,z);
}
void c(int x, int y, int fac, int& z)
{ // precondition c: inv && odd(x) && odd(y)
  if (x == y) { z = x*fac; return; }
  if (x < y) b(x,(y-x)/2,fac,z); else d((x-y)/2,y,fac,z);
}
void d(int x, int y, int fac, int& z)
{ // precondition d: inv && odd(y)
  if (x%2 == 0) d(x/2, y, fac, z); else c(x,y,fac,z);
}
```

Come to think of it, this is a way to structure code that is trivial to verify, given its close correspondence to the snippets of truth. We have discovered how to do better than “… producing correctness proofs as he writes his program” by writing a correctness proof *before* writing the program!

**Y:** Still, even such C code is at one remove from truth, because it is not itself logic. Pure Prolog is.

**X:** Minor quibble: the (essential) use of “is” is not part of pure Prolog, which disqualifies the Prolog program as logic. And what are those cuts doing there? Major quibble: I deny that the Prolog clauses are any more logic than properly written C. The Prolog clause states how a problem can be reduced to sub-problems. It’s great that you can do that in logic. But there is nothing more in the Prolog clause than that problem reduction, which can be expressed in C just as well. In fact, in 1932 Andrej Kolmogorov pointed out [9] that predicate calculus can be interpreted as a calculus of problems solved and to be solved.

**Y:** I see that your translation of the snippets to C expresses just as well what the logic clauses express. The difference between Prolog and C is that in Prolog you can’t do anything but write sentences of logic. Prolog can’t force you to make these sentences true, but it does force you to write your snippets in such a way that they are true or false according to the formal semantics of logic. In C you can write things that have no logical interpretation—most C programmers do it all the time. It may be that just now you were the first ever to write logic in C.

**X:** Of course my translation of the snippets to C functions resulted in a ridiculous program. The complex structure of Program 1 is caused by the desire to avoid redundant tests for the parity of x and y. Such a test only requires a single shift, one of the fastest instructions. To save some of these, the program repeatedly unleashes the whole protocol of procedure entry and exit.

However, inspection of that code shows that it is an easy victim of tail recursion optimization, a standard technique. So instead of the procedural version, I should have presented the result of this optimization:

```c
// Program 4
void s(int x, int y, int& z)
{ // precondition s: x == x0 && y == y0 && x0 > 0 && y0 > 0
  // purpose: return z == gcd(x0, y0)
  int fac = 1;
inv:
  if (x%2 == 0) goto a; else goto b;
a:
  if (y%2 == 0) { x /= 2; y /= 2; fac *= 2; goto inv;}
  else { x /= 2; goto d; }
b:
  if (y%2 == 0) { y /= 2; goto b; }
  else goto c;
c:
  if (x == y) { z = x*fac; return; }
  if (x < y) { y = (y-x)/2; goto b; }
  else { x = (x-y)/2; goto d; }
d:
  if (x%2 == 0) { x /= 2; goto d; }
  else goto c;
}
```
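The transformation itself is easiest to see on a smaller example, Euclid’s gcd rather than Stein’s (my own illustration, with hypothetical names `gcd_rec` and `gcd_iter`): the tail call becomes a re-assignment of the parameters followed by a jump back to the top.

```cpp
#include <cassert>

// Tail-recursive version: nothing remains to be done after the
// recursive call returns, so the call is in tail position.
void gcd_rec(int x, int y, int& z) {
  if (y == 0) { z = x; return; }
  gcd_rec(y, x % y, z);  // tail call
}

// Tail-recursion optimization by hand: the tail call is replaced
// by re-assigning the parameters and jumping back to the top of
// the function body, in the style of Program 1.
void gcd_iter(int x, int y, int& z) {
top:
  if (y == 0) { z = x; return; }
  int r = x % y;
  x = y; y = r;
  goto top;
}
```

Both versions compute the same result; the second runs in constant stack space.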

**Y:** Well, well. Quite a transformation! Are you sure that all these changes are necessitated by tail-recursion optimization?

**X:** Yes. It’s perfectly standard. If you are at all a programmer, you do it half asleep.

**Y:** Did you know that since around 1980 Prolog compilers perform tail recursion optimization? So what shows as procedure calls in my translation of the snippets is lost in compilation.

**X:** No, I didn’t know that. Does this mean that your Prolog program runs as fast as my optimized C program?

**Y:** Hey, wait a moment! That result of your manual tail-recursion optimization is identical to Program 1!

**X:** Isn’t it to be expected that you return to a similar program by following the route that we did?

**Y:** Of course, but the code is *identical*. Even the minor inconsistency in formatting is reproduced. What’s going on?

**X:** No need to freak out. We act under the illusion of free will while in actual fact we may be puppets in somebody’s game. Any time now the puppet master will close down the show: last chance for a wrap-up.

What’s new to me is that one can start by writing Hoare triples for an as yet unknown algorithm, guided by the goal and the constraints in achieving it. In this case the goal was computing the gcd in logarithmic time under the constraint of not repeating a parity test. It was clear how the set of triples needed to be expanded, and it was clear when the set was complete. We could have skipped the detour via Prolog, gone directly to C functions, and then performed tail-recursion optimization. Because we started with the assertions, it was clear how to verify the resulting code.

**Y:** For me this conversation has been useful in clarifying Prolog programming. For the *programmer*, logic has no advantage that is not available in C. The opposite is true for the *language designer*. Only by writing, with Kowalski’s input, a theorem-prover could Colmerauer and his team have produced the miracle that is the programming language Prolog.

### Acknowledgments

Thanks to Paul McJones and Richard O’Keefe for their help.

### References

[1] “The early years of logic programming” by R.A. Kowalski, Comm. ACM, January 1988.

[2] *Logic for Problem-Solving* by R.A. Kowalski. North Holland Elsevier, 1979.

[3] “Predicate logic as a programming language” by R.A. Kowalski. *Information Processing 74*, North-Holland, 1974, pp 569–574.

[4] “Algorithm = Logic + Control” by R.A. Kowalski. Comm. ACM 22:7(1979), pp 424–436.

[5] “Proof of algorithms by general snapshots” by P. Naur. *BIT Numerical Mathematics*, 6:4 (1966), pp 310–316.

[6] “Assigning meanings to programs” by R. Floyd. *Proc. Symp. Appl. Math.* vol. 19: “Mathematical Aspects of Computer Science”, 1967, pp 19–32.

[7] “An axiomatic basis for computer programming” by C.A.R. Hoare. Comm. ACM 12:10(1969), pp 576–580.

[8] “Concern for correctness as a guiding principle for program composition” by E.W. Dijkstra. Pages 359–367, *The Fourth Generation*, Infotech, 1971. See also EWD 288.

[9] “Zur Deutung der intuitionistischen Logik” by A. Kolmogorov. *Mathematische Zeitschrift* 35:1 (1932), pp 58–65.

[10] *Elements of Programming* by A. Stepanov and P. McJones. Addison-Wesley, 2009.

[11] *A Discipline of Programming* by E.W. Dijkstra. Prentice-Hall, 1976.

[12] *A Method of Programming* by E.W. Dijkstra and W.H.J. Feijen. Addison-Wesley, 1988.

[13] *The C Programming Language* by B.W. Kernighan and D.M. Ritchie. Prentice-Hall, 2nd edition 1988, p. 87.

[14] Strictly, C++ rather than C. Calling it C++ would wrongly call up associations with object-oriented programming. What we want here is C as-it-should-be, that is, with only call-by-reference added.

### Comments

**A reader (May 31, 2017):**

This is enlightening, and the dialogue format very enjoyable.

I have a question though: does tail-call optimisation really make your program go faster (like red stripes do for cars)? I thought the effect was just that recursive procedures won’t eventually blow the stack.

**MvE:** I don’t think it is like red stripes on cars: it does result in a gain in execution time, though how much will depend on the Prolog compiler. The main advantage is the one you mention: saving of space.