Logic, Objects, and Programming

Logic programming presents better credentials than any other approach to the implementation of computer applications. In addition to providing general-purpose procedures, it is natural for defining and maintaining databases, for language processing, for knowledge representation, and for parallel processing. It casually picks up numerical and combinatorial computation as special cases of constraint logic programming. Even objects that change state by interchange of messages are elegantly expressed [1].

Yet it was object-oriented programming that took off rather than logic programming. Why? Admittedly, during the eighties, when the crucial decisions were taken by industry and universities, not all of the strengths of logic programming were sufficiently developed. But that was only true of constraint logic programming, and the lack of development in that particular branch is not the explanation.

Rather, to understand the ascendance of object-oriented programming, we need to remember some history, beginning with the 1960s, when the difficulty of software development was first identified as a significant problem. Rocket science was good enough to put a man on the moon; computer science was not good enough to maintain control over an important project like developing IBM’s System/360. The latter trauma led to the widely quoted book by F.P. Brooks [2].

The difficulties in programming can be divided under the main headings of control structure and data structure. That control structure has been a problem explains the furore over “structured programming” in the 1970s. More important to the solution of the problem of control structure is that it can be made subordinate to data structure. This was folk wisdom already in the 1960s, as expressed for example by the widely quoted passage in chapter 9 of Brooks’s book:

Show me your flowcharts and conceal your tables [3], and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.

It follows that improvement in data structuring was the more urgent concern. Classes, as abstract data types, were readily seen as a solution. Objects with changing state were a welcome added bonus for the modeling of dynamic systems. In contrast, the most visible aspect of logic programming was the motto “Algorithm = Logic + Control” [4]. None of the three terms in the equation was seen to address data structuring. Indeed, logic programming can be viewed as a radical control alternative to structured programming.

Is it then the case that logic has nothing to say to programming? My conclusion here will be that logic programming is only one channel of communication between the two worlds. Another possibility is to take a wider view than that of logic programming and follow McCabe [5] in using the multiplicity of theories of logic as a starting point. A logic program is a single theory, implicit and unidentified. McCabe widens the point of view by making theories explicit and exploiting the freedom thus obtained by creating multiple theories and having them interact during logic program execution. But within each theory, logic programming is adopted unchanged.

In this paper I’m taking a more radical approach: I’m forgetting, at least temporarily, the route that logic programming took to get from logic to programming. The logic programming pioneers were not influenced by what little of object-oriented programming may have existed in the early 1970s. In choosing another route I’m very much aware of object-oriented programming and ask: Where do logic and object-oriented programming overlap?

An example of such an overlap is the definition and implementation of rational numbers. On the one hand, a class for rational numbers is such a handy example of object-oriented programming that it is a favourite introduction of textbook authors. On the other hand, the rational numbers are found in algebra as a concrete instance of the abstract concept of a field. In algebra the field is defined by means of axioms. Though usually stated informally, such axioms are readily translated to a formal theory of logic.

A formal logic theory for fields and similar algebras consists of the following components: a signature, a set of axioms, and an interpretation.

Signature: The signature is a collection of operations. For a field, it contains the binary operations + (addition) and × (multiplication) and the unary operation – (for the additive inverse) as well as the unary ´ (my choice of HTML makeshift for the multiplicative inverse). The constants 0 and 1 should be included in the signature as nullary operations.
Axioms: The signature only defines the language for expressions that name field elements. What gives the algebra its specific properties are the axioms. For a field, these state properties such as the distributive law a×(b+c) = a×b + a×c, as well as other laws; the full list is given after the description of the interpretation.
Interpretation: The signature and the axioms of the field are abstract in the sense that they are applicable equally well to all concrete instances of the field: rationals, reals, complex numbers, and integers modulo a prime number have equal claim on being regarded as a field. To give meaning to the operations of the signature and the axioms we interpret them by a concrete algebra. If such an algebra makes the field axioms true, then it is, by definition, a field. Each of the algebras just mentioned can be an interpretation that makes the axioms true.
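
For definiteness, the axioms alluded to above are the standard ones of algebra. In the notation of the signature they read:

  • a+(b+c) = (a+b)+c and a×(b×c) = (a×b)×c (associativity)
  • a+b = b+a and a×b = b×a (commutativity)
  • a+0 = a and a×1 = a (identity elements)
  • a+(–a) = 0 and, for a ≠ 0, a×a´ = 1 (inverses)
  • a×(b+c) = a×b + a×c (distributivity)
  • 0 ≠ 1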

My reason for considering the logical theory of the field as an abstract algebra is that the theory is readily modeled in a program. The signature translates to an abstract class. A non-abstract class that implements the abstract class plays the role of interpretation. This is worked out in the Appendix.

Let me give here a summary of what is to be found in the Appendix. My goal is to model the logical theory-cum-interpretation as closely as possible in a program. The test for the closeness of the modeling is whether a mathematical formula that denotes a rational can be transcribed to code that creates that rational. In other words, I made the implementation as declarative as possible. This leads to class Rational extends Field. It turns out that in the pursuit of declarativity it was natural to make the instances of the class immutable. In Scala, the language of my choice, this is a natural thing to do.

The declarative nature of Rational prevents this class from being an example of object-oriented programming. At least not if one takes as essential that objects have a state and that this state can change as a result of the execution of the class’s methods. As mutable state is generally regarded as a defining characteristic of objects in programming, I refrain from referring to instances of Rational as “objects”. Let’s call them “class instances”. Though it is probably possible to engineer immutable class instances in most object-oriented languages, they are unusual. In my limited experience Scala is the only object-oriented programming language that makes it natural to define immutable class instances. In [6] they are promoted as “functional objects”. As advantages the authors mention that they are easier to reason about, that they can be passed around without fear that unknown code will cause unanticipated changes, that concurrent code accessing the same immutable class instance cannot lead to a race condition, and that hashtable keys of such objects can be counted on to remain valid.
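
The last of these advantages is easy to demonstrate from the other side. The following sketch is my own, with a hypothetical mutable class MutableRat that appears nowhere in the Appendix; it shows how a key that changes state silently disappears from a hash set:

import scala.collection.mutable

// Hypothetical mutable rational, defined only to show the hazard.
class MutableRat(var numer: Int, var denom: Int) {
  override def hashCode: Int = 31 * numer + denom
  override def equals(o: Any): Boolean = o match {
    case r: MutableRat => r.numer == numer && r.denom == denom
    case _             => false
  }
}

val key = new MutableRat(1, 3)
val set = mutable.HashSet(key)
key.numer = 2                 // the key changes state after insertion
println(set.contains(key))    // typically false: the key is filed under its old hash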

These advantages are bought at a cost: when evaluating a complex expression in terms of Rational every intermediate result becomes a new class instance, which instantly turns to garbage. The modest approximation of the square root of 2 in the Appendix creates dozens of such instances.

This suggests an optimized version of Rational, called Rat in the Appendix, where class instances do have state. The same complex expression is evaluated at the expense of a single instance of Rat, with each new intermediate result overwriting the previous one. This instance is an object in the usual sense: it has a state, and the rational operations write their result over this state. Rat was conceived as an optimization of Rational, which, in turn, is a rendition of a logical theory.

In this way object-oriented programming has been re-invented as an optimization of what may be regarded as a logic program in the new sense. Once object-oriented programming has been re-invented in this way, it is free to branch out in a direction independent of logic, typically as a tool for simulating dynamic systems.

What lessons can be drawn from the example in the Appendix? Any such lessons arise from two sources: (1) places where I have not been able to faithfully render the logical theory and (2) the shortcomings of logic that have been hidden by the choice of such a meagre example of rationals-as-fields.

First, the difficulties that prevented me from faithfully rendering the logical theory as an object-oriented program. One was the apparent necessity, in Scala at least, of having the first argument of a function implied. This is natural for sending a message to an object changing state, but not for declarative, class-oriented, programming. From my point of view, class-oriented programming is important as a link between logic and object-orientation. It should be added that any shortcomings as I perceive them may well be due to my limited understanding of the language of my choice and to the small number of languages that I am familiar with.

It is easier to find in the example suggestions for improvements to logic, by which I understand first-order predicate logic.

I could only find a similarity between class structure and logical theory by choosing fields, an unusually simple structure even within algebra. Had I chosen vector spaces, I would have needed two domains in the logic interpretation, one for the scalars and one for the vectors. I would have needed many-sorted logic to formulate a logical theory. Many-sorted logic, if acknowledged at all (see Enderton [7] for one of the few examples), is dismissed as not adding anything of mathematical interest. And the logicians are right. For engineers, however, mathematical trivia can be of crucial importance.
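
To make the two sorts concrete, here is a sketch of what a vector-space signature could look like as a class. It is my illustration, not something worked out in the Appendix; the type parameters S and V play the role of the two domains of a many-sorted interpretation:

abstract class VectorSpace[S, V] {
  def zero: V                  // the zero vector
  def add(u: V, v: V): V       // vector addition
  def scale(a: S, v: V): V     // multiplication of a vector by a scalar
}

// One interpretation: the plane as a vector space over the Doubles.
object Plane extends VectorSpace[Double, (Double, Double)] {
  def zero = (0.0, 0.0)
  def add(u: (Double, Double), v: (Double, Double)) = (u._1 + v._1, u._2 + v._2)
  def scale(a: Double, v: (Double, Double)) = (a * v._1, a * v._2)
}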

Another reason why logic as currently conceived cannot take the place of classes in programming is the absence of hierarchy. Again, it may not be difficult to extend logic in this direction. In algebra hierarchy plays an important part. Fields, for example, are hierarchically defined: as a ring with multiplicative inverse, where a ring is defined as a certain coupling of groups, where a group is defined as a monoid with an inverse, where a monoid is defined as a semigroup with unit: four levels of hierarchy. In classes this hierarchical structure can be expressed in a natural way. It seems a routine exercise to extend Gallier’s work [8] on many-sorted first-order predicate logic to include hierarchy with algebra as example. To perform the considerable amount of work required for such an exercise requires motivation. Such motivation is lacking for logicians. But it may be stirred up among engineers (of the software kind) motivated by the apparent difficulty in converging on an object-oriented programming language and the resulting proliferation of such languages. The guidance and constraint can be provided by a suitably extended version of logic.
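
The four levels translate into classes directly. The following sketch is mine, not a fragment of the Appendix; each level adds one operation to the one below it:

abstract class Semigroup {
  type E                      // the carrier
  def op(a: E, b: E): E       // an associative binary operation
}
abstract class Monoid extends Semigroup {
  def unit: E                 // identity: op(a, unit) == a
}
abstract class Group extends Monoid {
  def inverse(a: E): E        // op(a, inverse(a)) == unit
}
// A ring couples an additive group with a multiplicative monoid over one
// carrier; a field adds the multiplicative inverse for nonzero elements.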

[1] Ehud Shapiro and Akikazu Takeuchi. “Object oriented programming in Concurrent Prolog.” New Generation Computing, 1:25–48, 1983.
[2] Frederick P. Brooks, Jr. The Mythical Man-Month. Addison-Wesley, 1975.
[3] Brooks speaks of programming in assembler language, where the layout of a data structure is specified by a “table”.
[4] R.A. Kowalski. Algorithm = Logic + Control. Communications of the ACM, 22:424–436, 1979.
[5] Francis G. McCabe. Logic and Objects. Prentice Hall, 1992.
[6] Martin Odersky, Lex Spoon, and Bill Venners. Programming in Scala. Artima Press, 2007.
[7] H.B. Enderton. A Mathematical Introduction to Logic. Academic Press, 1972.
[8] Jean H. Gallier. Logic for Computer Science. Harper and Row, 1986.

Appendix

Here you can see my rendering in Scala of the field signature.

abstract class Field {
  def +(that: Field): Field   // addition
  def neg(): Field            // additive inverse
  def *(that: Field): Field   // multiplication
  def inv(): Field            // multiplicative inverse
  def num(): Int              // numerator of the canonical form
  def den(): Int              // denominator of the canonical form
  def display(): Unit
}

According to the algebraic definition you can’t subtract; you add the additive inverse, hence write a+(-b) instead of a-b. So there is only a unary minus, rendered here as “neg”. Similarly for the multiplicative inverse: a × b´ instead of a/b. In turn ´ is rendered as “inv”.
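
In code, then, subtraction and division are spelled out in terms of these methods. A small illustration, mine rather than part of the original listing, using the class Rational defined below:

val a: Field = new Rational(3, 4)
val b: Field = new Rational(1, 2)
val difference = a + b.neg()   // a - b, via the additive inverse
val quotient   = a * b.inv()   // a / b, via the multiplicative inverse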

There are some discrepancies:

  • The field operators + and × have two arguments; the corresponding class methods show one. This is a consequence of a convention that is common among object-oriented programming languages and which is at variance with mathematics: the first argument of every method is implicit and its type is that of the class in which the method resides. In this way the signatures of + and × can be seen to match their corresponding field operations.
  • The convention of giving every method an implicit first argument is a mere nuisance when the number of arguments is at least one. It becomes more serious when that number is zero, as is the case with the field’s nullary operations 0 and 1. This explains the absence of corresponding methods in the abstract class definition (a possible workaround is sketched after this list).
  • The definitions of operations on field elements are necessarily given in terms of operations on more primitive values defined elsewhere. This can only be done if there is a mapping of field elements to these more primitive types. In the case where the abstract field is embodied by the rationals, a good choice for these primitive values is two integers. These integers can be thought of as numerator and denominator. Hence the methods of the abstract class Field named num and den. As these are not applicable to fields in general, but specific to rationals, such methods should not appear in the abstract class. Regrettably I have not been able to avoid this.
  • The definition of an algebra does not cater to mundane needs such as that of writing down an element of the carrier in some kind of representation. In programming no need, however mundane, can be ignored. Hence the method display.
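
As for the missing nullary operations, one conceivable workaround, a sketch of my own rather than anything in the class as given, is a companion object that holds the constants outside the method-receiver convention:

object Rational {
  val zero: Field = new Rational(0, 1)   // the field's 0, a value rather than a method
  val one: Field  = new Rational(1, 1)   // the field's 1
}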

A class for rationals

In algebra the rationals have the signature and axioms of the field. What makes them rationals rather than some other algebra with the same field signature and axioms is the carrier, which consists of equivalence classes of pairs of integers with the second of the pair nonzero. The equivalence relation is the one in which the pair <n,d> is equivalent to <n’,d’> iff n×d’ = n’×d. Each equivalence class contains exactly one pair whose components have no common factor greater than 1 and whose second component is positive. This pair can be chosen as the canonical representative of the equivalence class. This mathematical fact suggests that instances of the class Rational have as members two integers in this canonical form. With these algebraic facts in mind, let us consider the following class definition for the class Rational.
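
class Rational(n: Int, d: Int) extends Field {
  // A rendering that keeps instances immutable and in canonical form,
  // as described above: no common factor, denominator positive.
  private def gcd(a: Int, b: Int): Int = if (b == 0) a else gcd(b, a % b)
  private val g: Int = { val k = gcd(n.abs, d.abs); if (d < 0) -k else k }
  private val numer: Int = n / g
  private val denom: Int = d / g
  def num(): Int = numer
  def den(): Int = denom
  // Every field operation creates a new instance; no state is updated.
  def +(that: Field): Field =
    new Rational(numer * that.den() + that.num() * denom, denom * that.den())
  def neg(): Field = new Rational(-numer, denom)
  def *(that: Field): Field = new Rational(numer * that.num(), denom * that.den())
  def inv(): Field = new Rational(denom, numer)
  def display(): Unit = println(numer + "/" + denom)
}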

In pursuit of the ideal of keeping the class as declarative as possible I wrote the above function implementations, which generate garbage with gay abandon.

As an example of the use of such a style of class, let’s consider something that exercises some of the rational operations: the sequence of fractions c0, c1, c2, … where every next element cc is obtained from the previous element c by the formula cc = 1 + 1/(1+c). In case you are curious, the limit of this sequence is the square root of two. The declarative nature of the class shows in the fact that the formula carries over almost verbatim to line 03 below, with inv standing in for the division by 1+c.

00: val one: Field = new Rational(1, 1)
01: def contFrac(n: Int): Field = {
02:   if (n == 0) one
03:   else one + (one + contFrac(n - 1)).inv()
04: }
05: val x = contFrac(10)
06: x.display()

Objects

The advantage of declarativity, which shows in the ability to use the mathematical formula for the recurrence relation in the code, comes at a price: every evaluation of the formula, in line 03, causes three new instances of Rational to be created. Here there are objects, or at least class instances, lots of them, but none changes state.

Let us write a concrete class, Rat, as an alternative implementation of the abstract class Field, with methods for the field operations that do change the state, and use this class to create the desired approximation to the square root of two, but this time with a reasonable number of object creations.

I called the declaratively usable class “Rational”. I’m calling the optimized version “Rat”. Both are non-abstract classes implementing the abstract class Field. Ideally, the abstract class should not change for an optimization. The methods for the field operations do keep their names and parameters, though they now return nothing instead of a new instance. But the access methods change, and I have not been able to keep them out of the abstract class.

abstract class Field {
  def +(that: Field): Unit    // addition, in place
  def neg(): Unit             // additive inverse, in place
  def *(that: Field): Unit    // multiplication, in place
  def inv(): Unit             // multiplicative inverse, in place
  def getNum(): Int
  def getDen(): Int
  def setNum(that: Int): Unit
  def setDen(that: Int): Unit
  def display(): Unit
}

The distinction between Rational and Rat can be epitomized by the addition of two instances:

val p = new Rational(1,3)
val q = new Rational(1,5)
(p + q).display()

Here the sum was created as a new instance. This allows the faithful mimicking of algebraic formulas.

val p = new Rat(1,3)
val q = new Rat(1,5)
p.+(q)
p.display()

Here the sum was created by updating the existing instance p.

It so happens that in Scala p + q and p.+(q) are syntactically equivalent. So I could, perversely, have written instead

val p = new Rational(1,3)
val q = new Rational(1,5)
(p.+(q)).display()

and

val p = new Rat(1,3)
val q = new Rat(1,5)
p + q
p.display()

Though this syntactic legerdemain may be welcomed by most users, it is unfortunate for my present purpose.

Though I tried to keep Rat as similar as possible to Rational, the changes are considerable:

class Rat(n: Int, d: Int) extends Field {
  private var numer: Int = n
  private var denom: Int = d
  // Restore the canonical form: no common factor, denominator positive.
  private def gcd(a: Int, b: Int): Int = if (b == 0) a else gcd(b, a % b)
  private def normalize(): Unit = {
    val k = gcd(numer.abs, denom.abs)
    val g = if (denom < 0) -k else k
    numer /= g; denom /= g
  }
  normalize()
  def getNum(): Int = numer
  def getDen(): Int = denom
  def setNum(that: Int): Unit = { numer = that }
  def setDen(that: Int): Unit = { denom = that }
  // The field operations overwrite the state of this instance
  // instead of creating a new one.
  def +(that: Field): Unit = {
    numer = numer * that.getDen() + that.getNum() * denom
    denom = denom * that.getDen()
    normalize()
  }
  def neg(): Unit = { numer = -numer; normalize() }
  def *(that: Field): Unit = {
    numer = numer * that.getNum()
    denom = denom * that.getDen()
    normalize()
  }
  def inv(): Unit = {
    val temp = numer; numer = denom; denom = temp
    normalize()
  }
  def display(): Unit = println(numer + "/" + denom)
}

The main thing to notice is that the methods for +, neg, *, and inv (alias +, -, ×, and ´) update the instance rather than create a new one. As a result, the previous program for approximating the square root becomes:

00: def itFrac():Rat = {
01:   val one = new Rat(1,1)
02:   val c = new Rat(1,1)
03:   var i:Int = 1
04:   while (i <= 10) {
05:     c.+(one); c.inv(); c.+(one)
06:     i += 1
07:   }
08:   c
09: }
10: itFrac().display()

Whereas before I was able to transcribe the formula directly, here I had to come up with a more or less clever sequence of updates that ends up with the formula’s value. There are several ways of doing that. My current favourite is the one you find in line 05.
