In April 2008 I wrote “Where Have All the Great Programmers Gone?”. In trying to answer the question, I contrasted the contemporary introduction to programming with the way it was learned in the 1950s. Of the earlier environment I wrote the following passage:
… two favourable factors existed. The first was that one of the prerequisites for being a promising oddball was that of having a nontrivial college degree. Examples were philosophy (J. A. Robinson), English literature (Mark Halpern), Classics (C. A. R. Hoare), Physics (E.W. Dijkstra). The degree ensured, if not exceptional intelligence, at least not a debilitating lack of it. Being thrown into the deep end in this way was educative, something that cannot be said of the typical first-year programming text, dumbed down in the way that only Educators have the secret of.
A Programmer’s Place (APP) has yet to snag Hoare or Halpern, but was fortunate to find J.A. Robinson available for an interview.
In 1965 Robinson published “A Machine-Oriented Logic Based on the Resolution Principle”. This was a bombshell in logic and in computing. In the 1970s it led to a rift in the Artificial Intelligence community. The rift has healed over since then, with beneficial results to all of AI. Nowadays Robinson is not granting a lot of interviews, so we are doubly fortunate in having him here.
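For readers who have not met the paper, the resolution rule itself is strikingly compact: from two clauses that contain a complementary pair of literals, derive the union of the clauses minus that pair. What follows is a minimal propositional sketch only (it omits the unification step that makes Robinson's first-order version so powerful); the data representation is my own choice for illustration:

```python
def resolve(c1, c2):
    """Propositional resolution. A clause is a frozenset of literals;
    a literal is a (name, polarity) pair. Returns all resolvents of
    the two clauses, one per complementary pair found."""
    resolvents = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            # Drop the complementary pair and merge what remains.
            resolvents.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
    return resolvents

# From {P, Q} and {not-P, R}, resolution derives {Q, R}.
c1 = frozenset([("P", True), ("Q", True)])
c2 = frozenset([("P", False), ("R", True)])
print(resolve(c1, c2))
```

Repeated to exhaustion on a clause set, this single rule yields a complete refutation procedure: the empty clause is derivable exactly when the set is unsatisfiable.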
Alan, the phrase “promising oddball” in the passage I just quoted is something I picked up in a conversation with you a long time ago. At the time I did not have an opportunity to ask you how you got into programming. Can you say something about that now?
In September 1956 I switched from academe to industry in less than 40 hours. I passed my PhD oral defense (at noon on a Saturday, in Princeton) and became an Operations Research Engineer at the DuPont Company at 8 am the following Monday in Wilmington, Delaware. They handed me the Univac 1 instruction manual, and my new career began. There seemed a huge abyss between full-time concentration on David Hume’s theory of Causation and Probability and total immersion in — what? — fixed-point arithmetic, register shifting, reading from both magnetic tape and punched paper tape, mercury delay-line memory latencies, Runge-Kutta-Gill numerical integration, flow charts, and Linear Programming using George Dantzig’s (then) brand-new Simplex algorithm. There was little or no systematic programming methodology. It was word-of-mouth and trial-and-error. We visited Remington Rand and met some of their programmers, including a lady naval officer, Grace Hopper, who was then designing Cobol.
I never programmed in assembler until I was in my fifties, which was after Algol 60, Pop-2, APL, Lisp, Prolog, Fortran, Cobol, Basic, C, and C++. I found assembler fascinating. The computer was on a board of 4 by 6 inches, sitting next to my laptop. A single push on a button and the program instantly assembled, loaded and ran. Exhilarating!
For us assembler was, in retrospect, a curse. We eventually (1958) visited John Backus & Co. at IBM and thereupon improved our productivity by a couple of orders of magnitude. Now we could write neat, intelligible Fortran programs instead of vast machine-language kludges. We had never heard of Dijkstra.
Nor had Dijkstra, if you’ll pardon the pun. He did his PhD around the same time as yours.
Ha. Those were the days. Backus’ Fortran team was the prototypical bunch of oddballs. I don’t know whether any of them were classics majors or philosophers, but it seems not unlikely that some of them were. The whole point was to find reasonably smart people and expect them to learn the new complexities and principles quickly, which they did.
You were a philosophy PhD and Hoare read Classics at Oxford. Dijkstra did his undergraduate work in physics, but attended “Gymnasium Beta” as his secondary school. At the time this was a venerated institution; it has since disappeared. This program was “science high school” in the American sense, plus, in each of the six years, five lessons of Latin and five lessons of Greek per week. Dijkstra attributed the special intellectual edge of two of his acquaintances (C.S. Scholten and A. van Wijngaarden) and, presumably, of himself, to the fact that they had the benefit of this kind of schooling. I have heard that the remarkable crop of scientists from Hungary (von Neumann, Wigner, Teller, Erdős, Szilard, Gabor, and others) attended similar schools there.
From this I conclude that it is beneficial for brains between ages 12 and 18 to be stretched more than was, and is, usual. Perhaps the subject matter does not matter too much. On the other hand, one must be careful in finding a replacement for the tried and true Latin and Greek. Chess or computer programming come to mind as untried, but ideologically more acceptable, candidates.
Latin and Greek are much overrated, I think, especially by people like you who never had any. I remember once at Donald [Michie]’s house in Dick Place a conversation with Christopher Strachey in which the subject was the alleged benefit of Latin and Greek for mastery of English. He said: “I’d rather study English”.
Winston Churchill would have agreed. In school he was too dumb for Latin or Greek, and was condemned to study English instead. In retrospect he was grateful.
But I disagree with Strachey. English is not a good subject for study. You pick it up by osmosis, as I did in my teens. This holds a fortiori for grade-school children immersed in English-speaking families. However, learning grammar is excellent for teen-age brains, but then it had better be serious grammar, like the traditional one for Latin.
What do you think does help towards excellence in logic or programming?
Intellectual ability, tenacity, and so on, and these are general-purpose traits. One has what one has, from the start and throughout. Specific knowledge and skills take time and hard work to acquire. Early life is a fertile period for that, but there is no cut-off point in an individual’s development after which he or she must carry on with only what is already on board.
Donald Michie, for example, seemed to take to cryptanalysis completely naturally, as a classicist fresh from Rugby School at age 18. Within two or three years he was code-breaking on the same intellectual level as mathematicians like Turing, Max Newman, Shaun Wylie, and J.H.C. Whitehead. Donald always had a sense of having been deprived, in youth, of a mathematical education, as indeed, like me, he had been. That was one of the things he and I had in common. Our developments then diverged. He went into biology and medical research, and focussed intensely on that area. I did a similar, but much less well-organized, thing with mathematics and logic.
It is said that it takes ten thousand hours or so of practice and study to master a major skill (e.g. concert pianist) or area of knowledge (e.g. theoretical research physicist). Mozart had already put in this much practice by the time he was eleven. Because of the wonderful plasticity of the developing brain, early life is an excellent time for this sort of preparation. I think, however, that some of the traditional educational practices in schools drag out knowledge acquisition unduly, and that one can drink in (given the right motivation) a new knowledge area much faster. As Seymour Papert (following Piaget) always stressed, it’s a matter of taking in the right ideas at the right time, and using them efficiently to acquire more, in a kind of chain reaction.
When you visited Edinburgh in 1974, Bob Kowalski and I came to your office to tell you about logic programming and Prolog, both very new then. We thought you should be the first to know of this latest and most spectacular outgrowth of your brainchild, Resolution Theorem Proving. You were gratifyingly pleased and excited. But when I wanted you to promise to program in Prolog from then on, you demurred, saying that Lisp would remain your true love. I remember feeling shocked and betrayed. Only much later, when I read the Abelson and Sussman book, did I understand.
But for now, how did you, a Fortran boy from a nice home, E. I. du Pont de Nemours and Company, get to fall in love with … er … Lisp?
I first encountered Lisp when John McCarthy rang my doorbell the day after we moved into a house in Menlo Park in the summer of 1965. I had published the Resolution paper a few months earlier and McCarthy had (in 2 hours!) written and run a Lisp program embodying the proof procedure, with, he said, minimal debugging. This seemed a miracle to me, who had for years been flailing about, first with assembly languages and then with Fortran. The gruff McCarthy, who was by then already well known, was also something of an astonishing revelation, but that is another story. I bought a copy of the Lisp 1.5 manual, and found it brilliant, but with strangely clumsy, MIT-ishly bold and naïve blunders (involving QUOTE and functions as objects, FUNARGs and all that, eventually fixed years later in Scheme but still there in Common Lisp).
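The FUNARG problem Robinson alludes to deserves a gloss: under Lisp 1.5’s dynamic scoping, a function passed around as a value saw the bindings in force at the call site, not those at its point of definition. Scheme’s lexical closures were the fix. A sketch of the behaviour Scheme adopted, written in Python (which is lexically scoped; the names here are purely illustrative):

```python
def make_adder(n):
    # Lexical scoping: the returned function closes over *this* n,
    # the one in force where it was defined. Under Lisp 1.5's dynamic
    # scoping (the "upward funarg" problem), the call below would
    # instead have picked up whatever n existed at the call site.
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
n = 100  # a different n at the call site is irrelevant under lexical scope
print(add5(3))  # prints 8, not 103
```

Common Lisp did eventually adopt lexical scoping for ordinary variables, though, as Robinson suggests, traces of the older design (QUOTE versus FUNCTION, the separate function namespace) survive in it.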
After this encounter (which included his demo, in his house, of the program on what must have been one of the very first time-sharing terminals in a private dwelling house) I remained lamentably unacquainted with Lisp on a practical basis, primarily owing to its inaccessibility. There was no implementation available to me where I was working. That summer and the following summer I used the Algol 60 system that Stanford made readily available both on the main campus and up the hill at the Linear Accelerator Center where Bill Miller, my host for the summers of 1965 and 1966, had his original Stanford job. (Eventually he became Provost, one of Condoleezza’s predecessors, and then the President of SRI!).
During my sabbatical year in Edinburgh, 1967-1968, I used POP-2 a bit, but felt that it was not the equal of Lisp conceptually. There was also this silly MIT – Edinburgh rivalry and juvenile yah-boo-sucks name-calling, and Lisp seemed to be tainted with some of that emotional baggage.
I was in Edinburgh. Everything connected with Lisp (which was mainly PLANNER) struck me as yucky. It was only much later, far from Edinburgh, that I encountered the Abelson and Sussman book.
My first Lisp program cost me $800 to run for 45 minutes over a time-sharing telephone line to somewhere in Texas where there was a General Electric commercial computing center with a Lisp implementation. My entire computing budget was burned up and I had little to show for it. No more Lisp for years and years. Finally, in the 1980s I suppose, we all got terminals in our offices and, a bit later still, personal computers — mine was the earliest Mac, which looked like a small TV set — with, mirabile dictu, Lisp on board, and fast. Tiny memory though, so not much in the way of large-scale computation power. There was a period (which I look back upon with great distaste) of involvement with DARPA and the Rome Air Development Center, insisted on by the Syracuse administration for the sake of the lucrative overhead on the research contracts. This meant that for a while we actually had two Lisp Machines right there in one of our departmental rooms, with me in charge. The extraordinary fact is that nobody used them, including me and, as far as I know, Ken Bowen.
Thank you, this is more than I dared to hope for. But that still leaves this mysterious gap: between Robinson, Operations Research Engineer at the DuPont Corporation in 1958, and Robinson, author of “A Machine-Oriented Logic …” in 1965.
Another time perhaps; this is getting rather long.
The follow-on interview is in