Who Killed Prolog?

There are a thousand programming languages out there (literally, it seems, according to people who actually count such things). A classification of so many species is bound to be complex and subject to much debate. However messy and controversial things get lower down in the classification, let’s have just four branches at the top level. To the name of each class I attach what I consider to be its first exemplar, in chronological order:

– imperative (1956, Fortran)
– functional (1959, Lisp)
– object-oriented (1972, Smalltalk)
– logic (1974, Prolog)

I take as starting point the fact that three of the four branches are doing well in the sense of having a vigorous following. Compared to these three, Prolog has fallen far behind. This was not the case in the early 1980s, when Prolog had caught up with Lisp in capturing mindshare of what you could call non-IBM computing (to avoid the vexed term “AI”). Hence the title of this article. As culprit (or benefactor, depending on how you look at it) I identify the Japanese “Fifth-Generation Computer System” project, which existed from 1982 to 1992.

Even for those who were aware of the project at the time, it is now worth reviewing its fascinating history in the context of its time. This article is such a review as well as a theory of how Prolog was killed and how Lisp was saved from this fate.

A convenient starting date is 1982. The military and political stand-off between the US and the USSR had long occupied centre stage, but was now being replaced by the industrial and commercial rivalry between the US and Japan. Japan, devastated and dirt-poor in 1945, had, while nobody was looking, transformed itself into a gleaming model of everything enviable in a modern industrial society. Not only good at watches, cameras and consumer electronics, but also at bullet trains, industrial robots, cars, steel, and mainframe computers (admittedly, plug-compatible with IBM machines).

Though Japan was commercially daunting in the extreme, it was a consolation that it could be belittled as being imitative rather than innovative. Japan was seen to be competing unfairly by being a parasite on the research of others, especially the Americans. Another way in which Japan was seen to be competing unfairly was the way in which Japanese companies (especially the keiretsu) could get away with anti-competitive behaviour not allowed for their American counterparts. Stronger still, MITI (the Japanese Ministry of International Trade and Industry) was thought to be orchestrating the keiretsu. Unfair competition, because so un-American. The book to ponder, if not to read, is “MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975” by Chalmers Johnson, which was published in 1982. Neither ten years earlier, nor ten years later would such a book idea have been viable. In 1982 it hit the sweet spot.

With the stage set in this way, imagine the impact of the news that MITI had orchestrated a project to initiate the development of an entirely new kind of computer system. On the software side it embodied just about everything that had been a goal of AI research. On the hardware side, it was to be massively parallel. The marketers at IBM had taught the world to think about progress in computer hardware in terms of generations. They said that the use of vacuum tubes relegated a computer to First Generation, that the use of discrete transistors indicated Second Generation. So, when the IBM 360 came out it was not just a new type of computer, it was a new Generation, the Third! During the 1970s things got muddled, as there did not seem to be a clear criterion for Fourth Generation. So, in 1982, when MITI sponsored the formation of a research institute called ICOT, its mission was designated “Fifth Generation Computer Systems”.

The project was associated with two words that seemed calculated to make Westerners nervous: MITI and AI. MITI for the reason mentioned above. AI because it is one of those things that cannot be contemplated dispassionately: most of the time the concept is dismissed. In between these normal periods there are episodes in which AI is embraced with wildly unrealistic expectations. The year 1982 was the beginning of one of these. Japan was seen to be taking off from its current platform, already of daunting power, to shake off any remaining shackles, start innovating, and continue on to world domination.

In the corridors of power around the world there was much scurrying around. The question that reverberated in the minds of ministers in charge of such things as Industry, Technology, Trade, Commerce, Skilled Manpower, or what not, was: What is the Appropriate Response? The Thatcher government in the UK determined that the Appropriate Response was the Alvey Programme; in the European Community it was the ESPRIT programme.

In the US things could not be as simple as the government allocating a pot of money and then handing it out to researchers presenting themselves as worthy recipients of largesse. As a result the US response was more interesting. If the government could not respond, could not industry form a consortium to ensure that the US would stay ahead of the rest of the world in Fifth Generation Computer Systems? No, such formations were illegal under anti-trust law. But such was the sense of urgency that in 1984 Congress passed the “National Cooperative Research Act”.

Mere lobbying would probably not have been enough for such a complete and timely legislative outcome. I believe that the outcome was greatly helped by a book, a book called “The Fifth Generation” by Edward A. Feigenbaum and Pamela McCorduck, published in 1983. Though Feigenbaum was an academic, in fact a highly respected pioneer in expert systems, the book is superbly written, as eloquent as anything found in Time Magazine, which had just proclaimed as “Man of the Year” for 1982 none other than The Computer. After proclaiming how expert systems were going to give rise to a Knowledge Industry, causing Knowledge itself to become the new Wealth of Nations, Feigenbaum and McCorduck continue with:

To implement this vision the Japanese have both strategy and tactics. Their strategy is simple and wise: to avoid a head-on confrontation in the marketplace with the currently dominant American firms; instead to look out into the 1990s [remember, the date is 1983] to find an arena of great economic potential that is currently being overlooked by the more short-sighted and perhaps complacent American firms [hint, hint]; to move rapidly now to build major strength in the arena. The tactics are set forth in a major and impressive national plan of the Ministry of International Trade and Industry (MITI) called Fifth Generation Computer Systems.

The Japanese plan is bold and dramatically forward-looking. It is unlikely to be completely successful in the ten-year period. But to view it therefore as “a lot of smoke”, as some American industry leaders have done, is a serious mistake. Even partially realized concepts that are superbly engineered can have great economic value, pre-empt the market, and give the Japanese the dominant position they seek.

In the atmosphere that gave this book a warm reception, a judicious amount of lobbying was sufficient for the National Cooperative Research Act, which weakened anti-trust legislation sufficiently to make the response consortium legal. As leader, a suitable admiral was found, perhaps inspired by the Manhattan Project under the leadership of a general. The admiral was Bobby Ray Inman, formerly Director of the National Security Agency and Deputy Director of the Central Intelligence Agency. The consortium was named Microelectronics and Computer Technology Corporation (MCC) and established in Austin, Texas.

There was plenty of opposition to the FGCS project and the various responses. A common argument was that the FGCS project was not to be taken seriously. There were hints that the crafty Japanese had created the “lot of smoke” to trick their opponents into exactly this kind of boondoggle, thus further weakening the West. One was supposed to be able to tell this was a lot of smoke because of the FGCS’s emphasis on AI. And even if there were anything in AI, then surely FGCS would be concentrating on Lisp machines and not proposing to replace Lisp by … er … what’s this called … er … Prolog?

Yet the choice of Prolog is what came straight from the horse’s mouth, in this case in the form of the Proceedings of the International Conference on Fifth Generation Computer Systems, Tokyo, October 19-21, 1981. The volume, edited by T. Moto-Oka, still lingers in many a library. The conference officially kicked off the project. Some of the papers are by steering committee types and describe how breakthroughs in AI, software, and hardware were going to lead to computer systems transforming society to new levels of harmony and prosperity. But there are also papers by computer scientists, notably by K. Fuchi (later to become director of ICOT) and by K. Furukawa (later to become a group leader in ICOT). While the steering committee types waffle about “LISP/Prolog” as filler for the language slot and “functional/logic machine” for a hardware box, neither Fuchi nor Furukawa makes any bones about it: Prolog is the language and logic programming the methodology. Parallelism was seen as the hardware imperative, and Prolog (with inference as basic computing step) seemed to have much potential in this direction. Hence, FGCSs were to be parallel inference machines.

Fast forward to 1992. The world looks very different. In 1990 the Nikkei Index, which had risen strongly for an unprecedentedly long period, from the beginning of the FGCS project, was about to breach 40,000. But instead of continuing its rise, it started a decline and was down to half the peak value by 1992. Most of the Toyotas and Hondas driving around in the US were now made in the US. If MITI was mentioned at all, it was in studies revealing that MITI had never sponsored a successful project; that industry, far from being helped by MITI, had been hindered by its meddling. The book to read in 1992 was Francis Fukuyama’s “The End of History and the Last Man”, which celebrated the triumph of the American Way worldwide. The Lisp machine companies were either totally dead, or surviving only as something other than a Lisp machine company. The effect of Moore’s Law was at full strength for the commodity processors of Intel, with the result that commodity PCs ran Lisp programs faster than Lisp machines. The rapid increase in speed of the commodity processor helped to kill interest in parallelism, which had been found harder to exploit anyway. The parallel Prolog version of a Lisp machine, so exciting a prospect in 1982, had become a relic.

Meanwhile, in the Tokyo Prince Hotel, the conference “Fifth Generation Computer Systems 1992” was held to mark the end of the project. Some of the papers are in the March 1993 issue of the Communications of the ACM. In the course of the project its participants could not help being exposed to people who remembered only that it was going to lead to a new generation of computer systems transforming society to new levels of harmony and prosperity (because that was something you could understand) and who forgot the bit about “parallel inference machines”. So the first paper, by Fuchi, is an exercise in veiled apology, with the refrain “but we have a nice parallel inference machine”. The second paper, by Robert Kowalski, the discoverer of logic programming, is less veiled and has a section headed in bold type “What Went Wrong?”

So there you are. “A Lot of Smoke”, after all. The FGCS project had come down in flames, taking logic programming and Prolog with it. I’m not saying that things should be seen this way, only that this is how they were seen. The fatal connection between logic programming and FGCS was made simply because Fuchi and Furukawa fell in love with Prolog. The lesson is that outside the waffling steering committees, people have to choose between technologies, and they choose what they fall in love with. In my next article I plan to review the history of how Prolog came to appear on the radar of the Japanese when the sky was cluttered with Lisp echoes; echoes caused by people who fell in love with Lisp. I will describe as best I can what causes people to fall in love with Lisp and how the same thing can happen with Prolog.

Postscript February 11, 2014

Richard Grigonis made the following comment:

The funny thing about this is that, in 1982, as I recall, Fifth-Generation Computer Systems project director, Kazuhiro Fuchi, came to Columbia University in New York, along with Edward Feigenbaum, to give a speech and answer questions from students. Feigenbaum was railing about how the Japanese were going to take over AI and the world, and that we had better fund AI researchers in America or we would all be left behind in the dust. It was as if he was using Fuchi as a prop to generate more excitement in America for AI.

In any case, I was working at the Children’s Television Workshop in the ASCAP building at the time, and, accompanied by my friend Mike, went to Columbia that day to hear these guys.

When the question-and-answer period came while Feigenbaum was still at the podium, I raised my hand and said, “I hate to be the fly in the ointment here, but this whole thing is based on Prolog? A sort of embodiment of the first-order predicate calculus? Even with trimming the search space in various ways, paramodulation, etc., if you use logic as a knowledge representation scheme it always leads to a combinatorial explosion, doesn’t it? Even with parallel processing, if you encode all of the facts of the real world into this thing, you will end up with ‘all day deductions,’ won’t you?”

Feigenbaum looked around uncomfortably, swatted the air like he was indeed being pestered by a fly, but then, amazingly (and much to his credit) said — and very quickly at that — “Well, yes, he’s right. BUT Americans need more support because the Japanese are advancing the field!” Feigenbaum quickly moved the session forward.

It was the strangest moment. My friend Mike, who had tagged along to watch know-it-all me get verbally obliterated by this erudite group, was stupefied, incredulously uttering, with a tone of disbelief in his voice, “Oh my God Richard, you are right!”

Later we both walked up to the podium and I had a further chat with Feigenbaum. He was a bit miffed at me, but when he discovered we both had just purchased Atari 800 computers, he warmed up a bit, and began asking me questions about its graphics, as his wife was working on a multi-colored quilt and wanted to use the Atari to help design it. My friend was more into Atari graphics at that point and answered his questions.

So my opinion was completely different from everyone else’s. The top American researchers knew the FGCS was completely flawed, but we were humoring them and making a big deal of it so we could get better funding for other, LISP-based projects in the U.S.

Richard: your comment is especially interesting because of its concrete illustration of the shady politics conducted by many AI researchers. When the FGCS project started, the panic induced by Japan’s export successes was at its height. FGCS was special in that for the first time Japan was not in copycat mode, but struck out in an original direction. In the US and the UK there were strident calls for an “appropriate response”. AI researchers were only too happy to supply suggestions. The result in the US was MCC, a new institute in Austin, Texas. In the UK the Thatcher government was especially sensitive to perceived lack of industrial virility. This resulted in the lavishly funded Alvey Programme.

The incident you report is a nice example of the duplicity of many of the researchers funded in the US and UK. In-house they denigrated Prolog, while in public they played up the “Japanese Threat”. I agree that FGCS was destined to fail. I don’t agree that this was the fault of Prolog. My intended message in “Who Killed Prolog?” is that Prolog was a promising and rapidly maturing alternative to Lisp, and that it was killed merely by being seen as associated with the failed FGCS project, not for any technical reason.

A quick way to get an idea of the promise of Prolog is to read “Prolog: The language and its implementation compared with Lisp” by D.H.D. Warren, ACM SIGPLAN Notices, 1977, pages 109-115. Warren shows that four years of Prolog implementation development had reached an efficiency, in terms of execution speed and memory use, that equalled what a quarter of a century of Lisp implementation development had achieved. This is remarkable for a language that in some respects is more high-level than Lisp.

The Japanese were smarter than researchers like Feigenbaum in that they took the trouble to discover that Prolog was a different animal from resolution-based automatic theorem provers, where the search space was pruned by the paramodulation technique you mention and by several others. Prolog is also based on resolution logic, but its inference is restricted to mimicking the function definition and function call mechanism that has been the mainstay of conventional programming since Fortran. As Lisp also relies on this mechanism, it is not surprising that since 1977 their performance has been similar. In applications where Lisp need not search, Prolog does not search either.
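The restricted inference described above can be made concrete with a toy interpreter. The sketch below is a minimal, illustrative Python rendering of SLD resolution, Prolog’s execution mechanism; all the names and the data encoding are my own invention, and real Prolog features (the occurs check, cut, negation, indexing) are omitted. The point is visible in the shape of `solve`: a goal is matched against clause heads and replaced by the clause body, exactly as a function call is replaced by the function’s definition.

```python
# Minimal sketch of Prolog-style SLD resolution (hypothetical encoding;
# occurs check and most real Prolog features omitted).
# Terms: atoms are lowercase strings, variables are capitalized strings,
# compound terms are tuples ("functor", arg1, arg2, ...).

def is_var(t):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to a representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution unifying a and b, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def rename(term, n):
    """Give a clause fresh variable names before each use."""
    if is_var(term):
        return f"{term}_{n}"
    if isinstance(term, tuple):
        return tuple(rename(t, n) for t in term)
    if isinstance(term, list):
        return [rename(t, n) for t in term]
    return term

def solve(goals, rules, subst, depth=0):
    """Yield every substitution proving all goals, depth-first, trying
    clauses in program order: the procedure-call reading of resolution."""
    if not goals:
        yield subst
        return
    goal, rest = goals[0], goals[1:]
    for i, clause in enumerate(rules):
        head, body = rename(clause, depth * 1000 + i)
        s = unify(goal, head, subst)
        if s is not None:
            # Replace the goal by the clause body, like inlining a call.
            yield from solve(list(body) + rest, rules, s, depth + 1)

# A tiny program: parent/2 facts and ancestor/2 defined by two clauses.
rules = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    (("ancestor", "X", "Y"), [("parent", "X", "Z"), ("ancestor", "Z", "Y")]),
]

results = [walk("Who", s) for s in solve([("ancestor", "tom", "Who")], rules, {})]
# results == ["bob", "ann"]
```

When only one clause head can match a goal, the `for` loop in `solve` commits immediately and the execution is as deterministic as a chain of function calls; search arises only when several heads match, which is the sense in which Prolog need not search where Lisp need not.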

I don’t want to suggest that Feigenbaum should have switched to Prolog, although I may have told him so during the rare and brief meetings we had in the 1980s. My present opinion is that the difference in the strengths of the two languages does not make one of them overall superior to the other. Other things being equal I might now recommend Lisp because its development has steadily continued since the time when interest in Prolog plummeted with the demise of FGCS.

I believe that FGCS was a plausible idea, similar to the idea behind the Lisp machines of the time. FGCS failed because it never came up with a convincing demonstration. Such a demonstration should have come in the form of at least a prototype comparable to a Lisp machine. It could have been clear from the start that a project headed by government bureaucrats and staffed with technical people on loan from big companies had little chance of producing a convincing prototype.

A Prolog version of a Lisp machine was at least as promising as the Lisp machine itself. I believe that the failure of the Lisp machines was not predictable. Around 1990 everybody was caught off-guard by the rapid drop in price and rise in performance of commodity PCs. There were a few exceptions: Larry Ellison and Linus Torvalds come to mind.

“The Japanese Did It” is not the correct answer to “Who Killed Prolog?”.  Prolog was killed by the failure in the early 1980s of mainstream AI researchers to find out about Prolog, by their willingness to play along with the “appropriate responses” to FGCS, and by their use of FGCS’s failure as an excuse for continued neglect of a promising alternative to Lisp.

29 Responses to “Who Killed Prolog?”

  1. Prolog’s Death « Synthèse Says:

    [...] a terrific and authoritative account of one episode in the history of Prolog under the title “Who Killed Prolog” (and, tantalizingly, promises another episode soon featuring my other super-heroic [...]

  2. Andre Vellino Says:

    Terrific article, Maarten. Thank you.

    I felt compelled to write a footnote to it:


  3. Valdis Krebs Says:

    Prolog is dead???

InFlow, software for social network analysis, is written completely in Prolog!


    We use LPA Prolog… http://www.lpa.co.uk/

  4. Roshan Says:

    I wasn’t expecting to read a blog post so well researched and informative. I know of Prolog but I didn’t know that particular bit of history, and I found the idea very interesting that something might die off because of an association with a failure.

  5. Clive Spenser Says:

    As you say an interesting, and provocative, theory.
    For me – FGCS went wrong with going parallel.
    Yes, Prolog is a niche language with a small but dedicated bunch of practitioners.
And yes, the academics have grown bored with Prolog, and are hell-bent on inventing newer, more elegant and powerful languages and paradigms.
    But yet, Prolog and LP in general, keep giving birth to lots of new ideas and languages and systems.
Oh yeah, the rest of the world, they’ve discovered Java and C#, as most language surveys show.

    Clive Spenser, LPA
    ps: whatever happened to SmallTalk – I still remember when OOPs was going to take over the world ….

  6. Michael Mol Says:

    OOP pretty much has taken over the world. Most widely-used languages have inherent support for it. Anyway, logic programming isn’t dead. At least a few languages’ communities are actively churning through the problem examples over on Rosetta Code.
    See a list of relevant languages over there.

  7. Kazimir Majorinc Says:

Interesting and well-written article, but I wouldn’t be that pessimistic in the conclusion; although not nearly in the league of C++ or Java, Prolog still attracts a lot of interest, seemingly more than even the most popular Lisp dialects and Smalltalk, and comparable with Fortran.
Don’t give up! Time is on your side (just not as heavily as AI advocates believed in the 1980s).

  8. AJ Says:

    I think OO can also be under Imperative. I think OO is a different dimension in this chart and does not come in the same list.

  9. adamo Says:

    I submitted the article at Hacker News and there are 40 comments right now.

  10. Tracy Harms Says:

    You might agree that array languages form a fifth branch at the same level as your other four, and that APL (1964) is the point of origin. Compared with the other branches, it’s fallen so far out of visibility that you overlooked it completely.

  11. Old Crow Says:

    Great article. I wish Prolog never died.

  12. Paulo Moura Says:

OO provides code encapsulation and code reuse mechanisms that are useful for imperative, functional, and logic languages (think programming in the large). Thus, OO can be seen not as a subset of any of the other three paradigms but as intersecting them all. There are actual imperative, functional, and logic programming languages that nicely integrate OO concepts. Associating OOP with imperative programming is understandable from an historical perspective but is also a rather limiting view.

  13. David Thornley Says:

    A few nitpicks on your history:

    The first object-oriented language was Simula in 1967. There were other sorts of languages out there, but for practical purposes your breakdown works.

There wasn’t just a series of computer generations, but also language generations. Machine code was first generation, assembler was second, and the “high-level” languages we love/hate are third. At that time, there were a lot of claims of fourth-generation languages, which inevitably wound up being easy-to-use domain-specific languages; from the perspective of 2010, useful in their way, but not really a step forward in the same sense as Fortran was from assembler. The Japanese involved were hailing Prolog as a fifth-generation language.

    I read a few articles questioning the Japanese claim that Prolog was fifth-generation. The tendency from first to fourth (as then defined) was ease of writing and understanding, and Prolog isn’t a very inviting-looking language. The meaning of a Prolog program can be quite obscure. That, of course, isn’t what caused its fall, as anybody who’s suddenly come across a Perl program will realize.

    The thing I realized was that Prolog was not quite what it was hyped to be. It wasn’t a way of taking general logic and putting it into a language, and every time I tried I screwed up. There are two possible truth values in Prolog, “proven by program execution” and “unproven by program execution”. There was nothing corresponding to “disproven”. The inference engine was limited in what it could accomplish, and adding too many true and non-contradictory premises to a program could easily result in an infinite loop. Prolog was a powerful language, but it was just another model of computation that didn’t directly match how people thought, either mathematically or about the world, and I personally didn’t like it.

  14. Cyborg Says:

    For purposes of artificial intelligence (AI), Prolog has been overtaken and surpassed by the Forth programming language.

  15. gary knott Says:

Dear Maarten, go take a look at “Interpreting Lisp”
at http://www.civilized.com.

  16. Henri de Feraudy Says:

    I worked for CSC and they sell software for insurance companies that is almost entirely written in a Object Oriented variant of Prolog.
    It dominates the market in lots of countries.
They don’t say it’s Prolog, though; they are afraid it would frighten the customers.

  17. vlad Says:

Are you sure they were looking 10 years ahead but not 40?
    Lisp descendants just catching up with industry. A more advanced Prolog family will dominate the next decade.

  18. Franco Says:

Prolog is essentially an inference engine capable of examining predicates expressed as clauses; the clauses contain data but also methods, so Prolog would be ideal for use in database queries. Strangely, but perhaps not so strangely, I note that Prolog ceased to be used with the advent of RDBMS databases, which then imposed themselves forcefully along with Windows event-driven applications. It’s true that Prolog is still used in some software houses (Henri de Feraudy … insurance company), incidentally, to develop database programs.

  19. Robert A. Amsler Says:

    Missing from the list of significant programming language branches is

    - string (1962, Snobol)

  20. Information Overload 2010-09-13 « citizen428.blog() Says:

    [...] “Who Killed Prolog” Interesting read for programming language nerds, also contains interesting info on how the Japanese economy was feared by the US in the 80ies. [...]

  21. Jerome Leboeuf Says:

Sorry, so many things wrong!!!
– First, Object Oriented is not a paradigm that excludes the other 3 categories! In fact, functional or logic programming may be object-oriented (data encapsulation).
– Second, high-level imperative languages (Fortran, C++, Java …) are similar and successful because they are conceived according to processors’ architecture. Hence they are efficient; nevertheless they present many lacks and limits and require a strong programming effort.
– Third, Prolog is not supposed to compete with imperative languages; it is well adapted to a huge class of problems.
– Though not adapted to the hardware, it offers high efficiency, with many developments like fuzzy-oriented Prolog.
– Lisp is only half functional (the other half is imperative); a pure functional language is FP, proposed by Backus with the aim of allowing program validation.
– Until 10 years ago, parallelism did not matter. But now, to avoid overheating, processors use multiple cores, which requires parallelism. It is very complicated to distribute the load of programs written in imperative languages. Parallelism is intrinsic to functional languages.
Please do not try to justify computing tendencies with political choices.
Japanese research in AI was productive in the 80s, while US investment in expert systems in the 70s appears a failure.

  22. Paul Wormer Says:

    Interesting to be reminded of the fear for Japanese technology in the early eighties; had forgotten about it. Your history shows once again how difficult it is to predict the future. The Japanese thought of extrapolating Cray-like vector computers. No internet or cloud computing in sight. No tens of thousands of GPUs in parallel, as we find in modern high-performance computing systems.

  23. Rupert Smith Says:

    Prolog is interesting enough that I am writing a compiler framework for it currently. My intention is to create a modular, well documented and not-impossible-to-follow framework, that can be extended to experiment with new ideas in Logic Programming, as logic finds its way forwards as a programming concept.
    I have used Prolog as a programmer in the Java/OO/enterprise/database world in which many programmers make their living today. Mostly, I use it to build models of systems described in logic, and then generate test cases and expected results using queries on the model. The search-driven nature of execution in Prolog is great for doing this with minimal amounts of code and has enabled me to write very thorough tests at a fraction of the effort that doing so in Java would have.
    I think Prolog needs better libraries that are oriented towards the everyday things that programmers do. Java was successful in part because it came with a large and standardized set of libraries which enabled the programmer to do most of the useful things that they wanted to do; manipulate files, communicate over networks, and make GUIs. There are libraries like these for Prolog, but I don’t think they are part of the ISO standard(?), which weakens its appeal.
I find that Prolog is a great rapid prototyping language; often if I have an idea for something I will do a quick sketch of it in Prolog. It’s fair to say that it is a niche language, but I would recommend that any programmer try it out.

  24. richard mullins Says:

    Some of us would have learnt Prolog, perhaps in secret, while having paid work. I used Prolog secretly at work between 1988 and 1994. I used it for assignments in courses in 2002 and 2003.

    If Prolog dies, it will be rediscovered or resurrected one day. One day (2050?) natural language will be available as a computer language, and computers will automatically be able to derive systems such as Prolog, from discussion in English or other NL.

    There is nothing to stop someone writing a Prolog interpreter or compiler in an available language such as Fortran or Cobol. But such an exercise is the work of a lifetime, and would not be understood by a modern employer, who expects results immediately.

  25. benjamir Says:

    Use of Prolog for developing a new programming language (1992)
    by J. L. Armstrong , S.R. Virding , M. C. Williams


    Lisp descendants just catching up with industry. A more advanced Prolog family will dominate the next decade.
    With Erlang a Prolog descendant is already up with the industry for ages.

  26. Pete Says:

I’ve lived right through those times. This article is spot on, IMHO. Also it is rather saddening, for Prolog was/is a great language, and the prospect, at the time, of implementing something like real AI was painful to see fading away.

Like R. Mullins wrote above, sooner or later Prolog will be “rediscovered”, perhaps when there is adequately powerful hardware available.

  27. Bob Says:

Great article. Just recently installed Prolog and Lisp on my Windows system and I am enjoying every minute of it. Programming is part of my job but it is really a hobby. I have been swept up by many of the trendy programming languages out there, but I always felt dissatisfied with the limitations of each. A process of experimenting with these other languages and becoming frustrated led me to Prolog and Lisp. Prolog and Lisp have freed up my mind to possibilities I had to suppress with other languages. Programming is once again a fun hobby.

  28. Graham Telfer Says:

    Prolog has been reborn in the form of constraint logic programming in languages like ECLiPSe.
