Archive for the ‘Programming’ Category

Python slams into its exponential wall

April 6, 2014

My first Python sighting was around 1999, in the part of the building where the hackers hung out. Somebody had a poster on the door saying something like “If you had used Python, you would have been done by now.”

Next stop: 2005. Since 1986 I had been a fan of “Structure and Interpretation of Computer Programs” by Harold Abelson and Gerald Sussman. At the University of Waterloo it was only in a fourth-year topics course that this book could make a brief appearance. I was envious of MIT, where masses of first-year students took EECS 6.001, in which Abelson and Sussman was used from day one. That was class. In 2005 I heard the sad news that, after two glorious decades, EECS 6.001 had been closed down, replaced by a course whose text was … a book written for high-school students. Maybe EECS wouldn’t have picked that book if its language had been BASIC. Perhaps it had to do with the book’s choice of Python. It was from this news item that I learned that Python is a programming language.

(more…)

Boltzmann’s Brood

March 8, 2014

When Ludwig Boltzmann (physicist, 1844–1906) was criticized for his ugly brute-force calculations, his defence was: “Elegance should be the concern of shoemakers and tailors”. The criticism was of course not about elegance in the sense of fashion. It was about the fact that an elegant structure would have had the power to convince the reader of whatever the calculations were intended to demonstrate.

I think this episode has something to do with the fact that much of what we do on computers depends on software that, among other weaknesses, is in need of frequent and urgent “security updates”. Those who criticize this state of affairs are dismissed as unrealistic. To be realistic is to understand that any useful piece of software consists of millions of lines of code written by many programmers, mostly no longer traceable and certainly not accountable. No one has a clear view of the system as a whole; there is no description that enables anyone to gain such a view. The realists of today, who accept this state of affairs as inevitable, are of Boltzmann’s brood.

(more…)

Flowcharts, the once and future programming language

April 8, 2012

Job-control language, punched cards, and flowcharts are such relics of the past that those who remember them are starting to die off. There are also research goals of that era that are long forgotten, not because they have been reached or found not useful, but because they are, well, forgotten. In this essay I consider the ancient research goal of developing code and proof of correctness in parallel. I find that flowcharts are a good starting point, provided one goes back much further than job-control language.
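
To make that goal concrete before diving in, here is an illustration of my own, not taken from the essay: in Floyd-style flowchart proofs every arc carries an assertion, and developing code and proof in parallel means writing those assertions along with the statements they connect. A minimal Python sketch, with assert statements standing in for the arc labels:

```python
def divide(a, b):
    """Quotient and remainder of a by b, by repeated subtraction."""
    assert a >= 0 and b > 0                # entry arc: precondition
    q, r = 0, a
    while r >= b:
        assert a == q * b + r and r >= b   # arc entering the loop body
        q, r = q + 1, r - b
        assert a == q * b + r              # arc leaving the body: invariant restored
    assert a == q * b + r and 0 <= r < b   # exit arc: postcondition
    return q, r

print(divide(17, 5))   # (3, 2)
```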

(more…)

McCarthy’s recipe for a programming language

October 31, 2011

On October 23, 2011, John McCarthy died. He is remembered as the inventor of Lisp and for his contributions to the theory of computation and to artificial intelligence. In this essay I argue that he did more than invent Lisp, more even than originate the concept of high-level programming: he showed, to those willing to see it, a recipe for designing high-level languages. I support my argument by turning the clock back to 1950 and imagining how McCarthy’s recipe could have been followed at that point in time. By taking this unusual point of view I hope to bring out more clearly McCarthy’s contribution to programming languages.

(more…)
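
The recipe itself is behind the link, but the best-known artifact of McCarthy’s way of working gives its flavour: he defined Lisp by writing down a universal function eval that interprets the language in which it is written. Below is a toy reconstruction of that idea in Python; it is mine, and nothing like McCarthy’s 1960 notation:

```python
def seval(expr, env):
    """Interpret a tiny Lisp-like language: symbols, quote, if,
    lambda and application.  A toy, not McCarthy's definition."""
    if isinstance(expr, str):                 # symbol: look up its value
        return env[expr]
    if not isinstance(expr, list):            # number: self-evaluating
        return expr
    op, *args = expr
    if op == 'quote':                         # ['quote', x] -> x unevaluated
        return args[0]
    if op == 'if':
        cond, then, alt = args
        return seval(then if seval(cond, env) else alt, env)
    if op == 'lambda':                        # ['lambda', [params], body]
        params, body = args
        return lambda *vals: seval(body, {**env, **dict(zip(params, vals))})
    fn = seval(op, env)                       # application
    return fn(*(seval(a, env) for a in args))

env = {'add': lambda a, b: a + b, 'lt': lambda a, b: a < b}
print(seval(['if', ['lt', 1, 2], ['add', 3, 4], 0], env))      # 7
print(seval([['lambda', ['x'], ['add', 'x', 'x']], 21], env))  # 42
```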

The MIT style in Artificial Intelligence 1958 – 1985

June 7, 2011

In a previous article I illustrated the tension in Artificial Intelligence (AI) between two mentalities, generally referred to as “scruffy” and “neat”. I started by accepting the characterization by Russell and Norvig [1].

“… the neats — those who think that AI theories should be grounded in mathematical rigor — versus the scruffies — those who would rather try out lots of ideas, write some programs, and then assess what seems to be working.”

Critics of my article have opened up so many dimensions of scruffy versus neat that it is hard to know where to start. From my disorganized thoughts one recurrent theme emerged: the distinctive MIT style during the formative years of AI.

(more…)

Another Scoop by Dijkstra?

January 15, 2011

Edsger W. Dijkstra (1930–2002) is known and remembered for many things in programming and in computing. But it seems that the problem of efficiently generating the sequence of prime numbers (2, 3, 5, 7, …) is not among them. I recently re-read his 1972 “Notes on Structured Programming” [5] and noticed for the first time, tucked away in a few lines, a remarkable insight that resolves the stand-off between the Sieve of Eratosthenes (efficient in terms of time, but not memory) and the method of Trial Division (efficient in terms of memory, but not time).
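
For the impatient: the insight usually read into those few lines is to keep, for each prime found so far (but only for primes not exceeding the square root of the current candidate), its smallest multiple not yet passed. A candidate is prime exactly when it matches no stored multiple. The sketch below is my reconstruction under that reading, not Dijkstra’s own code; a heap of pending multiples stands in for his array:

```python
import heapq

def first_primes(n):
    """First n primes.  Memory holds one entry per prime up to
    sqrt(p_n), far less than a full sieve over [2, p_n], while each
    composite is struck only once, as in the Sieve of Eratosthenes."""
    if n < 1:
        return []
    primes = [2, 3]
    pool = [(9, 6)]    # (next odd multiple of 3, step 2*3)
    track = 2          # primes[track] = next prime whose square we await
    candidate = 5
    while len(primes) < n:
        composite = False
        if track < len(primes) and candidate == primes[track] ** 2:
            p = primes[track]                 # candidate is p*p: composite,
            heapq.heappush(pool, (candidate + 2 * p, 2 * p))
            track += 1                        # and p joins the tracked pool
            composite = True
        while pool[0][0] == candidate:        # struck by a stored multiple
            m, step = pool[0]
            heapq.heapreplace(pool, (m + step, step))
            composite = True
        if not composite:
            primes.append(candidate)
        candidate += 2
    return primes[:n]

print(first_primes(10))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```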

(more…)

Unprotected Numerical Computation and Its Safe Alternatives

December 11, 2010

Conventional numerical computation is marvelously cheap, and it seems to work most of the time. The computing profession has long neglected to develop methods appropriate for those situations where cheapness is not an overriding concern and where “seems to work most of the time” is not good enough.

Many textbooks in numerical analysis start by showing that in certain situations the small errors caused by the use of floating-point arithmetic can rapidly grow and render the result of the computation meaningless. Such textbooks describe conditions under which mathematical algorithms can safely be transplanted to floating-point hardware: that the problem be well-conditioned and that the algorithm be stable. In the early days computing centres employed numerical analysts to make sure that none of the scarce processor cycles got wasted on meaningless results caused by ill-conditioned problems or unstable algorithms.
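The opening example in such a textbook might run as follows (my rendering, not one taken from the essay). The integrals I_n = ∫ x^n e^(x−1) dx over [0, 1] all lie strictly between 0 and 1 and satisfy I_n = 1 − n·I_(n−1). Run forward, the recurrence multiplies the rounding error in I_0 by n!; run backward from a crude guess, it divides the error by n at every step:

```python
import math

def forward(n):
    i = 1.0 - math.exp(-1.0)     # I_0, already rounded in the last bit
    for k in range(1, n + 1):
        i = 1.0 - k * i          # each step multiplies the error by k
    return i

def backward(n, start=60):
    i = 0.0                      # crude guess for I_start; |error| < 1
    for k in range(start, n, -1):
        i = (1.0 - i) / k        # each step divides the error by k
    return i

print(forward(25))    # wildly wrong: magnitude around 10**9
print(backward(25))   # 0.0371..., correct to machine precision
```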

Much has changed since then. Thousands of scientists and engineers have on their desks gigaflops of computing power, and there is not a numerical analyst in sight. Although an ever larger part of problem-solving is entrusted to the computer, the same fragile methodology is followed. And still the only justification is that conventional numerical computation seems to work most of the time.

(more…)

A Standard of Excellence

December 8, 2010

When an industry-wide standard gets established, it is often lamented as a regression to the mediocre, if not the outright bad. VHS versus Sony’s Betamax comes to mind. This time, however, I come to praise a standard, not to bury one. I am going to talk about a success from that era: IEEE standard 754 for binary floating-point arithmetic.
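
A taste of what the standard bought us, visible from any conforming language (the demonstration is mine, not from the essay):

```python
import math
import sys

# The binary64 format is pinned down bit for bit, so these constants
# are identical on every conforming machine.
print(sys.float_info.epsilon)          # 2.220446049250313e-16
print(sys.float_info.max)              # 1.7976931348623157e+308

# Exceptional cases yield defined values, not undefined behaviour.
inf, nan = float('inf'), float('nan')
print(inf + 1.0 == inf)                # True: overflow saturates to infinity
print(nan == nan)                      # False: NaN marks an invalid result
print(math.copysign(1.0, -0.0))        # -1.0: zero keeps its sign

# Gradual underflow: subnormal numbers fill the gap next to zero.
print(sys.float_info.min / 2.0 > 0.0)  # True: a subnormal, not zero
```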

(more…)

From Formulas to Algorithms

November 29, 2010

Before computers, computation went like this: a scientist invented a formula; the user plugged in suitable values and evaluated the resulting variable-free expression. The user typically did not have the expertise to judge the correctness of the formula. Though the compilers of formula books tried to cover all important cases, it was common that a user failed to find one for the situation at hand. In such compilations one can find formulas for Annuity Whose Present Value Is One, Return on Investment, Moment of Inertia (for circular sheet, hollow circular cylinder, and dozens of other shapes), Curved Surface of Cone, Net Present Value, and many others (but maybe not the one you need right now).

Initially, computers reflected the age of the formula: FORTRAN comes from FORmula TRANslator. But FORTRAN was found more useful for writing algorithms in which formulas play a minor role. Used in the right way, an algorithm can be self-explanatory in a way that the old-style formulas were not. The formulaic paradigm was authoritarian in the sense that the user did not get an explanation of the formula found in the book. The algorithmic paradigm, by contrast, is less authoritarian: access to the source code implies access to an explanation, not of a formula, but of the number you are getting. Yet relics from the age of formulas linger. In this article I discuss two of these relics and use them to illustrate the power of the algorithmic approach.
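
As a sketch of the contrast (my example, not necessarily one of the two relics the article takes up), here is “Annuity Whose Present Value Is One” done both ways. The closed-form formula hands you a number; the loop hands you the same number together with its explanation:

```python
def annuity_pv_formula(rate, n):
    # The book formula: take it on faith.
    return (1 - (1 + rate) ** -n) / rate

def annuity_pv_algorithm(rate, n):
    # Discount each of the n unit payments back to today, one by one;
    # the loop itself explains what "present value of an annuity" means.
    pv = 0.0
    for year in range(1, n + 1):
        pv += 1.0 / (1 + rate) ** year
    return pv

print(annuity_pv_formula(0.05, 10))    # 7.721734929184818
print(annuity_pv_algorithm(0.05, 10))  # the same value, up to rounding
```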

(more…)

An Interview with Paul McJones

October 27, 2010

In my quest for pioneer programmers I was fortunate to get in touch with one who has a possibly unrivalled record of participation in projects important to the development of computing over the decades. I am speaking of Paul McJones, who was part of the team assembled by Butler Lampson to develop Cal TSS, one of the first operating-system designs to tackle the problems of protected and even mutually suspicious subsystems. At IBM San Jose Research he worked with John Backus on his RED languages and with the System R group on the first relational database system. At Xerox he participated in the development of the Star office-automation system (based on PARC’s Alto personal computer). Via Tandem Computers he went on to do research at DEC SRC in Palo Alto. After various Silicon Valley start-ups he went to work at Adobe, where he collaborated with Alexander Stepanov. A spin-off of that research is Elements of Programming (by A. Stepanov and P. McJones, Addison-Wesley, 2009), which presents a novel methodology that spans the gap between abstract mathematics and efficient algorithms.

In the following interview we try to cover at least a little bit of this wide and varied terrain.

(more…)