No Silver Bullet? Get a Humble Programmer.

Consider this:

  1. Software reliability is rapidly becoming more
    important.
  2. Software development technology is inadequate and
    is only slowly improving.

The question arises:

Is disaster inevitable?

This has been debated for some time in the computer
science community. Recently it has begun to attract
attention from the general public.
For example, a newspaper ran a story entitled “A Lemon Law
for Software?”, in which the author observed:

If Microsoft were liable for product defects the same
way automobile manufacturers are, then it would have
been driven out of business a long time ago.

As we all know, this has not happened. And this is only
because the law exempts the software industry from
product liability.

The question I’m now asking is: Should it be exempted in
this fashion?

Any suggestion that it is time for the software industry
to grow up and accept responsibility for its products is
greeted by an indignant chorus asserting that software is
special and that not recognizing it as such will stifle
innovation.

Clearly Microsoft stands to lose most in a change from
the status quo. But the rest of the industry, including
those who love to hate Microsoft, does not dare to
contemplate a
world where creators of software are responsible,
seriously responsible, for their products.

While industry agrees on this topic, academia, insofar
as it has an opinion at all, is
divided. Some argue that software is special in such a
way that the current unreliability of software is
inevitable. They argue that there is no prospect of
drastic improvement. I will call this camp “The Real
World”. Their most eloquent, and most widely quoted,
spokesman is Frederick P. Brooks, Jr., in his paper “No
Silver Bullet”.

Others argue that techniques for producing reliable
software are known, that these are practical, and that
some of them have been known for decades. Whenever one
brings these to the attention of industry, one is
dismissed as being sadly out of touch with reality. Hence
I will denote the proponents of the known techniques for
producing reliable software as “The Ivory Tower”.

In this introduction I have sketched the problem,
which is:

  1. Software reliability is rapidly becoming more
    important.
  2. Software development technology is inadequate and
    is only slowly improving.

I have also sketched the main points of view, and called
them “The Real World” and “The Ivory Tower”. In this
lecture I will elaborate further on these two opposing
positions, starting with the real-world people, and
continuing with those who inhabit the Ivory Tower.
Although I will avoid technical matters, a few such
concepts need a brief review, which I will provide next.

Intermezzo on Program Reliability

Let me remind you that no law concerning the physical
world is known with certainty; not even Newton’s laws
are exempt. An experiment can disprove them, but no
amount of experimentation can prove them. Philosophers of
science have been familiar with this ever since Whewell
in the nineteenth century.

With the correctness of programs we are in a similar
position: a test can disprove it, but no amount of
testing can prove it.
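
A minimal illustration may help; the function and its
test suite below are my own hypothetical sketch, not
anything from the lecture. The test suite passes, and
each passing test merely fails to disprove correctness.
A single further test disproves it outright.

    def is_leap_year(year: int) -> bool:
        """Intended Gregorian rule: leap if divisible by 4,
        except centuries, which are leap only if divisible by 400."""
        # Bug: the "divisible by 400" exception is missing.
        return year % 4 == 0 and year % 100 != 0

    # These tests all pass, yet they prove nothing:
    assert is_leap_year(2024) is True
    assert is_leap_year(2023) is False
    assert is_leap_year(1900) is False

    # One more test disproves correctness: 2000 is a leap
    # year, but the function returns False, so this fails.
    assert is_leap_year(2000) is True

No enlargement of the passing part of the suite would
have established correctness; only a proof could.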

But programs differ from laws of nature in an interesting
way: it is sometimes possible to model the correctness of
the program by a mathematical theorem, which is
susceptible to proof. Even in this favourable case,
certainty eludes us: the theorem is only a model of the
program acting on the physical world; moreover,
mathematical proofs may have errors in them. It is tempting
to have the proof done by computer, but then how do you
prove that the theorem-proving program is correct?
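
To make “modelling the correctness of a program by a
theorem” concrete, here is a sketch of my own, not taken
from the lecture. The docstring states the theorem; the
loop invariant is what one would prove by induction. The
asserts merely check the invariant on the runs we happen
to execute, whereas a proof establishes it for every n.
And even then the theorem speaks of mathematical
integers: a model of the program running on physical
hardware.

    def triangular(n: int) -> int:
        """Theorem to prove: for every integer n >= 0,
        triangular(n) == n * (n + 1) // 2."""
        assert n >= 0  # precondition
        s, i = 0, 0
        while i < n:
            i += 1
            s += i
            # Invariant, provable by induction on the
            # number of iterations:
            assert s == i * (i + 1) // 2 and i <= n
        # Postcondition: follows from the invariant
        # together with the exit condition i == n.
        assert s == n * (n + 1) // 2
        return s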

Now, it is the case that proving a program correct
enormously enhances its reliability. Although testing is
much weaker, it can also enormously enhance reliability.
In the case of testing it must be emphasized that it
needs to be done right, which is difficult. Of course,
the same holds for proving a program correct, but somehow
that does not need emphasis: every programmer thinks he
can test; rare is the programmer who believes she can
prove.

Because certainty in this matter is not possible, I will
talk about program reliability instead of correctness.
Correctness is a true-or-false concept. Reliability
covers a whole spectrum. On the scale that I have in
mind, Microsoft Windows and the typical applications
running on it are not reliable. An operating system where
security patches are needed on a recurring basis is
another example of unreliability.

The Real-World Brigade

After this interlude about reliability versus correctness
and testing versus proving, I continue with what I call
the “real-world brigade”, which encompasses almost all of
industry and most of academia. Frederick Brooks wrote a
paper called “No Silver Bullet” that expresses the world
view of the Real-World Brigade so eloquently that he has
become, whether he likes it or not, the ideologue of the
status quo.

Let us first see where that title comes from: “No Silver
Bullet”. I quote from the beginning of the paper:

Of all the monsters that fill the nightmares of our
folklore, none terrify more than werewolves, because
they transform unexpectedly from the familiar into
horrors. For these, one seeks bullets of silver that
can magically lay them to rest.
The familiar software project, at least as seen by the
non-technical manager, has something of this
character; it is usually innocent and straightforward,
but is capable of becoming a monster of missed
schedules, blown budgets, and flawed products. So we
hear desperate cries for a silver bullet–something to
make software costs drop as rapidly as computer
hardware costs do.

Brooks sees a long parade of academics touting methods to
supposedly make software costs drop rapidly. He reviews a
list that I will recite in a moment. Before I do so, you
need to know that the paper was presented in 1986.

Here are, then, the purported “silver bullets”: Ada and
other high-level languages, object-oriented programming,
artificial intelligence, expert systems, automatic
programming, graphical programming, program verification,
environments and tools, workstations.

All of these, Brooks points out, only address the
representation of the conceptual construct of the
software system. The difficulties of representing the
construct are only an accidental part of the difficulties
of software development. The essence of the software
system is the conceptual construct itself.

Opposed to the accidental is the essential. What, then,
is the essence according to Brooks? He says this:

Let us consider the inherent properties of this
irreducible essence of modern software systems:
complexity, conformity, changeability, and
invisibility.

As I said, Brooks is eloquent.

He has much to say about the first of the Ferocious
Four: complexity. I will just quote:

The complexity of software is an essential property,
not merely an “accidental” one. Hence, descriptions of a
software entity that abstract away its complexity
necessarily abstract away its essence.

By “conformity” Brooks means that, as software is
supposedly infinitely adaptable, it must span the vast
chasm between unadaptable humans and unadaptable
hardware.

By “changeability” Brooks means that, as any change in
software can be effected by a mere change in text, all
desires for change are to be met as a matter of course by
software.

By “invisibility” Brooks means that neither text nor
graphics are an adequate representation of the conceptual
constructs that grow rampantly in response to marketing
demands and unanticipated programming difficulties.

Such are, according to Brooks, the inherent properties of
the irreducible essence of modern software systems.

A voice from the opposition

In 1972 Edsger W. Dijkstra received the Turing award from
the Association for Computing Machinery. On the occasion
he delivered an address entitled “The Humble Programmer”.
The content has been, as far as I can tell, completely
ignored by the world. In defence of this neglect it can
be said that a prediction made in the address has failed
to come true.

In fact, Dijkstra did not present it as a prediction. He
said:

Let me sketch for you one of the possible futures. At
first sight, this vision of programming in perhaps
already the near future may strike you as utterly
fantastic. Let me therefore also add the
considerations that might lead one to the conclusion
that that vision could be a very real possibility.

The vision is that, well before the seventies have run
to completion, we shall be able to design and
implement the kind of systems that are now straining
our programming ability at the expense of only a few
percent in man-years of what they cost us now, and
that besides that, these systems will be virtually
free of bugs. These two improvements go hand in hand.
In the latter respect software seems to be different
from many other products, where as a rule higher
quality implies higher price. Those who want really
reliable software will discover that they must find
means of avoiding the majority of bugs to start with,
and as a result the programming process will become
cheaper. If you want more effective programmers, you
will discover that they should not waste their time
debugging — they should not introduce bugs to start
with. In other words, both goals point to the same
change.

Dijkstra went on to argue that this future was indeed
possible. He recognized three necessary conditions. The
third of these conditions is especially relevant for
tonight’s topic: is the revolution sketched by him
technically feasible?

He advances a number of arguments that it is. More
important than the arguments themselves is his preamble
to them:

I now suggest that we confine ourselves to the design
and implementation of intellectually manageable
programs. If someone fears that this restriction is so
severe that we cannot live with it, I can reassure
him: the class of intellectually manageable programs
is still sufficiently rich to contain many very
realistic programs for any problem capable of
algorithmic solution.

Here it is: the irreconcilable juxtaposition. Brooks
claiming that complexity is one of the inherent
properties of the irreducible essence of modern software
systems. Dijkstra claiming that we stand to lose nothing
if we restrict ourselves to intellectually manageable
programs. This is why Dijkstra gave his speech the title
“The Humble Programmer”.

Thirty-five years have gone by since Dijkstra’s speech. It was halfway
through that period that Brooks so tellingly crystallized the conventional
wisdom on this matter. Now we hear suggestions to stop the exemption
of product liability enjoyed by software firms. This is a good time to
add an observation that Dijkstra could have made.

The observation is this. One of the foundational dogmas of software
“engineering” is that one starts out with a requirements specification
that is independent of any implementation consideration. To do
otherwise would be to violate the sequence according to which
implementation only comes after design, which comes after the
requirements process. To do otherwise is to invite chaos. The actual
artifact eventually emerging is supposed to be a complete surprise.

In all this preoccupation with process, intellectual manageability is
never a consideration. It may be very hard to predict from a
requirements specification whether the subsequent design and
implementation is going to be intellectually manageable.

Tellingly, the most successful area of software development is ignored
by software engineering, and it ignores software engineering:
compilers. This is one kind of software that often is intellectually
manageable and where a new project consists of modifying an existing
intellectually manageable system. Here is one kind of project without
“… missed schedules, blown budgets, and flawed products”, as Brooks
so aptly describes some other kinds of project.

And this is exactly how innovation proceeds in engineering. I mean
real engineering, not software development. Real engineers are liable
for failures in their products. Dijkstra’s proposal to restrict
consideration to intellectually manageable designs, which is
considered so utterly unrealistic in software, is utterly obvious to
engineers. So obvious that it may never even have been stated.

Of course, there is plenty of scope for unmanageable complexity in
engineering. There are plenty of ideas for new designs, new materials,
new production methods, new computation methods, … Unmanageable
complexity is not the prerogative of software.

But an engineer knows he’s going to be personally liable if he takes
but one small step beyond what’s intellectually manageable. Compared
to software “engineers”, engineers are humble people: they know their
limitations. They have to.

What then to do about Brooks’s typical software development project
that

…is usually innocent and straightforward, but is
capable of becoming a monster of missed schedules,
blown budgets, and flawed products?

The silver bullet that Brooks says does not exist is a
humble programmer, who keeps complexity within the bounds
of manageability. It is a software engineer who is an
engineer.
