Maybe not. It used to be taken for granted that in the more or less remote future it would no longer be necessary to debug software because it had been *verified*.

That time is long past. What we now take for granted is that in, for example, the two pieces of software most essential to the health and safety of the internet, MS Windows and MS Internet Explorer, “vulnerabilities” are discovered at such a rate that the repairs are saved up for the second Tuesday of each month. Rarely does such a Tuesday pass without an update; sometimes there are five or six. Another exhibit for the prosecution: many pieces of software that invite being downloaded distinguish between the “latest version” and the “latest *stable* version”. Forget correctness: “stability” is the goal. Diminished expectations …

For smaller programs, such as device drivers, one can still find people aiming at verification. Two approaches have been considered: formal and informal. In formal verification, the program and its desired properties are stated as theorems in a suitable formal system. All that is needed then is to prove the theorem. The attraction of converting the program and the properties to a formal system is that such a proof can then be carried out by a program. Let’s call this program a mechanical prover, to cover both inference systems and model checkers.
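To give a flavour of what such a theorem looks like (a toy illustration of my own, not taken from the reports cited below), the claim that an assignment establishes a desired property can be written as a Hoare triple:

```latex
% Precondition, program, postcondition:
\{\; x \ge 0 \;\} \qquad y := x + 1 \qquad \{\; y > 0 \;\}
% The triple is a theorem because the verification condition
%    x >= 0  implies  x + 1 > 0
% holds. For a realistic program a mechanical prover must discharge
% a great many such conditions.
```

Even this trivial example presupposes a formal system in which assignment, implication, and arithmetic have been axiomatized; hence the hundreds of pages of formalized concepts mentioned in the report quoted next.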

An example of the kind of difficulty encountered here is described by Martin Ouimet in his paper “Formal Software Verification: Model Checking and Theorem Proving” (MIT Embedded Systems Laboratory, 2007):

“A hurdle to usability is the size of the proof system and the size of the proofs that are being derived using mechanical provers. As a sample case study, a verification approach conducted at Motorola utilized a proof system with relevant concepts spanning hundreds of pages. Furthermore, on the way to the proof of a theorem, it is possible that hundreds and potentially thousands of lemmas must be proved along the way. In the Motorola case study, the size of a proof of a meaningful theorem is stated to require 25 megabytes of memory to print.”

Note that a good-sized novel takes up about half a megabyte. So the situation seems to be that you feed the mechanical prover the inputs, you go out for lunch, and find on return an enormous amount of unintelligible output. This purports to be a proof. You skip to the end, where it says: QED.

The question arises: does this episode raise your confidence in the program subjected to this treatment; does it help you to make the decision whether it is justified to include it in a safety-critical application such as a controller of an aircraft or of a cardiac pacemaker?

For starters, the verification program itself is not verified. Before we even get to the (unintelligible) proof, the concepts used in expressing the desired properties have to be formalized. In the report mentioned, Ouimet speaks of hundreds of pages of Lisp to express the concepts. Verifying these may not be easier than verifying the software itself.

Compare formal verification, as just sketched, to some informal verification methods that were tried long ago. What I have in mind are the Code Inspections in the so-called Cleanroom Method of software development. According to this method, programmers are not permitted to run their code. Instead, they are required to revise it until they are confident of its correctness. This confidence can only be based on understanding the source code. This understanding is then tested in an inspection, a formal meeting with a few peers, in which one of them goes through the code, paraphrasing small bits at a time, leading an attempt by all present to detect errors. This practice has been shown to improve the quality of software compared to the still prevalent approach where no inspections are conducted, and where confidence is gained by failure to find faults during testing.

This is the contrast I want to consider in this essay: on the one hand the approach via formal verification, which turned out to be costly and fruitless; on the other hand the approach via inspections, requiring no science, no new methods, only common sense. The latter turns out to be effective. How is it possible that the effective, easy, and obvious was overlooked in favour of the difficult, expensive, and impotent? Such a perverse preference can only be explained by a *mindset* that turns the obvious into its opposite. If this is indeed so, the mindset must influence other affairs than those related to software development. I will here quote evidence that this is indeed the case: that the mindset is so pervasive as to have been identified elsewhere well before the first attempts at formal verification of programs, and that its belated identification in computing brands this field as a backwater in the world of ideas.

The mindset is called “modernism”; its sane alternative goes by the name of *rhetoric*. The onetime success of software inspections was, in computing, a short-lived triumph of rhetoric over its modernist alternative.

In the remainder of this essay I will sketch various aspects of modernism. Of rhetoric I will say little more than the obvious: that “specious eloquence” is not its primary meaning. In the Humanities side of C.P. Snow’s “Two Cultures” it is not necessary to make this point. On the other side it very much needs to be made. Rhetoric, then, is the *art of persuasion*. This is the case independently of whether the motive for persuasion is honourable. The primary tool of rhetoric is natural language. In certain specialized and limited contexts graphs, tables, and algebraic expressions are supporting tools of rhetoric. It is the modernist delusion to believe that such specialized and limited contexts are strongholds from which the entire range of rational discourse is to be conquered.

My starting point has been a paper by D. McCloskey entitled “The Rhetoric of Economics” (now a book). McCloskey argues that rhetoric should be the main intellectual tool of economists, but that this central position has been usurped by modernism, the complex of attitudes and techniques that McCloskey introduces in the following way:

(It) is an amalgam of logical positivism, behaviourism, operationalism, and the hypothetico-deductive model of science. Its leading idea is that all sure knowledge is modeled on the early 20th century’s understanding of certain pieces of 19th-century physics. To emphasize its pervasiveness in modern thinking well beyond scholarship it is best labeled simply “modernism,” that is, the notion … that we know only what we cannot doubt and cannot really know what we can merely assent to.

By now modernism has achieved a venerable old age. Here are some modernist quotes; one each from the 17th, 18th, 19th, and 20th centuries.

Leibniz (1646 — 1716) envisioned a generalized mathematics, by which thinking could be replaced by calculation:

If we had it, we should be able to reason in metaphysics and morals in much the same way as in geometry and analysis… If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants^{[*]}. For it would suffice to take their pencils in their hands, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): let us calculate.

Moving to the 18th century, we have the following from David Hume:

When we run over libraries, persuaded of these [modernist] principles, what havoc must we make? If we take in our hand any volume — of divinity or school metaphysics, for instance — let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.

Lord Kelvin:

When you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.

Although for now modernists have the rostrum, I can’t help inserting at this point the following postmodern retort: “Yes, and when you *can* express it in numbers, your knowledge is of a meagre and unsatisfactory kind.”

As representative of the 20th century, I select Wittgenstein, who wrote:

Anything that can be said at all, can be said clearly. And of what cannot be said clearly, one must not speak.

Again a postmodernist retort (Michael Polanyi in 1962): the methodology of modernism sets up “quixotic standards of valid meaning which, if rigorously practiced, would reduce us all to voluntary imbecility.”

McCloskey was concerned with the ill effects of modernism in economics. Susanne Langer (“Philosophy in a New Key”, 1941) makes a similar observation for psychology:

Psychologists have probably spent almost as much time and type avowing their empiricism, their factual premises, their experimental techniques, as recording experiments and making general inductions.

A few pages later:

The physicists’ scheme, so faithfully emulated by generations of psychologists, epistemologists, and aestheticians, is probably blocking their progress, defeating possible insights by its prejudicial force. The scheme is not false … but it is bootless for the study of mental phenomena. … Instead of a method, it inspires a militant methodology.

And not only psychologists, epistemologists, and aestheticians suffer self-inflicted debilitation by adopting modernist trappings. This can also be said of sociologists, management scientists, and computer scientists.

The modernist urge to derive respectability from the formal, the abstract, and the numerical has given matrix algebra, especially of the computational variety, an unwarranted prominence in management science and psychology (just as there exists Mathematical Physics, there is Mathematical Psychology; see e.g. “Handbook of Mathematical Psychology”, three volumes, 1965). As a result, some papers are published in the journals of one of these disciplines that could as well have appeared in any of the others. Researchers are welcomed to their university faculties mainly on the strength of their expertise in matrix algebra. Their prime allegiance is to matrix algebra rather than to the discipline of their department.

Mathematics plays a crucial role in the understanding of the tension between modernism and rhetoric. On the one hand the modernist mind has seen mathematics as an example to be followed: surely here is the ultimate in precise and secure knowledge, an example to be emulated by all scholarship. On the other hand, it has escaped the modernist that mathematics is still done by old-fashioned rhetoric. Where the modernist psychologist is a heavy user of computers (for computation), the mathematician has no such need. Hilbert’s program, the quintessence of modernism, has never been followed in mathematics. However, it is the precursor of the program verification enterprise in computer science.

This is not to say that mathematics and program verification should only be conducted in natural language. Algebraic notation, introduced centuries ago, is now very much accepted in the rich, unmodernist rhetoric of mathematics. It is important to realize that algebraic notation can be used as an extension of natural language to clarify (to the *human* mind) what would otherwise be obscure. But such use need not be part of a formal method. Algebraic notation can be used in a rhetoric that is both rigorous and informal. A successful shift occurred, then, in which algebraic notation was incorporated into the language of rhetoric.

On the one hand, the modernist excesses of recent times must be rectified. On the other hand, this must not imply that rhetoric should forever be shielded from further enrichment by modest incorporation of additional algebraic notation. We must keep in mind that modest steps are called for: so far, anything beyond English enriched by formulas, graphs, and tables has proved to be sterile. And the additions should be made by those who appreciate the power of traditional rhetoric in mathematics. This disqualifies, for example, E.W. Dijkstra (see his note EWD1012), who denounces this rhetoric as a manifestation of the spirit of the medieval guilds: calculated to maintain spurious barriers preventing entry by the laity. Apparently Dijkstra believed that it is territorialism that prevents the process of mathematical discovery from being formalized and automated without loss of power.

To return to the success of rhetoric in computing in the form of the code inspection: in its reported form, inspections are traditional rhetoric, unaided by any symbolic logic. I believe that an informal version of some program verification techniques, such as Floyd’s method of assertions, can make inspections more effective. My guess is that the optimum consists of including assertions in the code, assertions consisting of English fortified by logical formulas. This is far from a formal system and much like the traditional rhetoric of mathematics.
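A sketch of what I have in mind (the function and its invariant are my own illustration, not taken from Floyd): each assertion pairs an English explanation with a checkable formula, so that an inspector can follow the argument and, incidentally, the machine can check it at run time.

```python
def divide(a: int, b: int) -> tuple[int, int]:
    """Quotient and remainder of a by b, by repeated subtraction."""
    assert a >= 0 and b > 0, "precondition: a non-negative, b positive"
    q, r = 0, a
    while r >= b:
        # Invariant, in the style of Floyd: whatever has been subtracted
        # so far is accounted for in the quotient, i.e. a == q*b + r,
        # and the remainder has not gone negative.
        assert a == q * b + r and r >= 0
        q, r = q + 1, r - b
    # Postcondition: the invariant still holds, and now r is a
    # proper remainder, smaller than the divisor.
    assert a == q * b + r and 0 <= r < b
    return q, r
```

The point is not that the assertions be executed; it is that the English and the formulas together carry the argument that a peer at an inspection can follow and challenge.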

^{[*]}Leibniz is naive about accounting. If ever there is a field in which categories are disputable, it is that.
