A Programmer's Place

“The unreasonable effectiveness of mathematics in the design and implementation of software systems”

Alas! If only the current state of affairs were such that a future scholar would feel impelled to write a paper with this title, just like in 1960, when Eugene Wigner wrote his widely quoted “The unreasonable effectiveness of mathematics in the natural sciences”. In the hope that at some future time it will improve the sorry state of software, let us consider how mathematics came to be “unreasonably” (i.e. surprisingly, mysteriously) effective in the natural sciences.

Some of Wigner’s predecessors in physics did not consider the effectiveness of mathematics unreasonable. For example, Galileo is often quoted as saying:

Philosophy is written in that great book which ever is before our eyes—I mean the universe—but we cannot understand it if we do not first learn the language and grasp the symbols in which it is written. The book is written in mathematical language, …

By 1960, when Wigner wrote his paper, it was clear that Galileo was right, but it is a mystery why Galileo believed this, because in the same sentence he spoils it all by continuing with

… and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.

The book is indeed written in mathematics, in symbolic form, but the symbols do not include triangles or circles. In the following pages I trace how mathematics went through a revolution, characterized by Michel Serfati as “The Symbolic Revolution” [1], spanning Galileo’s time. But the symbolic revolution did not come in time to empower Galileo; the first physicist to benefit was Isaac Newton, who was born around the time Galileo died.

The most conspicuous achievement of the Scientific Revolution (1600-1700) [2] is the infinitesimal calculus. Many people think that algebra is many centuries older, but in fact the plain old school algebra as we know it is not much older than the calculus. The invention of algebra was the big event. As John Allen puts it [3]:

When we think of engineering mathematics, we usually think of the 17th century calculus of Newton and Leibniz. But their calculus is the frosting, not the cake in modern science. Without a symbolic language for general mathematical ideas, the originators of the calculus would have been hard-pressed to make their advances. The fundamental breakthrough occurred approximately one hundred years earlier with the invention of symbolic algebra.

In fact, we have to correct Allen a bit: the main contributors to algebra were Viète with his Isagoge (1591) and Descartes with his La Géométrie (1637). Not only were mathematicians impressed by algebra; even philosophers were bowled over:

They that are ignorant of algebra cannot imagine the wonders in this kind that are to be done by it; and what further improvements and helps, advantageous to other parts of knowledge, the sagacious mind of man may yet find out, it is not easy to determine. This at least I believe: that the ideas of quantity are not those alone that are capable of demonstration and knowledge; and that other and perhaps more useful parts of contemplation would afford us certainty, if vices, passions, and domineering interest did not oppose or menace such endeavours …. The relation of other modes may certainly be perceived, as well as those of number and extension; and I cannot see why they should not also be capable of demonstration, if due methods were thought on to examine or pursue their agreement or disagreement.

This is not Newton writing home to his mother, but John Locke in a book published in 1690 [4].

How did Locke know? Isaac Newton had written, but not published, samples of these algebraic marvels. The only work of Newton’s published before 1690, the year of Locke’s Essay …, is Philosophiae Naturalis Principia Mathematica (“Principia”, 1687). The curious thing about this book is that Newton did not use algebra, but went to considerable trouble to translate the algebra into the circles and triangles familiar to Galileo, as if unwilling to reveal his secret weapon.

I agree with Allen that algebra was, compared to calculus, the main event. But calculus was the killer application of algebra, especially in the notation introduced by Leibniz. The effectiveness of mathematics in physics became even clearer in the 18th century with the development of analytical mechanics. But here the effectiveness would not be considered mysterious because analytical mechanics could just as well be considered part of mathematics itself.

The mystery sets in when the physics becomes driven by the mathematics. I only see this starting to happen later, with Maxwell’s equations for the electro-magnetic field. Maxwell started with Gauss’s laws about the nature of the electric and magnetic fields by themselves. To these he added Ampère’s law, relating current to the magnetic field, and Faraday’s law, relating a changing magnetic field to the electric field. When Maxwell became the first to write these laws as a system of interlocking equations, he noticed that the mathematically symmetric counterpart of Faraday’s law was missing. Thus, by making the system of equations symmetric, he added a term, which turned out to be a law of nature.
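In modern vector notation, which postdates Maxwell, the system can be sketched as follows; the braced term in the last equation, the so-called displacement current, is the symmetric counterpart that Maxwell added:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
  \qquad \text{(Gauss, electric)}
\qquad
\nabla \cdot \mathbf{B} = 0
  \qquad \text{(Gauss, magnetic)}
```
```latex
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
  \qquad \text{(Faraday)}
\qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}
  + \underbrace{\mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}}_{\text{Maxwell's addition}}
  \qquad \text{(Ampère, completed)}
```

With the added term, a changing electric field generates a magnetic field, mirroring Faraday’s law, in which a changing magnetic field generates an electric field.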

Maxwell thought of his equations as describing tensions in an all-pervading medium, the “ether”. Contemporary physicists don’t believe in such things any more. The equations not only survive, but are enthroned in the Standard Model, which unifies all forces of nature except gravitation. Of course, here Maxwell’s equations have a new interpretation. As Weinberg remarks [5], the physics changes, but the equations survive.

Another example of Weinberg’s is the Dirac Equation. The short, slick version of the story is that in 1928 Dirac set out to find a version of the Schrödinger wave equation that is consistent with special relativity. A reason to apply the equation to electrons was the proposal of Goudsmit and Uhlenbeck to explain some mysteries in connection with atomic spectra. They proposed that electrons were spinning particles generating a magnetic moment. This would not only explain the observed structure of the spectral lines, but also explain Pauli’s exclusion principle. The problem was that classical mechanics predicted a gyromagnetic factor g = 1, whereas the spectra required g = 2. Moreover, the electrons would have to be spinning impossibly fast.

The Dirac equation predicted an electron with spin one half, just as in the Goudsmit/Uhlenbeck proposal. It also predicted g = 2. So far so good. However, Dirac had set out to find a version of the Schrödinger wave equation that is consistent with special relativity. Dirac’s “equation” is actually a system of four interlocking equations. The successes just mentioned were obtained from two of the four. Attempts to interpret the other two resulted in the apparent absurdity of a particle with the properties of the electron, except with the opposite (positive) charge. In 1932 Carl Anderson found evidence of just such a particle in tracks left by cosmic rays in a cloud chamber.
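In modern covariant notation, with the natural units ħ = c = 1 (conventions not used in the story above), Dirac’s system can be written compactly as a single equation for a four-component wave function:

```latex
\left( i\gamma^{\mu} \partial_{\mu} - m \right) \psi = 0,
\qquad
\psi = \begin{pmatrix} \psi_{1} \\ \psi_{2} \\ \psi_{3} \\ \psi_{4} \end{pmatrix}
```

Since the γ^μ are 4×4 matrices, this one line is the “system of four interlocking equations”: two components describe the spin-up and spin-down electron, and the other two are the pair that forced the positron interpretation.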

This looks like triumph upon triumph for Dirac’s equation. However, Weinberg remarks [5, p. 255] “The trouble is that there is no relativistic quantum theory of the kind that Dirac was looking for.” The happy ending is that Dirac’s equations are now part of the Standard Model, with a different interpretation. Meanings come and go; some formulas survive.

When Dirac was old and famous he said “A great deal of my work is just playing with equations and seeing what they give.” He wasn’t the first physicist to do so. In the 1890s it was noticed that Maxwell’s equations for the electro-magnetic field are not invariant under the Galilean transformation, a sort of sanity check that Newtonian mechanics passes. This was a matter of mathematical aesthetics, not a problem of physics.

Yet the physicists FitzGerald and Lorentz, I presume out of curiosity, looked for a transformation under which Maxwell’s equations were invariant. This must have involved some serious playing with equations. But Newton’s F = ma is not invariant under the FitzGerald/Lorentz transformation. The next episode in this little saga was Einstein, not happy with electro-magnetic field theory and mechanics living in different worlds, wondering how to tweak the F = ma of Newton’s second law so as to make it invariant under the FitzGerald/Lorentz transformation. This led to the special theory of relativity, where algebraic play apparently played an important role.
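A minimal sketch of what they found, in modern notation (the factor γ and the symbol c for the speed of light are not in the story above): for relative velocity v along the x-axis,

```latex
x' = \gamma\,(x - vt),
\qquad
t' = \gamma \left( t - \frac{vx}{c^{2}} \right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

In the limit c → ∞ this reduces to the Galilean transformation x′ = x − vt, t′ = t, which is why Newtonian mechanics passes the one sanity check and Maxwell’s equations the other.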

Around the same time, Planck was tackling a mystery in physics, that of the spectrum of black-body radiation. At the low-frequency end the Rayleigh/Jeans formula gave a good fit to the data. At the high end Wien’s formula held sway. Away from these extremes both formulas gave nonsense. Planck’s discovery was a formula that agreed with both laws where they worked, and fit the data over the entire spectrum. You don’t find such a formula by fitting curves to data; it can only happen by playing with equations. The amazing thing is that from the black-body radiation data of 1900 and Planck’s formula one gets a good value for the constant h, the incredibly small number in Heisenberg’s uncertainty relation that explains so much in quantum mechanics.
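Planck’s formula for the spectral energy density, in modern notation (k is Boltzmann’s constant; the text above does not give the formula itself), makes the “agrees with both laws” claim concrete:

```latex
u(\nu, T) = \frac{8\pi h \nu^{3}}{c^{3}} \,
            \frac{1}{e^{h\nu / kT} - 1}
```

For hν ≪ kT, expanding the exponential gives the Rayleigh/Jeans law, u ≈ 8πν²kT/c³; for hν ≫ kT, the −1 is negligible and Wien’s law, u ≈ (8πhν³/c³)·e^(−hν/kT), emerges. Fitting the 1900 data then fixes the value of h.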

I must not make it appear as if the new mathematics of the Symbolic Revolution was obediently marching along in the service of physics. In the 18th century, in the hands of the Bernoullis and Euler, mathematics becomes an exuberant romp through new territory, neither avoiding nor subservient to anything in the real world. An intoxicated d’Alembert wrote, I imagine reaching for a new sheet of paper to be filled with formulas, “Allez en avant, la foi vous viendra” (push on, and faith will come to you), telling us to push ahead without worrying too much about what it all means.

I quoted Locke earlier as saying “They that are ignorant of algebra cannot imagine the wonders in this kind that are to be done by it; …” which, when published in 1690, could only have been a prophecy: how could he have known? One who could was Heinrich Hertz:

It is impossible to study this wonderful theory [Maxwell’s theory of the electro-magnetic field] without feeling as if the mathematical equations had an independent life and intelligence of their own, as if they were wiser than ourselves, indeed wiser than their discoverer, as if they gave forth more than was put into them [6, page 318].

Hertz knew, because he reworked Maxwell’s equations into the unassailable form in which they have stood ever since.

Wigner’s phrase “The Unreasonable Effectiveness of Mathematics …” sounds too much like mathematics as a handmaiden of physics. As Weinberg claimed, the Great Equations have a life of their own. They stay, the physics comes and goes. In Plato’s cave analogy we mortals can only perceive imperfect shadows cast by the pure forms. The current interpretations are like the shadows. The difference is that the pure forms can be perceived by mortals: the prof just wrote one on the blackboard.

What is it in physics that makes it so susceptible to the applicability of mathematics? Can this property not be shared by the design and implementation of software systems? It may be objected that the latter is subject to many constraints, but then it should be realized that the effectiveness of mathematics in physics is so surprising because of the stringent constraints imposed by experimental outcomes.

Acknowledgements

I am indebted to Paul McJones for his willingness to review a draft and for pointing out that my first version of the final paragraph was wrong.

References

[1] La Révolution Symbolique: la Constitution de l’Écriture Symbolique Mathématique by Michel Serfati. Pétra, 2005.
[2] The Invention of Science: a New History of the Scientific Revolution by David Wootton. Allen Lane, 2015.
[3] “Whither software engineering” by John Allen. Workshop on Philosophy and Engineering. London, 2008.
[4] An Essay Concerning Human Understanding by John Locke. Quoted by H. J. Kearney, Origins of the Scientific Revolution (London, 1965), pp. 131ff.
[5] “How Great Equations Survive” by Steven Weinberg. In It Must Be Beautiful, edited by Graham Farmelo. Granta, 2002.
[6] Miscellaneous Papers by Heinrich Hertz. Macmillan, 1896.