The first programming language to allow functions to be recursively defined was McCarthy’s LISP in 1959. Its introduction was not controversial: nobody but John McCarthy had any say in what the language was going to be. In addition to his work on LISP, McCarthy was on the committee finalizing the Algol language in 1959 and 1960. In spite of the fact that a majority was opposed to it, the definition of Algol 60 ended up allowing recursively defined procedures. In [1] I gave an account of how this happened.
Why was recursion such a big deal? For us this is hard to understand: for decades the programming language C, not exactly a paragon of the avant-garde, has allowed recursion. Still, remnants of unease remain: in some introductory courses, recursion plays the role of a pons asinorum. Have instructors been traumatized by a sarcastic teacher finding, oh horrors, a circular definition in their high-school essay? In this essay I’m going to explore recursion by contrasting it with its opposite, iteration.
I propose that the distinction between recursion and iteration for the same computational task is related to a possibility noted by Robert Kowalski: the same induction rule can be used in bottom-up and in top-down reasoning, and the choice between these modes has algorithmic consequences [2].
For example, here is an inductive characterization of the Fibonacci numbers:
fib(0) = 0 and fib(1) = 1 and
for all a,b,n:
fib(n) = a and fib(n+1) = b -> fib(n+2) = a+b
which says that whenever two successive Fibonacci numbers are known, the next can be found by adding them. This reading uses the rule in bottom-up reasoning mode. It suggests an iterative program. In C it could be:
int fib(int n) {
    if (n == 0 || n == 1) return n;
    {
        int a = 0, b = 1, i = 1;
        while (i++ < n) {
            int temp = a;
            a = b;
            b += temp;
            // b is the i-th Fibonacci number
        }
        // b is the n-th Fibonacci number
        return b;
    }
}
The induction rule can be put in a different format by having the implication arrow point the other way. This suggests its use in top-down reasoning mode:
fib(0) = 0 and fib(1) = 1 and
for all a,b,n:
fib(n) = a+b <- fib(n-1) = b and fib(n-2) = a
This format of the rule transcribes to a recursively defined function:
int fib(int n) {
    if (n == 0 || n == 1) return n;
    return fib(n-1) + fib(n-2);
}
More succinct, more obviously correct with respect to the induction rule, and egregiously inefficient.
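How inefficient? The number of calls made by this function satisfies the Fibonacci recurrence itself, so it grows exponentially in n. A minimal sketch makes this visible (the instrumentation is mine, not part of the original):

#include <stdio.h>

static long calls = 0; // counts activations of fib

int fib(int n) {
    calls++;
    if (n == 0 || n == 1) return n;
    return fib(n-1) + fib(n-2);
}

int main(void) {
    printf("fib(30) = %d after %ld calls\n", fib(30), calls);
    // prints: fib(30) = 832040 after 2692537 calls
    return 0;
}

In general this version makes 2*fib(n+1) - 1 calls to compute fib(n), so each increment of n multiplies the work by roughly the golden ratio.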
Clearly, a strike against recursion.
That a majority of the Algol committee was opposed to allowing recursion in the language was probably not because of Fibonacci numbers. The justification was that it was not clear how to implement it. Moreover, Algol was meant to be for numerical computation and there seemed to be no need for recursion in that type of application. In the fullness of time everyone was convinced of the virtues of recursion by Hoare’s Quicksort [3].
That majority position against recursion is interesting because it is wrong on two counts.
- Adaptive Simpson integration is numerical and every bit as recursive as Quicksort is. This shows that even numerical analysts have a limited view of numerical analysis.
- How convincingly recursive is Quicksort anyway? More on this point below.
For a long time I thought that iterative Quicksort, while possible, would be too horrible to contemplate. Until I took the trouble to write it.
First, as a basis for comparison, the convincingly and elegantly recursive Quicksort.
void qSort1(int a[], int p, int q) { // sorts a[p..q-1]
    if (p >= q-1) // one element or fewer; do nothing
        return;
    int m = partition(a, p, q); // a[m] is in place
    qSort1(a, p, m);            // sort a[p..m-1]
    qSort1(a, m+1, q);          // sort a[m+1..q-1]
}

void qSort(int a[], int n) { // sorts a[0..n-1]
    qSort1(a, 0, n);
}
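The function “partition” is assumed throughout and not shown. For readers who want to run the code, here is one possible implementation (the Lomuto scheme, my choice here; Hoare’s original partition [3] differs): it places some element a[m] in its final position and returns m.

int partition(int a[], int p, int q) {
    // Rearranges a[p..q-1] around the pivot a[q-1]; returns the
    // pivot's final index m, with a[p..m-1] < a[m] <= a[m+1..q-1].
    int pivot = a[q-1];
    int m = p;
    for (int i = p; i < q-1; i++)
        if (a[i] < pivot) {
            int t = a[i]; a[i] = a[m]; a[m] = t;
            m++;
        }
    int t = a[m]; a[m] = a[q-1]; a[q-1] = t;
    return m;
}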
But the recursive version is only acceptable if it is acceptable that an unfavourable input results in overflow of the system stack. If not, then some modifications are called for.
First, the second of the two recursive calls needs to be eliminated. This is the well-known tail-recursion elimination: the observation that under certain conditions a recursive call at the end of the function body can be replaced by a transfer of control to the beginning.
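Applied naively to the second call of qSort1, this gives something like the following intermediate version (my reconstruction of the step; the original does not show it):

void qSort1(int a[], int p, int q) { // sorts a[p..q-1]
    while (p < q-1) {
        int m = partition(a, p, q); // a[m] is in place
        qSort1(a, p, m); // sort a[p..m-1]
        p = m+1; // the tail call on a[m+1..q-1] becomes the next iteration
    }
}

The wrapper qSort is unchanged.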
Once this is done, we can choose which of the two segments resulting from the call to “partition” is to be sorted by the remaining recursive call. By choosing the smaller of the two segments we ensure that the number of pending activations of “qSort1” is bounded by log n for a segment of length n.
After these modifications the elegance of the original Quicksort is somewhat tarnished:
void qSort1(int a[], int p, int q) { // sorts a[p..q-1]
    int m;
    while (p < q-1) {
        m = partition(a, p, q); // a[m] is in place
        if (m-p < q-(m+1)) {
            qSort1(a, p, m);
            p = m+1;
        } else {
            qSort1(a, m+1, q);
            q = m;
        }
    }
}

void qSort(int a[], int n) { // sorts a[0..n-1]
    qSort1(a, 0, n);
}
But, of course, still elegant.
Without recursion the overflow-proof version becomes something like the code shown below. It is based on a pair of stacks
int lb[50] and int rb[50]
for the left and right bounds of array segments that still need to be sorted. Thus typically, with k items on the stacks, we still need to sort
a[lb[i] .. rb[i]-1]
for i = 0, …, k-1.
void qSort1(int a[], int p, int q) { // sorts a[p..q-1]
    const int max = 50;
    int lb[max], rb[max], n = 0;
L:  // to be sorted: a[p..q-1] and the n items on the stack
    if (p < q-1) { // at least two elements in a[p..q-1]
        int m = partition(a, p, q);
        if (m-p < q-(m+1)) {
            // push smaller (left) half
            lb[n] = p; rb[n] = m; n++;
            // sort right half
            p = m+1;
            goto L;
        } else {
            // push smaller (right) half
            lb[n] = m+1; rb[n] = q; n++;
            // sort left half
            q = m;
            goto L;
        }
    }
    // p >= q-1, so a[p..q-1] already sorted
    if (n == 0) return; // because nothing on the stack
    n--;
    p = lb[n]; q = rb[n];
    goto L;
}

void qSort(int a[], int n) { // sorts a[0..n-1]
    qSort1(a, 0, n);
}
Not as concise as the recursive version, but hardly the horrible mess I had hoped for as a fan of recursion.
A word about that label and my use of the g-word. I found that using any of the standard control patterns resulted in superfluous stacking and unstacking, or in superfluous testing. Maybe I didn’t try hard enough. But why should I have tried at all? Keeping in mind what needs to be true at the label (see the comment there), the code writes itself.
For Quicksort recursion is nice to have, but hardly essential. Once you know the idea, the iterative version is just as understandable as the recursive one. But to get the idea, recursion is essential. Or so I thought, until a knowledgeable friend told me that Sir Anthony Hoare, the inventor of Quicksort, wrote the code for it without even knowing that recursive subroutines are possible [4].
Now that recursion is seen as not essential to Quicksort, one may wonder whether it ever is. I give two examples of programming situations where it makes more than a superficial difference.
Let us consider the problem of producing the decimal numeral representing a given nonnegative integer. To avoid the clutter associated with manipulating strings or lists, we produce the numeral by printing its digits.
void num1(unsigned i) {
    while (i > 0) {
        print(i % 10);
        i /= 10;
    }
}

void num(unsigned i) { num1(i); nl(); }
Here “print” prints its argument as a decimal digit; “nl” prints a newline. Basically the same algorithm is used for printing decimal numerals in most libraries. This algorithm prints the digits backwards, which can be, and of course is, easily corrected by reversing the string produced.
Just out of curiosity, what does it take to print the digits immediately in the right order? Here is one way to do it. Regard the loop as an optimized recursion. The unoptimized version of the above is:
void num1(unsigned i) {
    if (i > 0) {
        print(i % 10);
        num1(i / 10);
    }
}

void num(unsigned i) { num1(i); nl(); }
This makes it obvious that the last digit gets printed first. It also makes it obvious what to do about it: print the last digit after the others, as in the code below.
void num1(unsigned i) {
    if (i > 0) {
        num1(i / 10);
        print(i % 10);
    }
}

void num(unsigned i) { num1(i); nl(); }
Two observations.
- To get the digits out in the correct order one would like to have temporary storage for those digits.
- In this program, by the time the last digit (in the correct order) is computed, there are as many activations of the function num1 as there are digits, and none of these has completed. Each of these activations stores its own copy of the parameter that allows another digit to be computed. The allocation and deallocation happen without any code explicitly forcing them to happen, at the computational cost of a function call and exit.
Allocation and deallocation of storage in local copies of parameters or in local variables or arrays has advantages over the use of storage in the heap. Allocation in the heap requires finding an unoccupied segment of sufficient length. When a program has caused a considerable number of allocations and deallocations of heap storage, the available storage on the heap has become fragmented. That is, even though the total unallocated space may be plentiful, it may be time-consuming, or even impossible, to find a free segment of the required length.
The technique exemplified here, that of using multiple activations of the same subroutine as temporary storage, is applicable whenever the storage can be allocated and de-allocated on a last-in first-out basis. I suspect that the heap is often (mis)used without considering alternatives.
Here is an example. Suppose that in C we want to copy standard input to an area in the heap. As the length of standard input remains unknown until the last byte has been read, temporary storage is needed to ensure that exactly the right amount of heap storage is allocated. As the needed temporary storage can be allocated and de-allocated last-in, first-out, an array local to a recursively defined function can be harnessed to this purpose.
As I am running out of time and space, I will omit further explanation, while realizing that, due to the unfamiliar technique, the code may present a bit of a puzzle.
#include <stdio.h>
#include <stdlib.h>

int stdin2heap1(char** p, int n, int size) {
    // returns number of characters in remainder of stdin;
    // n characters have been read by previous invocations
    // and are stored in block[] of previous invocations
    char block[size];
    int c;
    int m; // for number of characters to be read by future invocations
    int i = 0;
A:  // i characters have been read by this invocation; i < size
    c = getchar();
    if (c == EOF) {
        m = 0;
        *p = malloc(n + i);
        goto B;
    }
    block[i++] = c;
    if (i < size) // there is space in block[]
        goto A;
    m = stdin2heap1(p, n + size, 2*size);
B:  m += i;
    // m characters have been read by the current invocation
    // and by future invocations
    while (i--) *(*p + n + i) = block[i];
    return m;
}

int stdin2heap(char** p) {
    // Copy the contents of stdin to the heap.
    // Return the number of characters read.
    return stdin2heap1(p, 0, 1);
}
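To try it out, a driver could look like this (the driver is my addition, not part of the original):

int main(void) {
    char* buf;
    int n = stdin2heap(&buf);
    if (n > 0)
        fwrite(buf, 1, n, stdout); // echo stdin, now stored in the heap
    free(buf);
    return 0;
}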
It is essential here to acknowledge the crucial contributions of Gauthier van den Hove. At the same time the reader should be warned that any reasons for disliking this code are likely to be due to my extensive modifications.
Too bad that the operating systems people think that one should not do this kind of thing. That is at least my conclusion from the fact that even on systems generously endowed with RAM the system stack gets a measly 8 MB.
We noted Kowalski’s use of “bottom-up” versus “top-down” to characterize forward versus backward reasoning with inductive rules. This terminology derives from parsing, as we can see in Kowalski’s account [5] of his collaboration with Alain Colmerauer on the representation of context-free grammars in logic:
We saw that different kinds of resolution applied to the resulting grammars give rise to different kinds of parsers. For example, forward reasoning with hyper-resolution performs bottom-up parsing, while backward reasoning with SL-resolution performs top-down parsing.
Indeed, production rules of context-free grammars, typically recursive, read like inductive definitions.
Independently of this, compiler writers have identified top-down with recursive by attaching the term “recursive-descent” to top-down parsers. To me it seems that the natural way to define the syntax of a high-level programming language is by means of a context-free grammar and that the natural way of parsing sentences of a context-free language is with a recursive-descent parser. Therefore I am puzzled that this view is not shared by the majority of those who know more about compiler implementation than I do.
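To make the correspondence concrete, here is a minimal recursive-descent parser of my own devising (the grammar and code are illustrative assumptions, not taken from any compiler): each rule of a small expression grammar becomes one C function, and recursion in the grammar becomes recursion in the parser.

// Grammar:
//   expr   ::= term   { ('+'|'-') term }
//   term   ::= factor { ('*'|'/') factor }
//   factor ::= digit  | '(' expr ')'

static const char* s; // cursor into the input string

static int expr(void); // forward declaration: the grammar is recursive

static int factor(void) {
    if (*s == '(') {
        s++;            // skip '('
        int v = expr(); // the recursive call mirrors the grammar rule
        s++;            // skip ')'
        return v;
    }
    return *s++ - '0';  // a single digit
}

static int term(void) {
    int v = factor();
    while (*s == '*' || *s == '/') {
        char op = *s++;
        int w = factor();
        v = (op == '*') ? v * w : v / w;
    }
    return v;
}

static int expr(void) {
    int v = term();
    while (*s == '+' || *s == '-') {
        char op = *s++;
        int w = term();
        v = (op == '+') ? v + w : v - w;
    }
    return v;
}

// Example: after s = "2*(3+4)", the call expr() returns 14.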
I was gratified to find an account by Bjarne Stroustrup, the creator of C++, of his parsing adventures [6, pp. 68–69]. In 1982, when he was planning Cfront, the precursor of C++, he wanted to use a recursive-descent parser because he liked the ease with which such parsers can be made to provide helpful error messages and because he liked to have the full power of a high-level programming language available for implementing the decisions to be made in the parser.
However, Stroustrup took advice from his colleagues and instead used YACC to generate a parser. Given that these colleagues were Al Aho and Steve Johnson, that advice was somewhat biased. Stroustrup continues [6, p. 69]:
In retrospect, for me and C++ it was a bad mistake.
A litany of problems follows and concludes with the remark that several modern C++ compilers do use recursive descent.
Acknowledgments
Thanks to Paul McJones and Gauthier van den Hove for their careful reading and suggestions for improvement.
References
[1] “How Recursion Got Into Programming”
[2] “Algorithm = Logic + Control” by R.A. Kowalski. Comm. ACM 22(7), pp. 424–436, 1979.
[3] “Quicksort” by C.A.R. Hoare. Comput. J. 5 (1962), pp. 10–16.
“Algorithm 63: Partition” and “Algorithm 64: Quicksort” by C.A.R. Hoare. Comm. ACM 4 (1961), pp. 321–322.
[4] “The Emperor’s Old Clothes” by C.A.R. Hoare. Comm. ACM 24 (1981), pp. 75–83.
[5] “Logic Programming” by R.A. Kowalski, in Computational Logic, Dov Gabbay, Jörg Siekmann, and John Woods (eds.), volume 9 of the Handbook of the History of Logic, North-Holland, to appear.
[6] The Design and Evolution of C++ by Bjarne Stroustrup. Addison-Wesley, 1994.