The H-bomb and the birth of the computer, Part I: Edward Teller’s obsession

In 1945 the stored-program computer was invented; by 1950 every country wanted computers. None were for sale, so every country was trying to build them. Not just the big players, like the US and Britain; by 1955 computers had also been built in Switzerland, Russia, the Netherlands, Sweden, and Belgium. At the origin of this wave of enthusiasm was J. von Neumann’s prestige and advocacy. In 1945 he had not only been the first to describe how to build a computer in the modern sense, but was also convinced that this was nothing less than a new, universal research instrument with the potential to revolutionize science. As a result he wanted his design to become as widely known as possible, as soon as possible.

How von Neumann got involved in electronic computing will be the topic of the next instalment. Here I first want to recount why he got involved. It had to do with the unholy alliance of science and war.

Let’s talk about science first; before we know it, war will enter the picture. Let’s go back in time to the puzzles that confronted physicists in the 19th century. Light was one such puzzle: the wave theory had triumphed over Newton’s idea that light consisted of particles. But the wave theory required a transmitting medium, the “ether”, and Michelson and Morley failed to detect any movement of the planet Earth through the ether in a set-up that should have demonstrated it. Moreover, the photo-electric effect suggested that light consisted of particles after all.

Then there were the puzzles of the ages of the Earth and the Sun. Geologists insisted that the Earth was at least hundreds of millions of years old. Yet the planet was still mostly molten rock and metal with a relatively thin layer of slag on top, and showed no sign of cooling down. Even more puzzling was the Sun. How could it possibly have continued pouring out energy in the prodigious quantities observed at present during the aeons required by the geologists? The physically plausible models gave an age of less than a hundred million years.
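The physically plausible model of the day was, presumably, gravitational contraction: the Kelvin-Helmholtz argument that the Sun’s entire energy reservoir is roughly its gravitational binding energy. A minimal back-of-envelope sketch of that estimate, using modern values for the constants:

```python
# Kelvin-Helmholtz estimate: if the Sun shone only by gravitational
# contraction, the available energy is roughly G*M^2/R; dividing by the
# present luminosity gives the time that reservoir could last.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
L_sun = 3.828e26     # solar luminosity, W

E_grav = G * M_sun**2 / R_sun     # available energy, joules
t_seconds = E_grav / L_sun
t_years = t_seconds / 3.156e7     # seconds per year

print(f"about {t_years/1e6:.0f} million years")  # on the order of 30 million
```

Well under the hundred million years demanded by the geologists, which is exactly the conflict the paragraph above describes.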

Fast forward to 1930. Tremendous progress. The conundrum of light had been solved: new ways of thinking had dissolved the contradictions between the wavelike and particle-like aspects of light. Radioactivity explained why the Earth was still as hot as it is. But the bigger riddle, that of the age of the Sun, remained. One wonders how the scientists of the time lived with the contrast between being able to explain everything and being clueless about this glaring exception.

After 1930 things started moving fast. The neutron was discovered. The first accurate measurements of the masses of the proton and the neutron showed that the helium nucleus had a mass less than that of four hydrogen atoms; the difference, a fraction of a percent, had now become measurable. Plug the difference into E=mc^2, remember that the Sun consists mostly of hydrogen and helium, and conclude that the Sun could have been fueled by hydrogen fusion for billions of years past, with billions more to come. On the microscale, fusion and the energy released by it were soon confirmed experimentally by Rutherford, Oliphant, and Harteck [1, page 247].
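The arithmetic sketched in the paragraph above can be worked out in a few lines. This uses modern values for the masses and constants; the assumption that the Sun can burn about 10% of its mass as hydrogen is a standard back-of-envelope figure, not something stated in the text:

```python
# The mass lost when four hydrogen atoms fuse into one helium atom,
# converted to energy via E = mc^2.
m_H = 1.007825       # hydrogen-1 atomic mass, u
m_He = 4.002602      # helium-4 atomic mass, u
u_to_MeV = 931.494   # energy equivalent of 1 u, MeV

defect = 4 * m_H - m_He          # mass lost per fusion, u
fraction = defect / (4 * m_H)    # ~0.7% of the input mass
energy_MeV = defect * u_to_MeV   # ~26.7 MeV per helium nucleus produced

# Scaling up: if the Sun converts ~10% of its mass this way (illustrative
# assumption), how long can it shine at its present luminosity?
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
L_sun = 3.828e26     # solar luminosity, W

E_total = 0.1 * M_sun * fraction * c**2   # total fusion energy, joules
t_years = E_total / L_sun / 3.156e7       # lifetime in years

print(f"{fraction:.4f} of the mass is converted, {energy_MeV:.1f} MeV per He")
print(f"roughly {t_years/1e9:.1f} billion years of sunshine")
```

The answer comes out around ten billion years, resolving the age-of-the-Sun puzzle at a stroke.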

By 1939 the rough outline was complete: enormous quantities of energy resulted from the conversion of a small amount of mass, by nuclear fission in the heaviest elements and by fusion of the lightest. It was also known that, of the two effects, fusion released far more energy per unit of mass. Until 1939 nuclear energy release had only been observed in natural phenomena such as the radioactive decay of radium and of uranium-235. In that year Hahn and Strassmann in Berlin published an experiment in which fission of uranium was artificially induced. Note that this was predated by the fusion experiment of Rutherford, Oliphant, and Harteck. In 1941 Tokutaro Hagiwara put two and two together in a lecture in Kyoto (“Super-Explosive U235”), speculating that a fission chain reaction in U235 might initiate fusion in hydrogen [1, page 247, see end note as well].

When the US joined the war against Germany and Japan, the possibility of using nuclear energy for military purposes was very much on physicists’ minds. For some years Leo Szilard, a Hungarian émigré physicist well connected both to physicists and to persons in power, had been waging a campaign to investigate the possibility of a fission chain reaction. In 1942 America’s most prominent home-grown physicist, Robert Oppenheimer, convened the best brains in the US to collate what was known and to evaluate its possible military consequences.

At the top of Oppenheimer’s wishlist was Hans Bethe. Also invited to the Berkeley Summer Study was Edward Teller, another Hungarian émigré physicist. The two traveled by train from Chicago. With a compartment to themselves, they used the opportunity to speak freely, conducting a mini pre-seminar of their own [1, page 417]. Independently of Hagiwara, they arrived at the same idea of a fission bomb setting off a fusion chain reaction in hydrogen.

At the Berkeley Summer Study the consensus soon formed that fission bombs would be feasible for the US to build. Further progress would have to rely on experiments giving accurate data and on elaborate calculations requiring clerical help by the dozen. All this would have to wait for an adequately organized project. The fusion side, by contrast, was theoretically wide open. To gain insight the Summer Study was ideal: all that was needed was brainstorming by a critical mass of top physicists. As a result the agenda was dominated by the Super, as the fusion bomb started to be called.

When the Los Alamos laboratory started work in April 1943 the situation was reversed: every effort had to go into the fission bomb. It was far from certain that anything could be achieved in time to influence the outcome of the war. Given the Hahn and Strassmann experiment of 1939, the possibility seemed real that Germany was ahead. Development of a fission bomb required design, detailed physics investigations, and solving numerous engineering difficulties. The great physicists (Fermi, Bethe, Weisskopf, Wigner) miraculously assembled in the Manhattan project collaborated on the detailed work, helped by a bevy of as yet unknown scientists.

The miracle wrought by Oppenheimer was that in a matter of months a scientific research centre was assembled with a density of intellect not rivalled before or since. For someone like Stanislaw Ulam it was like returning to the heaven he had left in Lwów, Poland, where the Scottish Café had been something similar within the microcosm of pure mathematics. Though Ulam had been in Princeton and Harvard after that, it was only in Los Alamos that he experienced an equal intensity. And in Los Alamos there was the added dimension of a challenge that oriented everybody toward a common purpose, with scientific stars willing to serve in subordinate roles. Dyson compares this to the dream described by Jules Verne in his book “De la terre à la lune” (1865) [2, page 204].

Teller was an exception to the general pattern of collaboration. He arrived at the beginning of the project with his wife, baby son, and a Steinway grand piano. He had worked on nuclear energy ever since the 1939 confirmation of nuclear fission. He was a prime choice for the planning committee for the theoretical division in Los Alamos in April 1943. These antecedents made Teller senior to Bethe in the Manhattan project, though not in age and not in scientific achievement. Yet Bethe was appointed head of the theoretical division, a decision difficult to accept for Teller [1, pages 538-539].

Though Teller initially contributed to fission development, he withdrew in the spring of 1944 and threatened to leave Los Alamos [1, end of page 545]. This was a setback the project could ill afford. Oppenheimer stepped in to minimize the delay. To keep Teller around, Oppenheimer encouraged him to continue work on the Super, a remarkable concession in a laboratory working overtime six days a week trying to finish the fission bomb in time to influence the outcome of the war [1, end of page 546]. Teller was further rewarded by a promise of a weekly meeting with Oppenheimer for an hour of freewheeling discussion.

Yet there was little progress on the Super beyond the sketchy beginning of the 1942 Summer Study. Although better data had become available on the temperatures to be expected from the fission ignition, it was as doubtful as before whether sustained fusion would result in the surrounding deuterium (an isotope of hydrogen).

John von Neumann, who was involved as a mathematical consultant in several subprojects, was skeptical about the Super. More than others, he thought it important to validate, or refute, Teller’s proposal for the Super. No conceivable experiment would help; computation was the only way out. The required computations were on a scale far beyond the facilities at Los Alamos. This was one of the problems on von Neumann’s mind when he was accosted by Captain Herman Goldstine on a railway platform in Aberdeen, Maryland.

This chance encounter in the summer of 1944 led to von Neumann’s participation in the design of the EDVAC, the intended successor of the ENIAC, the first electronic digital computer. This involvement led him to invent the stored-program computer, a design since referred to as the “von Neumann machine”. In the next part of this series, “The Revolution That Almost Didn’t Happen”, I will address two controversies: whether the invention can be attributed to von Neumann to the exclusion of those from whom he learned the requisite electronics, and whether the decades-long dominance of the von Neumann machine has been beneficial to the development of the field.

References

[1] Richard Rhodes, The Making of the Atomic Bomb. Simon and Schuster, 1986.

[2] George Dyson, Turing’s Cathedral. Pantheon Books, 2012.
