Archive for December, 2010

Unprotected Numerical Computation and Its Safe Alternatives

December 11, 2010

Conventional numerical computation is marvelously cheap, and it seems to work most of the time. The computing profession has long neglected to develop methods appropriate for those situations where cheapness is not an overriding concern and where “seems to work most of the time” is not good enough.

Many textbooks in numerical analysis start by showing that in certain situations the small errors caused by the use of floating-point arithmetic can rapidly grow and render the result of the computation meaningless. Such textbooks describe conditions under which mathematical algorithms can safely be transplanted to floating-point hardware: that the problem be well-conditioned and that the algorithm be stable. In the early days computing centres employed numerical analysts to make sure that none of the scarce processor cycles got wasted on meaningless results caused by ill-conditioned problems or unstable algorithms.
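The error growth described above can be made concrete with the textbook example of an unstable recurrence (a standard illustration, not drawn from the post itself). The integrals E_n = ∫₀¹ xⁿ eˣ⁻¹ dx all lie between 0 and 1 and satisfy E_n = 1 − n·E_{n−1}. Running that relation forward multiplies the tiny rounding error in E_0 by n!, while running it backward from a crude guess divides the starting error at every step — the same mathematics, one unstable and one stable algorithm:

```python
import math

# Unstable forward recurrence for E_n = integral_0^1 x^n e^(x-1) dx.
# Mathematically E_n = 1 - n*E_{n-1} with E_0 = 1 - 1/e, and 0 < E_n < 1.
def forward(n):
    e = 1.0 - 1.0 / math.e          # E_0, already carrying ~1e-16 of rounding error
    for k in range(1, n + 1):
        e = 1.0 - k * e             # the error is multiplied by k at each step
    return e

# Stable backward recurrence: the same relation run in reverse,
# E_{n-1} = (1 - E_n) / n, so the starting error shrinks by 1/n per step.
def backward(n, start=25):
    e = 0.0                         # crude guess for E_start; its error is below 1
    for k in range(start, n, -1):
        e = (1.0 - e) / k
    return e

print(forward(20))    # wildly wrong: the 1e-16 error has been multiplied by 20!
print(backward(20))   # accurate to many digits: about 0.045545
```

The forward result is not merely imprecise; it leaves the interval (0, 1) entirely, which is exactly the kind of meaningless answer an unstable algorithm produces from a perfectly well-posed problem.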

Much has changed since then. Thousands of scientists and engineers have gigaflops of computing power on their desks, and there is not a numerical analyst in sight. Although an ever larger share of problem-solving is entrusted to the computer, the same fragile methodology is followed. And still the only justification is that conventional numerical computation seems to work most of the time.


A Standard of Excellence

December 8, 2010

When an industry-wide standard gets established, it is often lamented as a regression to the mediocre, if not the outright bad. VHS versus Sony's Betamax comes to mind. This time, however, I come to praise a standard, not to bury one. I am going to talk about a success from that era: IEEE Standard 754 for binary floating-point arithmetic.
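A few of the standard's guarantees can be observed directly in any conforming implementation — here sketched in Python, whose floats are IEEE 754 binary64 (this illustration is mine, not the post's):

```python
# Python floats are IEEE 754 binary64, so some of the standard's
# guarantees can be demonstrated in a few lines.

# Special values: invalid operations yield NaN, which propagates through
# arithmetic instead of halting the computation.
inf = float("inf")
nan = inf - inf                   # an invalid operation produces NaN
print(nan != nan)                 # NaN compares unequal even to itself -> True

# Correct rounding: each operation returns the representable number
# closest to the exact result.  Sums of small powers of two are exact...
print(0.5 + 0.25 == 0.75)         # True
# ...but 0.1 has no finite binary expansion, so rounding shows through.
print(0.1 + 0.2 == 0.3)           # False
```

That last line is not a defect of the standard but a consequence of representing decimal fractions in binary; the standard's contribution is that every conforming machine rounds them the same way.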