Conventional numerical computation is marvelously cheap, and it seems to work most of the time. The computing profession has long neglected to develop methods appropriate for those situations where cheapness is not an overriding concern and where “seems to work most of the time” is not good enough.
Many textbooks in numerical analysis start by showing that in certain situations the small errors caused by the use of floating-point arithmetic can rapidly grow and render the result of the computation meaningless. Such textbooks describe the conditions under which mathematical algorithms can safely be transplanted to floating-point hardware: that the problem be well-conditioned and that the algorithm be stable. In the early days, computing centres employed numerical analysts to make sure that none of the scarce processor cycles were wasted on meaningless results caused by ill-conditioned problems or unstable algorithms.
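To make the textbook point concrete, consider one standard example of an unstable algorithm (the example, and the choice of Python to express it, are mine, added here for illustration). The integrals I_n = ∫₀¹ xⁿ e^(x−1) dx satisfy the exact recurrence I_n = 1 − n·I_{n−1} with I_0 = 1 − 1/e, and mathematically 0 < I_n < 1 for every n. Yet each step of the recurrence multiplies the inherited error by n, so the rounding error of about 10⁻¹⁶ in the stored I_0 grows like n!:

    import math

    # I_0 = 1 - 1/e, correct to roughly one unit in the last place.
    i = 1.0 - math.exp(-1.0)
    for n in range(1, 21):
        # Exact in real arithmetic; explosively unstable in floats.
        i = 1.0 - n * i
        print(f"I_{n:2d} = {i: .6e}")

The true value of I_20 is about 0.046, but the computed value comes out in the hundreds in absolute value: the initial error has been multiplied by 20!, roughly 2.4 × 10¹⁸. Nothing in the output warns the user that this has happened, which is precisely why the stability of the algorithm had to be established before the computation was trusted.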
Much has changed since then. Thousands of scientists and engineers have gigaflops of computing power on their desks, and there is not a numerical analyst in sight. Although an ever larger share of problem-solving is entrusted to the computer, the same fragile methodology is followed. And still the only justification is that conventional numerical computation seems to work most of the time.