ABSTRACT

The mathematical operations plus, minus, times, divide, square root, and so on have computer versions that physicists and engineers use to imitate mathematics numerically; but there is no numerical function that takes a function $u(x,t)$ as input and produces, say, the first partial derivative $\partial u/\partial t$ or the second derivative $\partial^2 u/\partial t^2$ as output. Derivatives are approximated with difference quotients, like

$$\frac{\partial u}{\partial t} \approx \frac{u(x,t+h)-u(x,t)}{h} \qquad \text{and} \qquad \frac{\partial^2 u}{\partial t^2} \approx \frac{u(x,t+h)-2\,u(x,t)+u(x,t-h)}{h^2}.$$

The derivatives are defined as the limit as h becomes infinitesimal, so it is tempting to compute them with some small nonzero value of h; that is, to use finite differences instead of calculus. The two expressions above then look like something you can actually compute, but it is just about the worst thing you can ask a computer to do, numerically. If u is computed with floats, the numerator is the difference of two nearly equal rounded numbers, and their significant digits cancel when subtracted; the smaller h is, the larger the rounding error relative to the result. If you try a larger value of h to reduce the rounding error, you increase the sampling error, since you are then sampling the function u over a range instead of at a single point, so you can't win. If unums are used for the calculation, at least the loss of information from using difference approximations becomes visible to the programmer; with floats, the programmer generally has no idea just how inaccurate the finite difference is. There is a different approach, one that bypasses calculus and can produce rigorous bounds on the behavior of physical systems.
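To make the cancellation concrete, here is a minimal sketch (not from the original text; the function name `forward_diff` and the choice of test function u = sin are illustrative) that evaluates the forward difference quotient above in double-precision floats for shrinking h and compares it against the known derivative:

```python
import math

def forward_diff(u, t, h):
    """Forward-difference approximation of du/dt at t."""
    return (u(t + h) - u(t)) / h

u = math.sin            # test function with a known derivative
exact = math.cos(1.0)   # u'(1.0)

# As h shrinks, the sampling (truncation) error falls, but the
# rounding error from cancellation in the numerator grows.
for k in range(1, 17):
    h = 10.0 ** -k
    err = abs(forward_diff(u, 1.0, h) - exact)
    print(f"h = 1e-{k:02d}   error = {err:.3e}")
```

In IEEE double precision the error typically bottoms out near h ≈ 1e-8 (about the square root of machine epsilon) and then grows again as h shrinks further; no choice of h recovers anything close to full precision, which is the "you can't win" tradeoff described above.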
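The claim that unums make the information loss visible can be mimicked in a rough way, since unum arithmetic is not part of standard Python. The following hand-rolled stand-in (the names `diff_quotient_bounds` and `ulp_rel` are hypothetical, not from the source) propagates a one-ulp relative uncertainty on each evaluation of u through the quotient and reports worst-case bounds:

```python
import math

def diff_quotient_bounds(u, t, h, ulp_rel=2**-53):
    """Interval [lo, hi] for (u(t+h) - u(t)) / h when each computed
    value of u carries a relative rounding uncertainty of ulp_rel."""
    a, b = u(t), u(t + h)
    ea, eb = abs(a) * ulp_rel, abs(b) * ulp_rel   # rounding half-widths
    lo = ((b - eb) - (a + ea)) / h                # worst-case low end
    hi = ((b + eb) - (a - ea)) / h                # worst-case high end
    return lo, hi

for k in (4, 8, 12, 16):
    h = 10.0 ** -k
    lo, hi = diff_quotient_bounds(math.sin, 1.0, h)
    print(f"h = 1e-{k:02d}   bound width = {hi - lo:.3e}")
```

For h near 1e-16 the reported bound becomes wider than the derivative itself, which is exactly the warning a plain float calculation never gives.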