ABSTRACT

This chapter introduces the types of errors that are commonly encountered and how these errors may influence simulation results. It shows that the number of significant digits determines the accuracy of a number. Computers are limited in how they can express numbers and represent them principally in two ways: as integers and as floating-point numbers. Because of finite word length, computers round off or truncate numbers, which may introduce error. Numerical operations may cause a loss of significant digits, resulting in computational error. Errors in the input data may also propagate to the output data, and repeated operations may amplify the error. The formulation of the problem and the method used to solve it determine whether the problem is well conditioned and whether the method is stable.
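Two of the effects mentioned above, round-off due to finite word length and loss of significant digits, can be sketched briefly in Python (chosen here for illustration; the specific expressions are not from the chapter). The first line shows that 0.1 has no exact binary floating-point representation; the second part shows catastrophic cancellation when two nearly equal numbers are subtracted, and an algebraically equivalent formula that avoids it:

```python
import math

# Round-off: 0.1 and 0.2 are stored inexactly in binary floating point,
# so their sum is not exactly equal to the stored value of 0.3.
print(0.1 + 0.2 == 0.3)  # → False

# Loss of significant digits: for small x, cos(x) is so close to 1 that
# the subtraction 1 - cos(x) cancels almost all significant digits.
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2               # suffers cancellation
stable = 0.5 * (math.sin(x / 2) / (x / 2))**2    # same quantity, no subtraction

print(naive, stable)  # stable is close to the true limit 0.5; naive is not
```

Both expressions compute the same mathematical quantity, whose limit as x approaches 0 is 0.5; the rewritten form is accurate because it avoids subtracting two nearly equal numbers.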