ABSTRACT

In a computer every real number is represented by a sequence of bits, often 64 bits (8 bytes). (1 byte is always 8 bits.) One bit is for the sign, and the distribution of bits between mantissa and exponent can be platform dependent. Almost universally, however, a 32-bit number has 8 bits for the exponent and 23 bits for the mantissa (as illustrated in Figure 3.1). In the decimal system this corresponds to a maximum/minimum exponent of ±38 and approximately 7 decimal digits. The relation between the number of binary bits and decimal accuracy can be worked out as follows. The 23 bits of the mantissa provide 2^23 ≈ 10^7 different numbers, and therefore about 7 significant digits. The exponent can be negative or positive, and half of the 2^8 = 256 exponent values can be used for positive exponents: 2^(2^7) = 2^128 ≈ 10^38.5, so the largest number has a decimal exponent of +38. The full 8 bits, 2^(2^8) = 2^256 ≈ 10^77.1, can represent decimal exponents from −38 to +38, which are 77 in number.
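The sign/exponent/mantissa split described above can be inspected directly. A minimal Python sketch, using only the standard library's struct module (the helper name float32_bits is ours, not from the text):

```python
import struct

def float32_bits(x):
    """Return the IEEE 754 single-precision pattern of x as
    (sign, exponent, mantissa) bit strings: 1 + 8 + 23 = 32 bits."""
    # Pack x as a big-endian 32-bit float, then reread the same
    # 4 bytes as an unsigned integer to get at the raw bit pattern.
    [i] = struct.unpack('>I', struct.pack('>f', x))
    bits = f'{i:032b}'
    return bits[0], bits[1:9], bits[9:]

# 1.0 is stored as (+1) * 1.0 * 2^0; the exponent field is biased
# by 127, so a stored exponent of 0 appears as 01111111.
sign, exponent, mantissa = float32_bits(1.0)
print(sign, exponent, mantissa)
```

Running this for a few values shows the 8-bit exponent field and the 23-bit mantissa field discussed above; the bias of 127 is why the representable decimal exponents run from roughly −38 to +38.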