ABSTRACT

Critical to the understanding of almost all cryptographic methods is the study of what are called “modular arithmetic systems.”

Any cryptographic method must ensure that the encryption step can be reversed: when we perform the decryption, we must arrive back at the original, or plaintext, message. An immediate consequence is that the information held in computer memory cannot be interpreted as floating-point numbers or, in mathematical terminology, as real or rational numbers. The problem is that any computation on real or rational numbers in computer memory carries an uncertainty in the lowest-order bit or bits of the result. Therefore, in general, if we were to manipulate real or rational numbers in a cryptographic computation, the fact that the lowest-order bits are indeterminate could mean that no true or exact inversion could be performed.
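The point above can be illustrated with a toy "shift" round trip (purely a demonstration, not a real cryptosystem): the same add-a-key, subtract-a-key inversion that fails over floating-point numbers succeeds exactly over the integers modulo 26.

```python
# Toy illustration (not a real cryptosystem): "encrypt" by adding a key,
# "decrypt" by subtracting it, and check whether we recover the plaintext.

# Over floating-point numbers the lowest-order bits are inexact,
# so the round trip can fail.
plaintext = 0.1
key = 0.2
ciphertext = plaintext + key           # 0.30000000000000004
recovered = ciphertext - key           # 0.10000000000000003
print(recovered == plaintext)          # False: inversion is not exact

# Over the integers modulo 26 every operation is exact,
# so the round trip always succeeds.
plaintext_int = 7
key_int = 19
ciphertext_int = (plaintext_int + key_int) % 26
recovered_int = (ciphertext_int - key_int) % 26
print(recovered_int == plaintext_int)  # True: exact inversion
```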

As a consequence, virtually all cryptosystems use the natural numbers or integers as the basis for computation. Indeed, since computations in the integers might require an unlimited range, we almost always use instead a smaller, finite, and fully enumerable system derived from the integers that we generally refer to as “modular arithmetic.”