ABSTRACT

Arguably the most important goal of any scientific discipline is to develop the mathematical tools necessary for modeling and prediction. Models that accurately describe the world around us are retained; those that do not are discarded or modified. From a modeling perspective, we typically divide the world into two types of phenomena: those that are deterministic and those that are probabilistic. Deterministic models are those that can be used to predict, with near certainty, the outcome of a given experiment. Using Newton’s laws of motion, for example, we can predict the exact time it will take for a ball of lead to fall to the ground when released from a height of $d$ meters. Newton’s laws tell us that the ball, when released from rest, will accelerate at $9.81\,\mathrm{m/s^2}$, so that the time to reach the ground is found through basic calculus to be

$$\sqrt{\frac{2d}{9.81}}$$

seconds.
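
For concreteness, here is a brief sketch of that calculus step, under the usual assumptions of constant acceleration and negligible air resistance; the symbols $g$ and $t$, introduced here, denote the acceleration and the fall time:

$$d = \tfrac{1}{2} g t^2 \quad\Longrightarrow\quad t = \sqrt{\frac{2d}{g}} = \sqrt{\frac{2d}{9.81}}.$$

For example, a ball released from rest at $d = 10$ meters reaches the ground after roughly $\sqrt{20/9.81} \approx 1.43$ seconds.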

On the other hand, there are certain phenomena that defy a deterministic description. We cannot predict whether the flipping of a fair coin, for example, will result in a “head” or a “tail.”1 Instead we use a probabilistic model to describe the outcome of this experiment. The goal of the probabilistic model is not to predict a specific outcome, but rather to predict the likelihood, or probability, of a given outcome. Understanding probabilistic modeling is essential to understanding entropy. In fact, as we will show, entropy relies on a probabilistic description of an event. We will also show that entropy is itself a predictive tool that can be used to forecast limits on the rate of communication between two devices, predict the amount of compression possible in a piece of data, or even assess the degree of statistical dependence among two or more experimental observations.
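
As a concrete illustration (not part of the original text), the short Python sketch below builds a probabilistic model of the coin-flip experiment by estimating outcome probabilities from repeated trials, and then evaluates the entropy of that model, assuming the standard Shannon definition $H = -\sum_x p(x)\log_2 p(x)$, which the text develops later; all function names here are illustrative only.

import random
import math

def simulate_flips(n, p_heads=0.5, seed=0):
    """Simulate n flips of a coin that lands heads with probability p_heads."""
    rng = random.Random(seed)
    return ["head" if rng.random() < p_heads else "tail" for _ in range(n)]

def estimate_probabilities(outcomes):
    """Probabilistic model: relative frequency of each observed outcome."""
    counts = {}
    for o in outcomes:
        counts[o] = counts.get(o, 0) + 1
    total = len(outcomes)
    return {o: c / total for o, c in counts.items()}

def entropy_bits(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

flips = simulate_flips(10_000)
model = estimate_probabilities(flips)   # roughly {'head': 0.5, 'tail': 0.5}
print(model)
print(entropy_bits(model))              # close to 1 bit for a fair coin

For a fair coin the estimate comes out near one bit per flip, the maximum possible for a binary outcome; a heavily biased coin would score much lower, which previews the sense in which entropy quantifies how unpredictable an outcome is.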