ABSTRACT

The sample standard deviation measures the dispersion of the sample data around the sample mean: a small standard deviation indicates that the data are tightly clustered about the mean, while a larger standard deviation indicates greater dispersion. The same is true of the range, the difference between the largest and smallest data values. However, the standard deviation conveys more information about the data than the range. The standard deviation permits the construction of intervals that indicate what proportion of the data lies within them. In a normal distribution, approximately 68% of the data fall within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three standard deviations. For example, if 100 students took a mathematics test with a mean of 75 and a standard deviation of 5, then about 68% of the scores would fall between 70 and 80, assuming a normal distribution. In contrast, given highest and lowest test scores of 90 and 50 respectively, the range of 40 indicates only that there is a 40-point difference between the highest and lowest test scores, i.e., 90 - 50 = 40.
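
These quantities are straightforward to compute. A minimal Python sketch follows, using a hypothetical list of test scores (illustrative only, not data from the study) to show the sample mean, sample standard deviation, the one-standard-deviation interval, and the range:

```python
import statistics

# Hypothetical sample of test scores (illustrative only; not actual study data)
scores = [75, 80, 70, 72, 78, 85, 65, 74, 76, 73]

mean = statistics.mean(scores)          # sample mean
sd = statistics.stdev(scores)           # sample standard deviation (n - 1 denominator)
data_range = max(scores) - min(scores)  # range: largest value minus smallest value

# Interval expected to contain roughly 68% of the data under a normal distribution
one_sd_interval = (mean - sd, mean + sd)

print(f"mean = {mean:.2f}, standard deviation = {sd:.2f}, range = {data_range}")
print(f"about 68% of scores expected between {one_sd_interval[0]:.2f} and {one_sd_interval[1]:.2f}")
```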