The Princeton Robustness Study
During the academic year 1970–1971, the faculty and graduate students of the Princeton University Statistics Department ran one of the most extensive Monte Carlo studies ever undertaken. The Princeton Robustness Study was a landmark examination of the problem of blunders. The reference gives a detailed description of the mathematical methods that resulted from the study. John Tukey, at Princeton, thought differently. He wanted to define robustness in terms of specific departures from the assumptions, and he wanted to be able to measure how robust different procedures are, so he could compare several and find the "best." The error distribution is assumed to have zero mean, to be symmetric, and to have a small variance. One discovery of the Princeton Robustness Study was that the average of a set of numbers is a very poor estimate of the center of the distribution when contamination is present.
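The effect described above can be illustrated with a small Monte Carlo sketch of my own (not taken from the study itself): samples are drawn from a symmetric, zero-mean error distribution in which a small fraction of observations come from a much wider distribution, and the sample-to-sample variability of the mean is compared with that of the median. The contamination fraction and scale below are illustrative assumptions, not values from the Princeton study.

```python
import random
import statistics

def contaminated_sample(n, frac=0.1, scale=10.0, rng=random):
    # Contamination model: most errors come from a narrow symmetric
    # distribution N(0, 1); a small fraction come from a much wider
    # N(0, scale^2). Both fractions are illustrative assumptions.
    return [rng.gauss(0, scale if rng.random() < frac else 1.0)
            for _ in range(n)]

def spread_of_estimator(estimator, trials=500, n=100, seed=0):
    # Monte Carlo estimate of how much an estimator of the center
    # varies from sample to sample under contamination.
    rng = random.Random(seed)
    estimates = [estimator(contaminated_sample(n, rng=rng))
                 for _ in range(trials)]
    return statistics.stdev(estimates)

mean_spread = spread_of_estimator(statistics.mean)
median_spread = spread_of_estimator(statistics.median)
print(f"spread of the mean:   {mean_spread:.3f}")
print(f"spread of the median: {median_spread:.3f}")
```

Under these assumptions the median fluctuates noticeably less than the mean across repeated samples, consistent with the study's finding that the average is a poor estimate of the center when contamination is present.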