For centuries, clinicians have evaluated and treated patients using information drawn from textbooks, formal training, consultation, and personal experience. Traditional decision making involved defining a problem, consulting a database (usually personal experience), and formulating a therapeutic plan after discussion with the patient. An inherent part of this process was accepting a certain degree of uncertainty about the plan, and so it remained until medical journals became widely available. Guidance to reduce this uncertainty was then sought in the medical literature. However, reference to a bibliographic database was complicated by difficulties in identifying relevant published studies and by the discovery that many studies were poorly designed, with recommendations based on little or no valid data. Despite the increasing number and focus of publications, sound clinical data existed for only 50% of clinical practice. Of this evidence, 40% demonstrated the effectiveness of a particular modality, while 60% suggested a lack of value of current clinical interventions.1 Such was clinical decision making for most of the twentieth century.