ABSTRACT

During the 1990s, the most common task for a professional program evaluator in higher education was to document and report "program fidelity": whether the services actually delivered matched those in the approved proposal, and whether program activities complied with legislation and regulations. Funding agencies and program administrators focused on questions about resource use and program implementation: Were program resources used as intended? Was the program implemented as proposed? How many products or services were delivered? How many clients were served? Were clients satisfied with the deliverables or services? These questions, more research-oriented than evaluative in nature, directed evaluation efforts in that era.