ABSTRACT

This paper addresses the determination of statistically desirable response rates in student surveys, with emphasis on assessing the effect of underlying variability in the student evaluation of teaching (SET). We discuss factors affecting the determination of adequate response rates and highlight challenges caused by non-response and lack of randomization. Estimates of underlying variability were obtained from four years of online evaluations at the University of British Columbia (UBC). Simulations were used to examine the effect of underlying variability on desirable response rates, and the UBC response rates were compared to those reported in the literature. Results indicate that small differences in underlying variability may not affect the desired rates. We present acceptable response rates for a range of variability scenarios, class sizes, confidence levels, and margins of error. The stability of the estimates observed at UBC over the four-year period indicates that valid model-based inferences about SET can be made.
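As a rough illustration of how response-rate targets relate to class size, variability, confidence level, and margin of error, the following sketch applies the standard sample-size formula for a mean with a finite population correction. This is not the paper's simulation method; the function name and the sigma, margin, and class-size values are hypothetical assumptions for illustration only.

```python
import math
from statistics import NormalDist

def required_response_rate(class_size, sigma, margin, confidence=0.95):
    """Minimum fraction of a class of `class_size` students that must
    respond to estimate the mean SET score within `margin` at the given
    confidence level, assuming simple random sampling (no non-response
    bias) and underlying standard deviation `sigma`."""
    # z-value for the two-sided confidence level
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # infinite-population sample size
    n0 = (z * sigma / margin) ** 2
    # finite population correction for a class of `class_size`
    n = n0 / (1 + (n0 - 1) / class_size)
    return min(1.0, math.ceil(n) / class_size)

# Hypothetical example: class of 50, sigma = 1.0 on a 5-point scale,
# margin of error 0.3 at 95% confidence
rate = required_response_rate(50, 1.0, 0.3)
```

Under these assumptions, smaller classes require a larger fraction of responses for the same precision, which is one reason a single response-rate threshold is hard to justify across class sizes.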