ABSTRACT

Increasing demands and advancing technologies have fostered a surge in language proficiency assessments that offer a variety of efficiencies, including increased access through online at-home testing, decreased cost through automated item development and scoring, and innovative test designs through new task types and measurement approaches. While such improvements in efficiency are timely, there is concern that gains in testing efficiency may come at the cost of test security and the validity of interpretations and actions based on test scores, and that efficient test and task designs may have unintended consequences for language teaching and learning. In this chapter we describe one approach to enhancing efficiency while maintaining the meaningfulness of assessment: the development of the TOEFL® Essentials™ test. We outline a hybrid approach to construct coverage and measurement efficiency, illustrate how validity challenges were resolved through task design and multistage adaptive testing, and address issues of test security through remote proctoring and other measures. We conclude by pointing to key avenues for validity research on technology-mediated language assessments.