ABSTRACT

In many low-stakes contexts, it is often more practical to trial items for computer-based tests in pen-and-paper form. Nevertheless, moving across test modes has implications for test validity. In this chapter, we describe the programme evaluation of a suite of language placement tests developed at a large Australian university. Pen-and-paper trials were used, while the final placement procedure was administered online, automatically scored, and integrated with the local institutional enrolment systems.

This evaluation project was guided by the argument-based approach to validation. The chapter focuses on two related aspects of the evaluation inference that could not be fully addressed in the pen-and-paper trials: the assumptions that (a) technical problems during test taking would occur at an acceptably low level and (b) instructions and tasks would be clear to all test takers (i.e., that test administration procedures would be appropriate). The problems identified, and the solutions implemented or recommended after evaluation of the test procedures, have implications for technology-mediated language assessment more broadly.