ABSTRACT

Automated scoring (AS) can be broadly conceived as the use of computers to convert students' performances on educational tasks into characterizations of the quality of those performances. Scoring any kind of performance, whether by human raters or by computer, rests on the assumption that performance on a task can be decomposed into a set of targeted skills that are brought to bear in some combination during task completion. AS gives organizations a cost-effective way to score performances on complex tasks quickly and reliably, supporting inferences about construct-relevant complex skill sets. Indeed, a number of high-stakes assessments have adopted AS, which enables the introduction of more advanced performance tasks without requiring as many human raters as in the past. The growth of AS for complex tasks necessitates building a common understanding across the multiple disciplines involved.