ABSTRACT

With the increased use of computer technology in K–12 education, many state education agencies in the United States have begun to develop or adopt computer-based English language proficiency (ELP) assessments for English learners (ELs). Considering the heterogeneous characteristics of the EL population, it is crucial to ensure that the use of technology does not inadvertently introduce construct-irrelevant variance in assessing students’ ELP. This chapter reports on a small-scale usability study conducted during an early stage of developing ELPA21’s computer-based ELP assessment. ELPA21 items and tasks were designed, as dictated by the targeted constructs, to take advantage of the computer’s capability to elicit student responses in a manner that more closely approximated language use in the classroom than a paper-based assessment could. Findings indicate that, while students appeared to be engaged in the tasks, some students had difficulty negotiating the tasks when first presented with them, with greater difficulty at lower grade levels. However, students improved quickly once they had hands-on experience with the task features. This chapter illustrates the collaborative efforts undertaken by test developers and researchers during the development of a computer-based ELP assessment, with a specific focus on gathering validity evidence based on students’ response processes.