ABSTRACT

As computer-based delivery becomes widespread in large-scale writing assessment, it is important to understand whether there is any scoring bias in favor of or against typed essays compared with handwritten essays. This chapter therefore investigates the effect of mode of scoring on raters’ scoring behavior. Eighty-two handwritten argumentative essays, written by adult ESL test-takers on a reading-to-write task in a post-admission English proficiency exam, were transcribed into typed form. The handwritten and typed versions of the essays were distributed to six raters in a counterbalanced design. Scores on an analytic rubric, rater severity, rater consistency, scoring domain difficulty, and rater bias were examined across handwritten and typed essays using FACETS (Linacre, 2006). Results show that the mode of scoring affected rater severity, suggesting that the mode in which essays are presented can influence rater behavior. Implications for essay rater training and monitoring in online writing testing are discussed.