ABSTRACT

Much of the published work on automated scoring of writing has focused on writing instruction and assessment in K–16 education in the United States, with the implicit assumption that the writers being assessed have native or native-like command of English. In this context, English language learners (ELLs) appear to be of somewhat peripheral concern. However, readers may be surprised to learn that there are more people learning and communicating in English in the world than there are native speakers (Kachru, 1997; McKay, 2002): some are immigrants or children of immigrants in the United States (U.S.) or other English-speaking countries; some intend to work or study in an English-speaking country; and some use English for professional reasons in their home countries. Given the importance of written communication for global education and business, and thus the need to teach and assess writing throughout the world, interest in reliable and efficient automated scoring systems for assessing the writing of ELLs is increasing, and the applicability of automated essay scoring (AES) systems to non-native speakers (NNSs) of English is an ever-more important concern. One might even argue that the largest market for automated scoring of English writing lies not in assessing the writing ability of native speakers, but rather that of NNSs of English.