ABSTRACT

Much of the literature on automated essay scoring (AES) frames it as an attempt to model human essay scoring, with its assignment of scores or grades based upon a rubric. This is certainly the way that Project Essay Grade was originally conceptualized (Page, 1966, 1967), and it continues to be the default approach in most AES systems, including e-rater (Burstein & Chodorow, 2010; Burstein, Chodorow, & Leacock, 2003; Chodorow & Burstein, 2004; Powers, Burstein, Chodorow, Fowles, & Kukich, 2000, 2001) and the Intelligent Essay Assessor (Landauer, Laham, & Foltz, 2003), among others (Larkey & Croft, 2003). Some of the other chapters in this volume explore the strengths and limitations of this approach and possible ways to move beyond it (e.g., the chapters by Attali, Bridgeman, Burstein, and Williams). In this chapter, however, a slightly different approach will be explored. Rather than starting with human rubrics and human scores and examining how much of the same information can be captured in an AES system, the analysis will begin with a general cognitive framework, a construct model for writing, and then work forward to consider which aspects of the construct can and should be modeled in current and future approaches to automated writing assessment.