ABSTRACT

Advances in automated writing evaluation (AWE) systems have been guided by the education community, which has identified two needs: to address construct relevance in automated essay scoring more comprehensively, and to develop capabilities that provide relevant, actionable feedback to developing writers working independently. Advances in natural language processing (NLP) have produced a more varied AWE portfolio that offers (1) greater construct relevance in automated scoring of writing quality and content and (2) individualized feedback on writing quality that supports a self-regulated writing process. In this chapter, we discuss motivations and development trends in AWE research for generating writing quality and content feedback, provide system descriptions, examine the “lift” involved in developing innovative AWE systems, and review exploratory research expected to inform system advances, including personalized learning analytics.