ABSTRACT

This book studies the use of automated writing evaluation (AWE) systems by Chinese doctoral students revising research papers for publication.

Research writing skills are essential for achieving academic status, and AWE tools can be a great companion on the journey. However, AWE tools may do a disservice if users do not stay alert to inaccurate feedback, inaccurate correction suggestions, and missed errors. The effects of accurate feedback on revision outcomes have been the focus of a number of AWE studies, but student engagement and revision outcomes in cases of inaccurate feedback and missed errors have rarely been investigated. Such investigations can yield practical advice on using automated feedback in research writing. This book provides a comprehensive evaluation of AWE tools and profiles student engagement with them across feedback of varying quality. It can empower novice scholars and enhance the effectiveness of academic writing instruction. The findings can also inform AWE system developers about possible ways of improving their systems for research paper writing.

The book will be particularly useful to students and scholars of language and linguistic studies, education, and academic English writing.