ABSTRACT

Research writing in English is a tedious and lonely task for novice research writers who are learners of English as a foreign or second language. As such, corrective feedback is of great significance to them. One increasingly important source of such feedback is automated written corrective feedback (AWCF) generated by automated writing evaluation (AWE) tools. The effectiveness of AWCF has been extensively studied, but research gaps remain. First, both system-centric and user-centric evaluations of AWE tools have focused largely on short, non-source-based essays, leaving research writing underrepresented. Second, many AWE studies have examined the effects of AWCF on revision outcomes, yet few have considered feedback accuracy. Third, student engagement with AWE remains underexplored, particularly how engagement varies with feedback quality. Fourth, randomized experimental and quasi-experimental designs support causal inferences about the relationship between AWE use and revision outcomes, yet such designs are rarely employed in AWE research. To address these gaps, we conducted an in-depth study of Chinese doctoral students' use of an AWE system to revise research papers for publication.