ABSTRACT

The global demand for English proficiency in the workforce and academia has grown steadily, and estimates suggested that by 2020 over two billion people would be using English on a regular basis at some level of proficiency. Human rating, which has traditionally been the only means of evaluating constructed spoken responses, can be costly, time consuming, and subject to factors that may negatively impact the validity of the scores, such as rater fatigue and rater bias. Automated speech scoring can be situated within the broader context of automated scoring applications in general: the idea of using automatically extracted features to score a test-taker's constructed response was first proposed by E. B. Page in the context of automated essay scoring. The primary application area that has benefited from automated speech scoring technology is the automated assessment of non-native speaking proficiency.