ABSTRACT

The coronavirus pandemic that began in early 2020 has created a heightened practical need for the remote, “at-home” testing and assessment of second and foreign languages. It may be argued that for online second language (hereinafter L2) test methods to succeed in terms of validity (Messick, 1989), the theoretical models that undergird such emergent forms of assessment must be reviewed and, where necessary, modified. This chapter therefore first considers a number of models and frameworks of spoken L2 ability (Canale & Swain, 1980; Bachman, 1990; Celce-Murcia et al., 1995; Markee, 2006) and offers a critical appraisal of their respective usefulness in light of both principles of validity and the practical constraints surrounding oral online L2 assessment. These analyses are complemented by an examination of data from empirical studies comparing face-to-face (f2f) and online modes of L2 assessment, as well as from emergent studies of oral online communication. The analyses are informed by the theoretical literature on test validity (e.g., Messick, 1989) and by research findings from the field of conversation analysis (CA) (e.g., Hall, 2021; Sacks et al., 1974; Schegloff, 1986, 2007).