ABSTRACT

Machine translation, the automatic conversion of text from one natural language to another, was one of the first non-numerical applications of the digital computer. Catford’s notion of textual equivalence was, in a way, operationalized in the data-driven approaches to machine translation that came to dominate the research agenda by the turn of the millennium. The rise of statistical machine translation coincided with a new wave of technology-oriented research in translation studies. Studies of the translation process are increasingly based on user activity data gathered through eye tracking, keystroke logging, and related techniques in contemporary translation environments that integrate both machine translation and translation memory. Several writers working within the broader domain of the critical humanities have homed in on the underlying assumptions made by advocates of machine translation, their political and cultural import, and especially their implications for how language itself is viewed.