ABSTRACT

Understood as the relative excellence of a translation product or process, quality can be measured in many ways, including automatic comparison metrics, evaluation by translators, evaluation by monolingual end users, time required for post-editing, time required for non-translation (language learning), process regulation, user satisfaction and translator satisfaction. Behind all these measures lies a series of human judgements and work-process considerations. In order to draw out those human aspects of quality, a critical appraisal is made of five of the relations involved: 1) Automatic evaluation metrics appear to measure equivalence to a start text but in effect adopt a reference translation, which is itself subject to all the hazards of translational indeterminacy; 2) Claims to parity with human translation are based on human judgements of acceptability but are often measured on the basis of isolated sentence pairs, which is not how humans communicate; 3) Criteria of usability generally do not take into account the risks involved in not knowing where error might lie; 4) Industrial regulation of production processes allows for enhanced reviewing and revision needs but does not address technologies directly; and 5) Assessments of translator satisfaction give variable results but tend not to account for the individual skills involved in the use of technologies.