ABSTRACT

This chapter juxtaposes two mainstream approaches to clarifying the concept of measuring translation quality: qualitative human-based approaches and quantitative, collective standards-oriented translation quality assurance. Their underlying rationales and processes are compared against the background of the superordinate ICS framework as tertium comparationis. The analysis shows that the anonymization process involved in establishing quantitative standards is incommensurable with individual human value judgements; their inherent uncertainty resists quantification. It is argued that modern advances in human-based, technologically supported text analysis tools (Relatra) may offer a way out of this dilemma by attributing, representing and comparing an individually established ‘sense’ to (translated) texts. By operationalizing individual sense, combining a-coherent text islands with one’s own world or text knowledge (individual hypotheses) to visibly connect all text utterances and form topic maps, we gain a new parameter for measuring translation quality at different text levels, including ‘the text as a whole’, from an inter-individual point of view. We therefore suggest leaving the individual critical value judgement of translation quality to a project team of expert jurors acting as an ‘outside authority’, working on individual ‘grading’ assignments with other expert teams, depending on the nature of the individual project assignment (exemplified by the team-teaching experience of translating “Für höchstes Gut” into E/G/F/I/R on the occasion of the Chernobyl disaster).