Quality in translation is certainly one of the most debated subjects in the field.
The strong interest it continues to generate among different groups, from
researchers and translation organisations to practitioners and translation
teachers, has made it a field of inquiry in its own right, called translation quality
assessment (TQA). This interest is motivated by both academic and
economic/professional reasons: the need to evaluate students' work and the
translation providers' need to ensure a quality product. What makes a good
translation? What are the standards that have to be met for a translation to
be excellent, good or simply acceptable? Is there a universally
acceptable model of evaluation? In the absence of any precise answer to the
above questions, one can imagine an impetuous wish to develop an
evaluation system which would solve the problem of subjectivity by providing
a standard specification of what an acceptable translation should or should
not contain. But, as it is extensively recognised (Pym 1992, Sager1989),
there is still no universally accepted evaluation model in the translation world:
"there are no generally accepted objective criteria for evaluating the quality of
both translations and interpreting performance. Even the latest national and
international standards in this area (DIN 2345 and the ISO 9000 series)
do not regulate the evaluation of translation quality in a particular context.
[…] The result is assessment chaos" (Institut für Angewandte Linguistik und
Translatologie, 1999, in Williams, 2001: 327). The reason why no single
standard will suffice is that quality is context-dependent. This is what Sager
(1989) means when he states that there are "no absolute standards of translation
quality, but only more or less appropriate translations for the purpose for
which they are intended". For many types of texts, both vocative and
informative, an important element of their appropriateness or fitness for
purpose will be extrinsic: whether they are effectively usable by their
consumers/readers in pursuit of their purpose. "Since the establishment of
such extrinsic standards of translation quality is elusive, a common tendency
is to take a narrower view, focusing on intrinsic characteristics of translated
texts and on errors committed in translation as a way of measuring quality"
(Williams, 2001: 331). Moreover, the large number of error types made the Sical model hard to
use. However, it proved popular: numerous other organisations and agencies
in Canada (the Ontario government translation services, Bell Canada) opted for a
customised version of Sical. The search for workable evaluation schemes based on error
classification has continued, with many following the Sical model and listing a number of
error categories, with or without a score attached to each. Into
that category fall schemes developed and adopted by large translation organisations such as the
ATA (American Translators Association), whose scheme includes 22 error types ranging
from terminology and register to accents and diacritical marks. The categories require the
evaluator to spot the translation errors, then to assign 1, 2, 4, 8, or 16 error points for each
error. A passage (usually 225-275 words) with a final score of 18 or higher is marked
Fail. This tendency to assign a weighting on a pre-defined scale to every translation error,
rather than simply marking it as minor or major, rapidly gained popularity, as it was
considered a step forward in the development of translation quality evaluation models.
But, while acknowledging that such a scale is more refined, we cannot implicitly accept
its objectivity. No meta-rules are given as to how an evaluator should apply these
scores, that is, what constitutes a 1-point error versus a 16-point one. By making the scheme
available on its website, ATA gives translators an idea of what types of errors
might be allowed in a translation that meets ATA standards.
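To make the arithmetic concrete, the following is a minimal sketch (in Python) of how such a point-based grading pass might be computed. The error categories and the marked errors are hypothetical illustrations; only the 1/2/4/8/16 point scale and the fail threshold of 18 come from the scheme described above.

```python
# Illustrative sketch only: a point-based grading pass in the spirit of the
# ATA scheme described above. Category names are placeholders, not the
# official rubric; the point scale and threshold follow the text.

ALLOWED_POINTS = {1, 2, 4, 8, 16}   # pre-defined severity scale
FAIL_THRESHOLD = 18                 # 18 or more points -> "Fail"

def grade_passage(errors):
    """errors: list of (category, points) pairs marked by the evaluator."""
    for category, points in errors:
        if points not in ALLOWED_POINTS:
            raise ValueError(f"{points} points ({category}) is not on the 1/2/4/8/16 scale")
    total = sum(points for _, points in errors)
    return total, "Fail" if total >= FAIL_THRESHOLD else "Pass"

# Example: an evaluator marks three errors in a 250-word passage.
marked = [("terminology", 8), ("register", 4), ("diacritical marks", 1)]
print(grade_passage(marked))  # (13, 'Pass')
```

Note that nothing in the scheme itself tells the evaluator whether a given error is worth 1 point or 16; the sketch above can only check that the chosen value lies on the scale, which is precisely the objectivity gap noted above.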
In an attempt to further minimise the time and effort spent by the evaluator in objectively
grading a translation, some companies turned to computers to handle the mathematical
operations and manual processing involved in allocating translation errors. SAE J2450 is a
quality metric developed by the SAE (Society of Automotive Engineers) in collaboration
with GM (General Motors). The aim was to establish a standard quality metric for the
automotive industry that could be used to provide an objective measure of linguistic
quality for automotive service information, regardless of language or process. The metric
became an SAE Recommended Practice in October 2001. The model is based on seven
error categories focusing on content problems that might affect the overall understanding
of the content, rather than on style (see Figure 2). These categories prompt the evaluator
or the translator to classify each error as serious or minor, with a numeric weight
attached to each category and severity level. According to its relevance in the source text
(ST), each error has a certain weight; the final score is obtained by adding up the scores
of the errors and dividing the result by the number of words in the text.
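Again as an illustrative sketch only, the normalised score just described (the sum of the weighted errors divided by the word count) might be computed as follows. The category names and weights in this example are assumptions chosen for illustration, not the standard's official tables.

```python
# Illustrative sketch only: a SAE J2450-style normalised score. The weights
# below are placeholder assumptions; the actual standard defines a specific
# serious/minor weight for each of its seven error categories.

WEIGHTS = {
    ("wrong term",  "serious"): 5, ("wrong term",  "minor"): 2,
    ("omission",    "serious"): 4, ("omission",    "minor"): 2,
    ("misspelling", "serious"): 3, ("misspelling", "minor"): 1,
}

def j2450_score(errors, word_count):
    """Sum the weighted error scores, then normalise by text length."""
    total = sum(WEIGHTS[(category, severity)] for category, severity in errors)
    return total / word_count  # lower is better

# Example: three errors found in a 400-word service document.
found = [("wrong term", "serious"), ("omission", "minor"), ("misspelling", "minor")]
print(j2450_score(found, 400))  # (5 + 2 + 1) / 400 = 0.02
```

Dividing by the word count is what lets the metric compare documents of different lengths, which is the sense in which it aims to be objective "regardless of language or process".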
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.126.3654&rep=rep1&type=pdf