Marc Has a Problem, but Third-Party QSR Evaluation Can Help
Meet Marc
Marc is a program manager in the language adaptations department of a travel company. His team is responsible for translating brochures, videos, and tourist guides from English into twelve different Central European languages.
Marc enjoys working with outside translation agencies—they collaborate well and deliver on time. Their teams of translators love working for Marc, too. His content is enjoyable, engaging, and always teaches them something new.
However, at least once a month, Marc receives complaints from local marketing agencies that use the translated content:
- “The material is useless!”
- “This is Machine Translation!”
- “It’s obvious this wasn’t created for the target audience!”
The translation providers claim that the text is just fine. Sure, it could read more fluidly, but overall, it’s acceptable work.
Marc frequently asks the marketing agencies for more specific feedback, but their teams are always too busy to report back.
How can we make Marc's life easier?
We’d like to introduce Marc to the Quality Services & Reporting (QSR) group at Translations.com, where our team provides independent review, evaluation, and auditing services. These offerings will help Marc assess the quality of localized content and make sense of the feedback from the displeased marketing agencies.
QSR EVALUATION
Our first recommendation for addressing negative feedback is to pursue a third-party evaluation. Performed by linguists and subject-matter experts, this evaluation provides an objective quality assessment of the translations. With detailed feedback and scores, managers and non-native speakers can decide how to revamp their translation processes.
The primary concern when requesting this kind of assessment is that the evaluators might concentrate too much on grammar and neglect what matters most to the target markets. Traditional quality assessment models, such as the LISA QA Model or SAE J2450, are designed mainly for highly regulated and technical content. They’re not ideal for the types of content Marc’s travel company needs. Above all, his websites have to read well, and his internal training videos should sound natural while still using industry-specific language.
The QSR Evaluation Model considers more than these traditional approaches do: it also weighs readability, relevance, and market suitability for specific content types. So, back to Marc. If his clients complain that a travel guide lacks local market adaptation, the evaluators will take notice and report it to him. The model also helps turn negative feedback into objective findings. A “machine translation” accusation might not be justified, but the linguistic evaluators can distill such general complaints into a thorough assessment of the text’s weaknesses and the ways readers might perceive it, and then turn that assessment into an improvement plan.
IDENTIFY WEAK LINKS
The second step managers can take to control quality is to perform regular sample reviews of content translated by different language service providers (LSPs). Quality evaluation using a customized model gives more insight into each provider’s performance and facilitates data-driven decisions about future business with different vendors.
The data provided by evaluation teams, like QSR, help to identify:
- Under-performing translation providers
- Problematic content types
- Trends in errors
KNOWLEDGE SHARING
People in Marc’s position don’t always have an in-house quality management team to facilitate the third step in feedback management: closing the loop. Making sure the translation vendors learn from feedback is a crucial element of the quality improvement cycle. Teams like QSR can run regular catch-up sessions with suppliers so that no knowledge is lost and no update, issue, or outcome goes unnoticed. They can also oversee language asset management, ensuring that style guides, glossaries, translation memories (TMs), and reference materials are regularly reviewed and updated.
IDENTIFY THE CAUSES
The fourth step the QSR team could implement is identifying the root causes of recurring issues and proposing corrective and preventative solutions. The root cause of an issue may lie in problems with vendors, poor maintenance of linguistic assets, faulty technology, or mismanaged expectations.
Depending on Marc’s needs and the outcome of the evaluation, the quality teams would propose preventative actions such as:
- Define new—or improve the existing—QA processes and metrics
- Decide which content types and jobs might benefit from regular evaluations
- Organize calls between local office employees and our experts
These recommendations aren’t quite the same as setting up a full Quality Management Plan, but they’re definitely a step in the right direction.