How to run Linguistic Quality Assurance hassle-free and at scale

Need translations? Try Smartcat for free!

When you are a small localization team, linguistic quality assurance (LQA) is often done on an ad hoc basis. But as you scale up your localization efforts, you need to put a more formal process in place. You might be working with dozens or hundreds of translators and reviewers all over the world. How can you keep track of who is doing a good job and who is not?

In this article, we will look at what LQA is, how it works, and why it is challenging at scale. We will also cover the multidimensional quality metrics (MQM) framework and how you can automate the process.

What is Linguistic Quality Assurance (LQA)?

LQA is the process of reviewing the source text and its translation to ensure the result meets customer standards. It checks for spelling and grammar mistakes, correct terminology use, accurate conveyance of meaning, appropriate style, cultural adaptation, correct formatting, and so on.

Importantly, LQA is a process with more than just one party involved:

  • buyers should ensure the source text is finalized before sending it to a translation vendor,

  • LSPs need to screen the source material,

  • translators must read instructions carefully and, well, translate to the best of their abilities.

Linguistic quality assurance can be broken down into three activities:

  • source text review,

  • automated check of machine-detectable errors,

  • and final check by a native speaker.

For the sake of brevity, we’ll only consider the last two.

Automated LQA

Automated linguistic quality assurance involves using software tools to detect machine-detectable errors in the translation. Smartcat, for example, can automatically check for number mismatches and formatting errors, and its built-in spellchecker quickly flags typos and other spelling mistakes.
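
To give a flavor of what such checks do under the hood, here is a minimal sketch of a number-consistency check in Python. It is purely illustrative, not Smartcat’s actual implementation, and a production QA engine would also normalize locale-specific number formats (for example, 1,000 vs. 1.000):

```python
import re

NUMBER = re.compile(r"\d+(?:[.,]\d+)?")

def check_numbers(source: str, target: str) -> list[str]:
    """Flag numbers that appear in the source segment but not in the target.
    A deliberately naive sketch: real tools normalize number formats first."""
    target_numbers = NUMBER.findall(target)
    return [
        f"Number mismatch: '{number}' is missing from the translation"
        for number in NUMBER.findall(source)
        if number not in target_numbers
    ]

print(check_numbers("Order 42 ships in 3 days", "La commande 42 est expédiée"))
# ["Number mismatch: '3' is missing from the translation"]
```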

Within a translation platform, LQA also provides a standardized way to communicate mistakes, one that is easy to follow and understandable by all.

But, however enticing automated LQA may be, it is not a substitute for manual review. Automated checks can only detect certain types of errors and are prone to false positives. Besides, no automated tool can check for meaning accuracy, stylistic flaws, or cultural appropriateness.

Human LQA

Manual LQA is the process of reviewing translations for errors that cannot be detected by automated tools. It involves a reviewer going through the text and making sure it meets all quality criteria, such as accuracy, style, cultural appropriateness, etc.

However, “quality” is by its very nature a subjective concept. What one reviewer may consider a good translation, another may deem poor. This is why it is important to have a well-defined, agreed-upon set of quality criteria, a process for recording errors, and a tool for reporting and analyzing them.

Smartcat uses the Multidimensional Quality Metrics (MQM) framework to assess translation quality, so let’s take a closer look at it.

What are multidimensional quality metrics (MQM)?

MQM is a framework for measuring and assessing translation quality, developed in the EU-funded QTLaunchPad research project and since maintained by the MQM Council. In a nutshell, it breaks quality down into several categories, namely terminology, accuracy, linguistic conventions, style, locale conventions, and design and markup.

For example, a terminology error could be using “car” instead of “automobile,” an accuracy error could be a mistranslated phrase, a style error could be language that is too formal or too colloquial (relative to the organization’s style guide), and a design and markup error could be a UI element label that ends up too long or too short.

Each category has its own weight, which is not standardized and is left up to the organization to decide. Each error is also assigned a severity level, from Minor (1) to Critical (25), which acts as a multiplier in the weighting.
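
As a simplified illustration of how these two numbers combine (the exact formula and the weights themselves are up to each organization), here is the arithmetic for a single annotated error. The category weights below are hypothetical, and Major (5) is a commonly used intermediate severity level:

```python
# Hypothetical, organization-defined category weights.
CATEGORY_WEIGHT = {"terminology": 1.0, "accuracy": 2.0, "style": 0.5}

# Severity multipliers: Minor (1) and Critical (25) as mentioned above,
# plus Major (5), a commonly used intermediate level.
SEVERITY = {"minor": 1, "major": 5, "critical": 25}

def error_penalty(category: str, severity: str) -> float:
    """Penalty points for one annotated error: category weight x severity."""
    return CATEGORY_WEIGHT[category] * SEVERITY[severity]

print(error_penalty("accuracy", "major"))  # 2.0 * 5 = 10.0 penalty points
```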

The Translation Quality Evaluation (TQE) workflow

Granted, just having metrics in place is not going to magically improve the quality of your translations. You also need a process for actually using them.

One such process is Translation Quality Evaluation (TQE), which goes hand in hand with the MQM framework.

In a nutshell, TQE is a workflow that includes the following steps:

1. Preliminary stage, where the metrics are defined and the evaluation criteria are set.

2. Annotation stage, where a human reviewer goes through the translation and marks errors according to the MQM categories.

3. Calculation stage, where a translation management system or a spreadsheet compiles a “scorecard” of all the errors and delivers it to the project manager.
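
To make the calculation stage concrete, here is a sketch of how a scorecard might be compiled from reviewer annotations, reusing the illustrative weights and severity multipliers from the previous sketch. The per-word normalization is one common convention, not the only one, and real tools (Smartcat included) have their own scoring formulas:

```python
from collections import Counter

# Same illustrative values as in the previous sketch.
CATEGORY_WEIGHT = {"terminology": 1.0, "accuracy": 2.0, "style": 0.5}
SEVERITY = {"minor": 1, "major": 5, "critical": 25}

def compile_scorecard(errors: list[dict], word_count: int) -> dict:
    """Turn a list of annotated errors into a simple scorecard:
    an overall 0-100 score plus a breakdown of errors by category."""
    total_penalty = sum(
        CATEGORY_WEIGHT[e["category"]] * SEVERITY[e["severity"]] for e in errors
    )
    # One common normalization: penalty points per word, mapped onto 0-100.
    score = max(0.0, 100.0 * (1.0 - total_penalty / word_count))
    return {
        "score": round(score, 1),
        "errors_by_category": dict(Counter(e["category"] for e in errors)),
    }

annotations = [
    {"category": "accuracy", "severity": "major"},  # 2.0 * 5 = 10.0 points
    {"category": "style", "severity": "minor"},     # 0.5 * 1 =  0.5 points
]
print(compile_scorecard(annotations, word_count=500))
# {'score': 97.9, 'errors_by_category': {'accuracy': 1, 'style': 1}}
```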

Ensure your translation quality with Smartcat

Why is MQM an industry standard?

The multidimensional quality metrics framework is an industry standard for a number of reasons. First, it provides a common language for everyone involved in the process — the project manager, the reviewers, and the translators. It also standardizes the evaluation process, with a clear set of categories and severity levels.

At the same time, MQM is flexible: organizations can decide which categories and severity levels are most important for their particular context. Finally, it mitigates the human factor in quality assurance, with a clear set of rules that reduces the likelihood of arbitrary decisions by reviewers.

How to automate large-scale linguistic quality assurance

While the MQM framework is a great way to ensure quality in large-scale localization projects, there are ways to automate the process and make it more efficient. One such way is to use a tool like Smartcat, which automates the MQM workflow on several levels:

  • You can create LQA profiles from predefined templates, which include the industry-standard MQM Core and MQM-DQF frameworks:

Create a new profile

  • If needed, you can customize the profiles to better suit your organization’s needs:

Customize the categories and set their weights, i.e., how important they are.

  • The reviewer can add comments with specific MQM categories and severity levels right from the Smartcat interface:

  • The results are compiled automatically into LQA reports, complete with the overall quality score, a breakdown of errors by category and severity level, and even references to the specific segments where the errors were found:

This way, you can get all the benefits of MQM for large-scale linguistic quality assurance without losing your mind in the process.