Linguistic Quality Assurance (LQA) is a structured process for evaluating translation quality. This article explains how LQA works, how it differs from Quality Checks, and the key concepts behind schemas, scoring, and reports.
Linguistic Quality Assurance (LQA) is a process by which human linguists evaluate translations against a defined schema of objective errors. It gives localization managers a structured, data-driven way to measure translation quality, identify problem areas, and make informed decisions about their localization process.
LQA can evaluate any type of translation: human, machine, or machine with human post-editing.
Ready to set up LQA? See LQA: Setup Guide.
How LQA differs from Quality Checks
Smartling offers two complementary tools for translation quality. Understanding the difference helps you use both effectively.
| | Quality Checks | LQA |
|---|---|---|
| How it works | Automated rules that flag programmatic errors (spelling, tag consistency, placeholders, etc.) | Human linguists evaluate translations against a schema of error categories |
| When it runs | In real time, during translation in the CAT Tool | In a dedicated post-translation workflow step |
| What it catches | Objective, rule-based errors | Linguistic quality issues that require human judgment |
| Output | Pass/fail warnings in the CAT Tool | Scored evaluations with error categories and severity levels |
| Best for | Preventing common translation errors before submission | Measuring overall translation quality and informing process improvements |
Use Quality Checks to enforce rules during translation. Use LQA to evaluate the quality of translations after the fact and track quality trends over time.
LQA Basic vs. LQA Suite
LQA can be run in two ways:
- LQA Basic: LQA is added as a step within your existing production translation workflow. Linguists evaluate translations in the same project where they were translated. This is the simpler setup and a good starting point.
- LQA Suite: LQA runs in a separate, dedicated project. A sample of translated strings is sent to this project for evaluation, keeping quality assessment separate from production translation. LQA Suite also supports the Translation Round-Trip feature, which allows edits made during evaluation to be pushed back to the original production strings.
For most teams getting started with LQA, LQA Basic is sufficient. LQA Suite is better suited to teams that need to evaluate translations at scale, run regular sampling programs, or keep production and evaluation workflows fully separate.
For full details on LQA Suite, see LQA Suite: Overview.
Key concepts
Schemas
An LQA schema is the catalog of errors that translations are evaluated against. Each error belongs to a category (for example, Accuracy or Fluency) and is assigned a severity level: neutral, minor, major, or critical. The schema defines what counts as a quality problem and how seriously each type of problem is weighted.
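To make the structure concrete, a schema can be sketched as a small data structure: categories containing error types, plus a weight per severity level. The category names, error types, and weights below are illustrative placeholders, not Smartling's actual defaults.

```python
# Hypothetical LQA schema: error categories mapped to error types,
# plus per-severity weights. All names and numbers are illustrative.
schema = {
    "categories": {
        "Accuracy": ["Mistranslation", "Omission", "Addition"],
        "Fluency": ["Grammar", "Spelling", "Punctuation"],
    },
    "severity_weights": {
        "neutral": 0,
        "minor": 1,
        "major": 5,
        "critical": 10,
    },
}

# Every recorded error references one category, one error type within
# that category, and one severity level defined by the schema.
error = {"category": "Accuracy", "type": "Omission", "severity": "major"}

assert error["type"] in schema["categories"][error["category"]]
assert error["severity"] in schema["severity_weights"]
```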
Smartling provides three industry-standard MQM-compatible schema templates, or you can build a custom schema. Whichever you choose, it is recommended to use the same schema across all evaluations so that results are consistent and comparable over time.
For a full comparison of the available templates, see LQA: MQM Schema Templates.
Severity levels and error weighting
Each severity level carries a numerical weight that determines how much it impacts the overall quality score. The default weights are:
- Critical: highest weight; reserved for errors that render a translation misleading, harmful, or unusable
- Major: significant errors that affect meaning or usability
- Minor: small errors that do not affect meaning but reduce quality
- Neutral: recorded for tracking purposes only; neutral errors do not affect the score
These weights can be customized when setting up a schema.
MQM score
The MQM score is a numerical representation of overall translation quality. It is calculated by accumulating the weighted penalty points for all errors recorded across a body of content. Translations are reverse-graded: the lower the penalty, the higher the quality.
A passing threshold is defined by the Acceptable Penalty Points (APP) value configured in the schema. By default this is set to 20, which corresponds to a Raw Quality Score passing threshold of 98 and a Calibrated Quality Score passing threshold of 80. The MQM score is only available when using an MQM-compatible schema template.
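The accumulation described above can be sketched in a few lines. Only the shape comes from this article (sum the weighted penalties across all recorded errors, then compare against the APP threshold); the severity weights and the pass condition (at or under the APP) are assumptions for illustration, and Smartling's actual normalization into Raw and Calibrated Quality Scores is not shown.

```python
# Illustrative severity weights (assumed, not Smartling's defaults).
WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 10}

def penalty_points(errors):
    """Accumulate weighted penalty points across all recorded errors."""
    return sum(WEIGHTS[severity] for severity in errors)

def passes(errors, acceptable_penalty_points=20):
    """Lower penalty means higher quality; assume passing means
    staying at or under the Acceptable Penalty Points (APP) value."""
    return penalty_points(errors) <= acceptable_penalty_points

# Errors recorded across a body of content: 1 + 1 + 5 + 0 = 7 points.
recorded = ["minor", "minor", "major", "neutral"]
assert penalty_points(recorded) == 7
assert passes(recorded)  # 7 <= 20, so this content passes
```

A single critical error (10 points under these weights) still passes the default APP of 20, but a cluster of major errors quickly does not, which is why the weights matter as much as the raw error count.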
Workflows
LQA is enabled on post-translation workflow steps. Linguists access the LQA evaluation dialog from within the CAT Tool when working in a step where LQA is enabled. Translation resource users can only access the LQA dialog on workflow steps where they are assigned and where LQA is enabled.
If LQA is enabled on only one workflow step, linguists lose access to the LQA dialog, including recorded errors and arbitration history, once the content moves to another step. For this reason, we recommend enabling LQA on multiple post-translation steps, or across an entire dedicated LQA workflow, so that linguists can view errors and arbitration history as content progresses.
Errors and arbitration
During an LQA evaluation, linguists review each translation and either record one or more errors or mark the string as having no errors. Each error is categorized according to the schema and assigned a severity level. These recorded errors feed into the MQM score and LQA reports.
After errors are recorded, translators can dispute them through a process called arbitration. Arbitration allows translators to provide context or justification for their translation choices, and reviewers or managers can then accept or reject the dispute. This back-and-forth is tracked at the string level and is visible in the LQA Errors & Arbitration report.
For step-by-step instructions on recording errors and arbitrating, see Evaluating Translations with LQA.
Viewing changes to translations
An error is recorded on the translation that was submitted to your workflow step. You can edit a translation before or after recording an error. To view the difference between the unedited translation (the translation the error is recorded on) and the edited translation (the translation that will be submitted to the next step of the workflow), click the downward arrow beside Translation in the error dialog.
The change (or diff) is highlighted for your attention.
Finding strings reviewed under LQA
You can filter for strings that have been reviewed under LQA in the Strings View. Use the LQA filter to find strings that have been recorded as having errors, having no errors, or that have not been LQA reviewed.
Note that strings that pass through the step without recorded errors (whether submitted or skipped) are counted as having "No Errors" by default.
Users can also filter strings by LQA Status when the CAT Tool is in "Job Mode".
Reports
LQA evaluations generate three reports, accessible under Reports in the top navigation:
- LQA Dashboard: a visual overview of MQM scores by locale, project, or job over time. Available only when using an MQM-compatible schema template.
- LQA Errors & Arbitration Report: a string-level view of all errors recorded, including category, severity, and arbitration comments.
- LQA Error Density Report: an overview of error counts and density by project, language, and job. Useful for reviewing vendor agreements and SLAs.
Video Tutorial: Linguistic Quality Assurance in Smartling
Timestamps:
What is Linguistic Quality Assurance (LQA)? 00:08
How to set up an LQA process in Smartling 01:10
Step 1: Create an LQA Schema 01:45
- Choose a severity format 03:14
- Add error categories 03:51
- Add error types 04:27
- Publish your schema 05:13
Step 2: Enable LQA as part of your workflow 05:28
How your Reviewers record LQA errors 06:30
LQA and Issues: What is the difference? 08:34
Step 3: Run & analyze the LQA report 10:04
Arbitration: What is it and how can it be enabled? 11:31
Adding arbitration comments 13:55
LQA Errors and Arbitration report 14:46
Help & Support 15:52
Additional resources
- LQA: Setup Guide — configure LQA in your account step by step
- LQA: MQM Schema Templates — compare and choose a schema template
- LQA Suite: Overview — learn about the dedicated LQA project approach
- Video Tutorial: Smartling's LQA Suite — walkthrough of LQA Suite setup and evaluation