What is MQM?
MQM, or Multidimensional Quality Metrics, is a comprehensive system designed to assess and monitor the quality of translated content. MQM serves as a standardized Linguistic Quality Assurance (LQA) framework to evaluate translation quality across various categories.
Assessing translations under the MQM framework of LQA can help identify strengths in your localization process, as well as opportunities to improve. Furthermore, it helps identify whether a change in your localization process has had a positive impact on your translation quality.
Human and machine translations can be evaluated with MQM LQA.
By evaluating and monitoring translations under MQM, organizations can take strategic action on their localization process backed by concrete, objective data.
For example, a rise in errors under terminology could suggest revisions are required to the glossary, while stylistic errors could suggest revisions to the style guide.
MQM Schemas
MQM assesses translation quality based on categories of errors, making it more manageable and structured. Each error is assessed with a severity weight. The number and severity of errors in these categories allows us to generate MQM scores for a body of content.
When setting up LQA in Smartling, the MQM framework is available as two schema templates:
Simplified MQM Schema
Smartling's simplified version of the full MQM schema has fewer categories and error types, and is easier for evaluators to assess translations:
| Category | Errors | Description |
| --- | --- | --- |
| Accuracy | | Errors in this category occur when the target text does not accurately correspond to the propositional content of the source text. |
| Linguistic conventions | | Errors that are related to the linguistic well-formedness of the translated text. |
| Style | | Errors in text that are grammatically acceptable but deviate from organizational style guides or exhibit inappropriate language style. |
| Technical & Locale | | Errors that occur when the translation product violates locale-specific content or formatting requirements for data elements. |
| Other | | |
Full MQM Schema
The full MQM schema is an industry-standard MQM schema with the entire catalog of errors:
| Category | Errors | Description |
| --- | --- | --- |
| Terminology | | Errors that arise when a term does not conform to normative domain or organizational terminology standards, or when a term in the target text is not the correct, normative equivalent of the corresponding term in the source text. |
| Accuracy | | Errors in this category occur when the target text does not accurately correspond to the propositional content of the source text. |
| Linguistic conventions | | Errors that are related to the linguistic well-formedness of the translated text. |
| Style | | Errors in text that are grammatically acceptable but deviate from organizational style guides or exhibit inappropriate language style. |
| Locale conventions | | Errors that occur when the translation product violates locale-specific content or formatting requirements for data elements. |
| Audience appropriateness | | Errors arising from the use of content in the translation product that is invalid or inappropriate for the target locale or target audience. |
| Design & markup | | Errors related to the physical design or presentation of a translation product, including character, paragraph, and UI element formatting and markup, integration of text with graphical elements, and overall page or window layout. |
| Other | | |
Error Severity
Translations are assessed against categories of errors. Each error is assigned a severity: critical, major, minor, or neutral. Each severity carries an error weight. These weights indicate the relative importance of each error when assessing the overall quality of a translation. The calculation of error weights results in an MQM score.
| Severity Value | Description | Default Weight |
| --- | --- | --- |
| Critical | The error is severe and could have legal or commercial repercussions for the client. When used, this severity has a significant effect on the overall score. Examples include omitting a negation in a terms and conditions text, or mixing up the two parties in a legal contract. | 25 |
| Major | Errors that may cause confusion for the reader, sound unnatural, introduce inconsistencies, or diverge from the customer's established linguistic assets (style guide, translation memory, glossary, instructions). | 5 |
| Minor | Mistakes such as typos or punctuation issues that do not impact the reader's understanding, but are still objective errors. | 1 |
| Neutral | Errors that should not affect the final score, but still need to be documented. Neutral errors can be used to offer preferential suggestions, feedback, or kudos to translators. | 0 |
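To illustrate how severity weights combine, here is a minimal sketch of penalty-point accumulation using the default weights from the table above. This is an illustrative assumption, not Smartling's exact scoring implementation; the `penalty_points` function and error-count structure are hypothetical.

```python
# Default severity weights from the table above.
DEFAULT_WEIGHTS = {"critical": 25, "major": 5, "minor": 1, "neutral": 0}

def penalty_points(error_counts, weights=DEFAULT_WEIGHTS):
    """Sum severity-weighted penalty points for a set of logged errors.

    error_counts maps a severity name to the number of errors logged
    at that severity, e.g. {"critical": 1, "major": 2}.
    """
    return sum(weights[severity] * count
               for severity, count in error_counts.items())

# Example: 1 critical, 2 major, 3 minor, and 4 neutral errors.
points = penalty_points({"critical": 1, "major": 2, "minor": 3, "neutral": 4})
print(points)  # 25 + 10 + 3 + 0 = 38
```

Note how neutral errors are recorded but contribute nothing to the total, matching their weight of 0.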
Customization
You can customize the default severity weights on any unpublished schema:
- Go to Account Settings > Linguistic Quality Assurance
- On any draft schema, click the ellipsis > Edit
Acceptable Penalty Points
A key component of any LQA strategy is assessment: knowing whether translations pass or fail your evaluation. Acceptable Penalty Points (APP) let you maintain consistent translation quality by defining clear pass/fail criteria based on your MQM framework.
When you create a new MQM-compatible LQA schema, you have the option to specify the APP value. By default, the value is set to 20, which results in a Raw Quality Score passing threshold of 98 and a Calibrated Quality Score passing threshold of 80. These thresholds change dynamically based on the APP set.
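The relationship between the APP value and the two thresholds can be sketched as follows. The formulas here are assumptions inferred only from the default values quoted above (APP of 20 yielding thresholds of 98 and 80); Smartling's actual calculation may differ.

```python
# Assumed threshold formulas (hypothetical): the raw threshold appears to
# scale APP down by a factor of 10, while the calibrated threshold
# subtracts APP directly. Both reproduce the documented defaults.
def thresholds(app):
    """Return (raw, calibrated) passing thresholds for a given APP value."""
    raw_threshold = 100 - app / 10        # assumption: APP = 20 -> 98
    calibrated_threshold = 100.0 - app    # assumption: APP = 20 -> 80
    return raw_threshold, calibrated_threshold

raw, calibrated = thresholds(20)
print(raw, calibrated)  # 98.0 80.0
```

Under these assumed formulas, lowering the APP raises both passing thresholds, i.e. makes the assessment stricter.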