Smartling offers multiple features to support translation quality. Each has a different purpose and its own best practices:
Quality Checks
Purpose: Automatically check for certain kinds of translation quality issues that can be defined programmatically.
Quality Checks are a set of translation rules assigned to a specific Project under a Quality Check Profile. The profile is integrated with the Smartling CAT Tool for linguists to follow in their translations. When a translation fails a Quality Check, e.g. because it contains a misspelling, a warning is shown in the CAT Tool.
You can also create Custom Quality Checks using regex if a check you need isn't already in the standard list.
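Smartling evaluates custom-check regexes on its own platform, but you can prototype a pattern locally before configuring it. A minimal sketch (the patterns, function name, and warning labels below are illustrative, not part of Smartling's API):

```python
import re

# Illustrative patterns only -- they mimic the kind of rule a
# regex-based Custom Quality Check could enforce.
DOUBLE_SPACE = re.compile(r"  +")              # two or more consecutive spaces
SPACE_BEFORE_PUNCT = re.compile(r" [,.;:!?]")  # stray space before punctuation

def check_translation(text: str) -> list[str]:
    """Return warning labels, mimicking regex-based Quality Checks."""
    warnings = []
    if DOUBLE_SPACE.search(text):
        warnings.append("double space")
    if SPACE_BEFORE_PUNCT.search(text):
        warnings.append("space before punctuation")
    return warnings

print(check_translation("Bonjour , le  monde"))
# -> ['double space', 'space before punctuation']
```

Testing a pattern this way against known-good and known-bad translations helps avoid false positives before linguists start seeing warnings in the CAT Tool.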
Quality Checks configured with a “High” severity level prevent translations from being submitted to the next workflow step. For this reason, apply this setting judiciously.
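The gating behavior can be summarized in a few lines. This is a hypothetical sketch of the logic described above; the function name and data shape are illustrative, not a Smartling API:

```python
# Hypothetical model of High-severity gating: only failed checks
# configured as "High" block progression to the next workflow step.
def can_submit(failed_checks: list[dict]) -> bool:
    """A translation can move forward only if no failed check
    carries High severity."""
    return not any(check["severity"] == "High" for check in failed_checks)

print(can_submit([{"name": "Spellcheck", "severity": "Medium"}]))        # True
print(can_submit([{"name": "Glossary compliance", "severity": "High"}]))  # False
```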
Quality Checks are not applied when importing translations, updating translations from the Translation Memory (including propagation to project strings), using Quick Edits in the Strings View, or changing workflow steps from the Strings or List Views.
Issues
Purpose: A channel between content owners and linguists for communicating that a required change should be made to the source content, context, instructions, or the translation.
Smartling's Issues feature is a lightweight, “action-required” communication system that can track problems in the source and/or the translations. It has a simple Open/Close lifecycle to support translation project management.
Source Issues require attention from the content owners. They are opened by linguists, typically to request further information about the content.
Translation Issues require the attention of the person who translated the string. They can be opened or closed by any user who has access to translations that have been authorized, and their state is not automatically modified by workflow step changes.
Historically, Issues have been used to track “quality” by some users. However, with the introduction of Linguistic Quality Assurance, Smartling recommends that Issues be used exclusively to communicate required actions that are not related to translation quality.
See Issues vs LQA below for more on this.
Linguistic Quality Assurance
Purpose: Objectively evaluate translation quality to provide feedback for improving linguist performance.
LQA is a human-driven process that involves creating a quality assurance schema to identify objective issues in translations, focusing on linguistic accuracy and proficiency.
The goal is to evaluate and report on translation quality to help linguists understand their performance, inform strategic decisions on localization processes, and meet service-level agreements with your translation vendors.
LQA should be performed by native speakers of the target language. Ideally, a professional linguist handles the review to ensure objectivity, as internal reviewers may offer more subjective feedback. Instead of conducting LQA themselves, internal reviewers can reject translations or use the Issues feature to flag areas that need revision by the original translator.
LQA reviewers are typically expected to correct errors while recording them. It’s important to note that LQA errors are recorded based on the translation that enters the step, not the version after revisions. You can use the string history and LQA reports to show original translators the mistakes flagged during the LQA process.
If a translation reviewed under LQA is rejected rather than edited, the translator who receives the rejected string will not see the LQA error unless LQA is also enabled on that workflow step. It is uncommon to enable LQA on multiple steps in one workflow, unless the workflow includes dedicated steps for LQA arbitration or rebuttal.
The LQA error data remains accessible even if the translation moves between steps. For example, if a translation is reviewed under LQA, sent back to the previous step, and then returns to the LQA step, the initial error record remains and can be modified, but only on the LQA-enabled step.
It is recommended that LQA errors never be deleted, even when the translation is corrected, as these errors serve as a historical snapshot of the translation's quality at that time.
LQA can be performed in a production translation project, referred to as LQA Basic, or in a separate project dedicated exclusively to LQA, known as LQA Suite. For more information, see Getting Started with LQA.
Issues vs LQA
Smartling strongly recommends reserving the Issues feature for tracking “bugs” in your implementation or integration that prevent proper use of the translations. For example, if a translation doesn’t display correctly due to a limitation in an application user interface, you could open a translation issue to request that translators shorten the translation to fit in the available space.
Another example could be alerting linguists to a formatting change or addition in the translation that causes visual or functional issues in the final application. In response to such issues, linguists may need to choose a less ideal translation or adjust formatting to resolve the technical problem.
Linguists can also open Source Issues to ask questions about the source content, helping them understand it better before producing translations. The best way to avoid or resolve Source Issues is by providing high quality Visual Context, instructions, and an up-to-date Glossary.
The review step is often managed by an internal reviewer: a person in your business who is fluent in the target language. It is common for internal reviewers to simply review the translations without making any revisions. If revisions are required, enable the reject function on that workflow step so the internal reviewer can reject the string back to the original translator and open a translation issue detailing which areas require attention.
As mentioned above, the LQA reviewer is expected to be external to your business to ensure an objective view of translation quality. This reviewer is also expected to correct the translation before progressing it along the workflow.
LQA should be used to provide feedback about translation quality using criteria that are as objective as possible. LQA errors are not subjective, functional, or technical problems; rather, they indicate an objective problem with the translation, e.g. the translator failed to follow the style guide or glossary, or the translations are inconsistent.
Content owners should design their LQA Schemas to avoid subjective feedback about translation quality. This kind of feedback is difficult to respond to because often it’s a matter of opinion. When designing your schema, we recommend you use one of the pre-configured schema templates or consider one of several well known and popular “error typologies” that linguists may be familiar with.
Important Considerations
It is important to note that Quality Check results are not static and can change over time. Running Quality Checks on the same translations can produce different results, even if the translations are unchanged. Similarly, the warnings shown to translators when saving and submitting translations in the CAT Tool can differ from the results an Account Owner or Project Manager gets by running the checks at a later point, either in the CAT Tool or the Strings View.
The following are some reasons why results can change over time:
- Translations can change.
- The configuration of the Quality Check Profile can change.
- The Translation Memory is continuously changing, so checks that compare against the TM can produce different results over time.
- The Glossary can change; terms can be added or removed, and the configuration of terms can change.
- Spellcheck results can change because system-wide dictionaries can change, and users can also maintain personal dictionaries that are used only by that user and not shared with anyone else.