Smartling has multiple features to support translation quality. Each of the following has a different purpose and best practices:
Quality Checks
Purpose: Automatically check for certain kinds of translation quality issues that can be defined programmatically.
Quality Checks are a set of translation rules assigned to a specific Project under a Quality Check Profile. The profile is integrated with the Smartling CAT Tool for linguists to follow in their translations. When a translation fails a Quality Check, for example because it contains a misspelling, a warning is shown in the CAT Tool.
You can also create Custom Quality Checks, using regex, if a Quality Check isn't already provided in the standard list.
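As a sketch of the kind of rule a regex-based Custom Quality Check might encode, here are a few illustrative patterns with a small Python harness. The patterns, check names, and harness are assumptions for illustration only, not Smartling's actual configuration format:

```python
import re

# Hypothetical examples of regex-based checks; in Smartling you would
# supply the regex itself when configuring a Custom Quality Check.
CHECKS = {
    "double space": re.compile(r"  +"),
    "space before punctuation": re.compile(r"\s+[,.;:!?]"),
    "unresolved placeholder": re.compile(r"\{\{\s*\}\}"),
}

def run_checks(translation: str) -> list[str]:
    """Return the names of checks that the translation fails."""
    return [name for name, pattern in CHECKS.items()
            if pattern.search(translation)]

print(run_checks("Hello  world , welcome to {{}}"))
# -> ['double space', 'space before punctuation', 'unresolved placeholder']
print(run_checks("Clean text."))
# -> []
```

Each failed check here corresponds to the warning a linguist would see in the CAT Tool when saving the translation.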
Quality Checks configured with a “High” severity level will prevent translations from being submitted to the next workflow step. For this reason, it’s recommended to use this setting carefully and judiciously.
Quality Checks are not applied when importing translations, when updating translations from the Translation Memory (including propagation to project strings), when using Quick Edits in the Strings View, or when changing workflow steps from the Strings or List Views.
Issues
Purpose: A channel between content owners and linguists to communicate that a required change should be made to the source content, context, instructions, or the translation.
Smartling's Issues feature is a lightweight “action-required” communication system that can track problems in the source and/or the translations. It has a simple Open/Close lifecycle to support translation project management.
Source Strings Issues require attention from the content owners. They are opened by linguists, typically seeking further information about the content.
Translation Issues require the attention of the person who translated the string. They can be opened or closed by any user who has access to translations that have been authorized, and their state is not automatically modified by workflow step changes.
Historically, Issues have been used to track “quality” by some users. However, with the introduction of Linguistic Quality Assurance, Smartling recommends that Issues be used exclusively to communicate required actions that are not related to translation quality.
See Issues vs. LQA below for more on this.
Linguistic Quality Assurance
Purpose: Objectively evaluate translation quality and improve it by providing feedback to the linguists.
LQA is a human-driven process based on a quality assurance schema that describes objective issues found when evaluating translations for linguistic accuracy and proficiency.
The goal is to evaluate translations and then report on the overall quality to help your linguists understand how well they are performing. It can be used to make strategic decisions on your localization process and to satisfy service-level agreements with your translation vendors.
The users performing LQA should be native speakers of the target language. It is advised that a professional linguist performs the LQA review, as this helps ensure the review is objective; an internal reviewer within your company may apply a more preferential view of the translations. Furthermore, the internal reviewer might not have “editing” permissions, but can instead reject translations or use the Issues feature to flag areas of the translation that require revision by the original translator.
Additionally, the LQA reviewer is expected to edit the translation to correct the error they are recording. LQA errors are recorded against the translation that entered the step, not the translation that progresses after revision.
You can use the string history, and LQA reports, to show the original translators the mistakes that were flagged in LQA.
It is important to note that if a translation reviewed under LQA is rejected rather than revised on that step, the linguist who receives the rejected string will not see the LQA error unless LQA is also enabled on their workflow step. It is uncommon to enable LQA on multiple steps of one workflow.
LQA error data persists when the translation changes steps: if a translation is reviewed under LQA, sent to a previous step, and later returns to the LQA step, the initial error record remains and can be modified, but only on the LQA-enabled step.
It is recommended that LQA errors are never “deleted”, even if or when the translation is corrected. LQA errors are a snapshot in time: if there was an error in a specific revision of the translation, it should be recorded and not removed.
Issues vs LQA
Smartling strongly recommends reserving the Issues feature for tracking “bugs” in your implementation and integration that prevent you from using the translations. For example, if a translation doesn’t display correctly due to a limitation in an application user interface where the translation is used, you might open a translation Issue to request that the translators reduce the length of a translation to fit in the space.
Another example could be to alert the linguists that a formatting change or addition in the translation does not work or creates a visual aesthetic issue in the end application. When responding to these kinds of issues, the linguists may need to choose a less desirable translation or formatting to satisfy the technical issue.
Linguists can use the Source Issues to communicate questions that they have about the source content to help them understand it before they produce translations. The best way to avoid or resolve Source Issues is to provide high quality Visual Context and instructions, and keep your Glossary up to date.
The review step is often handled by an internal reviewer: a person in your business who is fluent in a language you are translating into. It is also common that the internal reviewer simply reviews the translations without making any revisions. If revisions are required, it is important to enable the reject function on that workflow step, so the internal reviewer can reject the string to the original translator and open a translation Issue to detail which areas require attention.
As mentioned with LQA, it is expected that the LQA reviewer is external to your business to ensure an objective view of the translation quality. It is also expected that this reviewer edits the translation to correct it before progressing it along the workflow.
LQA should be used to provide feedback about the quality of translations using criteria that are as objective as possible. LQA errors are not subjective, functional, or technical problems. Rather, they indicate an objective problem with the translation: for example, the translator failed to follow the style guide or glossary, or the translations are inconsistent.
Content owners should design their LQA Schemas to avoid subjective feedback about translation quality; this kind of feedback is difficult to respond to because it is often a matter of opinion. When designing your schema, we recommend you use Smartling's Standard LQA Schema or consider one of several well-known and popular “error typologies” that linguists may be familiar with.
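To make the idea of an objective schema concrete, here is a minimal sketch of how LQA error records and a weighted quality score might be modeled, loosely following an MQM-style error typology. The category names, severity weights, and scoring formula are illustrative assumptions, not Smartling's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative MQM-style categories and severity weights (assumptions,
# not Smartling's Standard LQA Schema).
CATEGORIES = {"Accuracy", "Fluency", "Terminology", "Style", "Locale convention"}
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

@dataclass
class LQAError:
    string_id: str
    category: str      # one of CATEGORIES
    severity: str      # one of SEVERITY_WEIGHTS
    comment: str       # objective description of the problem
    recorded_on: date  # errors are a snapshot in time, never deleted

def quality_score(errors: list[LQAError], word_count: int,
                  max_score: float = 100.0) -> float:
    """Deduct a weighted penalty per 100 words from a perfect score."""
    penalty = sum(SEVERITY_WEIGHTS[e.severity] for e in errors)
    return max_score - penalty * 100 / word_count

errors = [
    LQAError("s1", "Accuracy", "minor", "Nuance lost in second clause", date(2024, 1, 10)),
    LQAError("s1", "Terminology", "major", "Glossary term not used", date(2024, 1, 10)),
]
print(quality_score(errors, word_count=300))  # -> 98.0
```

Because every error carries a category, a severity, and a concrete comment, feedback to linguists stays factual rather than a matter of opinion, and scores can be aggregated for vendor reporting.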
Quality Check Results Can Change Over Time
It is important to note that Quality Check results are not static and can change over time. Running Quality Checks on unchanged translations can yield different results at different times. Similarly, the warnings flagged to translators when saving and submitting translations in the CAT Tool can differ from the results an Account Owner or Project Manager gets by running the checks later, either in the CAT Tool or the Strings View.
The following are some reasons why results can change over time:
- Translations can change.
- The configuration of the Quality Check Profile can change.
- The Translation Memory is continuously changing, so checks that compare against the TM can have different results over time.
- The Glossary can change; terms can be added or removed, and the configuration of terms can change.
- Spellcheck results can change because the system-wide dictionaries can change, and users can also have personal dictionaries that apply only to that user and are not shared with anyone else.
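The glossary point can be illustrated with a toy compliance check: the same unchanged translation passes against one glossary version and fails against a later one. The function and data below are hypothetical, not Smartling's implementation:

```python
import re

def glossary_check(source: str, translation: str, glossary: dict) -> list[str]:
    """Flag glossary source terms whose required target term is missing."""
    misses = []
    for src_term, tgt_term in glossary.items():
        # Term appears in the source but its mandated translation is absent.
        if re.search(rf"\b{re.escape(src_term)}\b", source, re.IGNORECASE) \
           and tgt_term.lower() not in translation.lower():
            misses.append(src_term)
    return misses

glossary_v1 = {"dashboard": "tableau de bord"}
glossary_v2 = {"dashboard": "tableau de bord", "workflow": "flux de travail"}

src = "Open the dashboard to view the workflow."
tgt = "Ouvrez le tableau de bord pour afficher le workflow."

print(glossary_check(src, tgt, glossary_v1))  # -> []
print(glossary_check(src, tgt, glossary_v2))  # -> ['workflow']
```

Nothing about the string changed between the two runs; only the glossary did, which is exactly why re-running checks at a later point can surface new warnings.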