When used as a translation provider, Large Language Models (LLMs) are prone to hallucinations: cases where they generate nonsensical, incorrect, or inconsistent translations.
To help prevent publishing problematic translations, Smartling's hallucination detection feature automatically flags translations that may be LLM hallucinations, allowing you to route affected strings to an alternative provider or workflow.
How it works
Smartling provides automated hallucination detection to help identify potentially problematic translations. This service uses a non-LLM Google embedding model (Vertex AI) that evaluates the semantic similarity between the source and translation to determine whether a translation is a potential hallucination.
If you prefer not to send data to Google, an alternative option using a different embedding model, LaBSE, is available. Please contact your Customer Success Manager to enable this option.
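Conceptually, the check compares source and translation embeddings and flags pairs that are semantically distant. The sketch below is a minimal illustration using the publicly available LaBSE model via the sentence-transformers library; the similarity threshold and the flagging logic are assumptions for illustration only, not Smartling's actual implementation or cutoff.

```python
from sentence_transformers import SentenceTransformer, util

# LaBSE is the publicly available alternative embedding model mentioned above.
# The threshold below is an illustrative assumption, not Smartling's actual cutoff.
SIMILARITY_THRESHOLD = 0.55

model = SentenceTransformer("sentence-transformers/LaBSE")

def is_potential_hallucination(source: str, translation: str) -> bool:
    """Flag a translation whose embedding is semantically distant from the source."""
    embeddings = model.encode([source, translation], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return similarity < SIMILARITY_THRESHOLD

# A translation unrelated to the source should score low and be flagged.
print(is_potential_hallucination(
    "The invoice is due on Friday.",
    "El gato duerme en el sofá.",
))
```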
Hallucination detection in a workflow
When translating with an LLM in a translation workflow, if a potential hallucination is detected, the alternative MT Profile configured for the workflow is used instead. If that provider also produces a potential hallucination, or if no alternative provider is configured for the workflow, the string is flagged and a translation issue is opened. You will then need to either move the string into a different workflow that uses human translation or an alternative provider (e.g., an MT engine), or enter a translation manually. This detection service helps reduce the risk of hallucinations affecting translation quality. As with machine translation, we recommend keeping a human in the loop to validate and edit translations as needed.
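To make the routing behavior concrete, here is a minimal sketch of the fallback logic in Python. The function names, the TranslationResult structure, and the hallucination check are hypothetical stand-ins for illustration; they are not Smartling APIs.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TranslationResult:
    text: Optional[str]
    needs_review: bool  # True when the string should go to a human or another workflow

def translate_with_fallback(
    source: str,
    llm_translate: Callable[[str], str],
    alternative_mt_translate: Optional[Callable[[str], str]],
    looks_like_hallucination: Callable[[str, str], bool],
) -> TranslationResult:
    """Try the LLM first, fall back to the alternative MT Profile, then flag for review."""
    llm_output = llm_translate(source)
    if not looks_like_hallucination(source, llm_output):
        return TranslationResult(text=llm_output, needs_review=False)

    if alternative_mt_translate is not None:
        mt_output = alternative_mt_translate(source)
        if not looks_like_hallucination(source, mt_output):
            return TranslationResult(text=mt_output, needs_review=False)

    # Both attempts were flagged (or no alternative is configured): open a
    # translation issue and leave the string for a human or a different workflow.
    return TranslationResult(text=None, needs_review=True)
```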
Hallucination detection for MT integrations and the CAT Tool
Hallucination detection is also performed when using an LLM as your translation provider with one of Smartling's instant MT integrations or for MT suggestions in the CAT Tool.
If a problematic translation is detected for an integration using Smartling's MT API, the translation is still returned, but a validation error will alert you to the potential hallucination. You can then decide how to proceed with the flagged translation.
Example:
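As a rough illustration of how a client might handle this, the sketch below assumes a hypothetical JSON response shape in which a flagged translation carries a validation error alongside the translated text. The field names and the error code are assumptions for illustration only; consult the Smartling MT API documentation for the actual response format.

```python
# Hypothetical response handling: the field names ("translationText",
# "validationErrors", "HALLUCINATION_DETECTED") are illustrative assumptions,
# not the documented API schema.
def handle_mt_response(response: dict) -> dict:
    translation = response.get("translationText")
    errors = response.get("validationErrors", [])

    hallucination_flagged = any(
        error.get("code") == "HALLUCINATION_DETECTED" for error in errors
    )

    if hallucination_flagged:
        # The translation is still returned; decide here whether to use it,
        # discard it, or send the string for human review.
        return {"translation": translation, "use_as_is": False}

    return {"translation": translation, "use_as_is": True}

# Example usage with a mocked, flagged response:
flagged = handle_mt_response({
    "translationText": "Texto posiblemente alucinado",
    "validationErrors": [{"code": "HALLUCINATION_DETECTED"}],
})
print(flagged)
```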
Disabling hallucination detection
Hallucination detection is enabled by default when an LLM is used as your primary translation provider.
If you wish to disable hallucination detection, you can do so within the LLM Profile:
- From the AI Hub, navigate to Translation Profiles.
- Click the name of the LLM Profile you want to disable hallucination detection for.
- Select the "Disable hallucination detection" checkbox.