Smartling supports the use of Large Language Models (LLMs) as translation providers.
The process for setting up LLM translation is similar to that for a traditional MT provider, with the added need for a well-crafted translation prompt, customized parameter settings, and ongoing testing and monitoring to ensure the translation output aligns with your brand messaging.
MT engines vs. LLMs
Unlike MT engines, which are generally ready to use out of the box, LLMs rely heavily on well-designed prompts to produce the desired translation output. Additionally, LLMs are prone to hallucinations, meaning they can generate nonsensical, incorrect, or inconsistent translations. Because of this, MT engines are often a more reliable and recommended option for translation, while LLMs are better suited for smoothing and refining the translation output (for example, by using Smartling's AI Toolkit).
However, with the right prompt and tools like RAG technology, the translation quality produced by LLMs often rivals or even exceeds that of traditional NMT providers, and can be more closely customized to your needs.
Check out this post in the Smartling Community to learn about best practices for translating with LLMs and how they compare to MT engines.
Supported LLMs as translation providers
Smartling supports the following LLMs as translation providers:
As with traditional MT providers, these LLMs can be used to translate content with any workflow or integration that is part of Smartling’s AI Hub.
Built your own in-house LLM service? Use it in Smartling—learn more in Bring Your Own MT or LLM Service.
Benefits of translating with LLMs in Smartling
To achieve optimal results with LLM translation, Smartling offers highly specialized features to enhance and customize the translation output, and to prevent potential issues due to hallucinations.
Efficient prompt creation and testing
- Smartling's prompt management interface enables you to easily store, test and adjust your prompt and configuration details.
- A side-by-side view of your translation prompt and a testing interface allows you to adjust your prompt in real time based on the test results.
- Adjustable translation parameters allow you to tailor the translation output to your preferences, for example by adapting the level of output creativity, sampling range, and repetition tolerance. For more information, see Translation Parameters for LLM Translation.
- Prompt tooling with RAG (Retrieval-Augmented Generation) automatically injects your translation prompt with example translations and glossary terms from your linguistic assets. This has been shown to significantly enhance translation quality and to ensure that translations adhere to your organization's preferred style and terminology. For more information, see Prompt Tooling with RAG.
- Smartling allows you to create dynamic translation prompts with conditional logic. Jinja2 conditions can be included in your prompt to dynamically adapt it to specific circumstances based on predefined rules.
Tip: For more information about Smartling's prompt interface and syntax, see Managing LLM Profiles and Prompts.
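As an illustration, a conditional prompt using Jinja2 syntax might look like the sketch below. The variable names (`source_locale`, `target_locale`, `content_type`) are hypothetical placeholders chosen for this example, not Smartling's actual prompt variables; see Managing LLM Profiles and Prompts for the supported syntax.

```jinja
Translate the following text from {{ source_locale }} to {{ target_locale }}.
{% if content_type == "marketing" %}
Use a creative, engaging tone that matches our brand voice.
{% else %}
Translate literally and preserve all technical terminology.
{% endif %}
Return only the translated text, with no explanations.
```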
String batching
- When strings are sent to an LLM for translation, they are bundled into a string batch. This allows your content to be processed more efficiently.
- Since the translation prompt is sent to the LLM only once per string batch (rather than once per individual string), string batching helps reduce the token count for your translation request.
- The exact size of each string batch depends on the model used for translation, but won't exceed the maximum supported token count.
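The token savings from batching can be illustrated with simple arithmetic. All numbers below are invented for the example; they are not Smartling's actual batch sizes or prompt lengths.

```python
# Illustrative arithmetic: sending the prompt once per batch vs. once per string.
prompt_tokens = 500        # tokens in the (fixed) translation prompt
tokens_per_string = 40     # average input tokens per source string
num_strings = 100
batch_size = 20            # strings bundled into each request

# One request per string: the prompt is re-sent with every string.
per_string_total = num_strings * (prompt_tokens + tokens_per_string)

# Batched: the prompt is sent once per batch of 20 strings.
num_batches = num_strings // batch_size
batched_total = num_batches * prompt_tokens + num_strings * tokens_per_string

print(per_string_total)  # 54000
print(batched_total)     # 6500
```

In this made-up scenario, batching cuts the input token count by almost 90%, because the fixed prompt overhead is paid 5 times instead of 100 times.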
Mitigate hallucination issues
- When used as a translation provider, LLMs may at times generate nonsensical, incorrect, or inconsistent translations. This behavior is referred to as "hallucinating".
- Smartling's hallucination detection feature automatically flags potential issues due to LLM hallucinations, allowing you to route affected strings to an alternative provider or workflow.
- Hallucination detection helps catch problematic translations before they are published and can negatively impact your brand.
Tip: For more information, see Hallucination Detection for LLM Translations.
Customize LLM translations with your linguistic assets
- AI-Enhanced Glossary Term Insertion allows your glossary terms to be inserted into LLM translations and adapted to the surrounding sentence structure, preserving your brand terminology.
- For strings where a translation memory match is available, the existing translation from the TM can be inserted and used instead of an LLM translation. If the AI Toolkit is enabled, available TM matches can be optimized with the help of AI, to provide a translation that is better adapted to the current source text.
Optional add-on: Smartling's AI Toolkit
Smartling's AI Toolkit can be used in combination with LLM translation workflows as an optional add-on. This bundle of AI-powered features optimizes the LLM translation output and workflow.
- Adjust the formality register to address your audience with the correct formality level.
- Use the AI Post-Editing Agent to further optimize LLM translations by referencing your linguistic assets and locale-specific rules.
- Use AI Adaptive TM to increase your translation memory leverage by optimizing available matches, which can then be inserted and used instead of an LLM translation.
- Use the Language Quality Estimation Agent to predict the quality level of LLM translations and route them to the appropriate workflow steps.
Measurable translation quality
- Smartling's Linguistic Quality Assurance (LQA) tools can help facilitate an objective evaluation process, providing an MQM quality score for easy assessment.
- An AI-powered LQA Agent will become available in 2026, allowing you to automatically assess LLM translation quality.
How to get started with LLM translation
- To begin translating with an LLM in Smartling, you will need to obtain provider credentials from your preferred provider and then store them in Smartling. For more information, see MT and LLM Provider Credentials (BYOK).
- Once you have set up your provider credentials, you can use them to create an LLM Profile. This is where you store your translation prompt and further customize the translation output. For more information, see Managing LLM Profiles and Prompts.
- The LLM Profile can then be used to translate your content in the Smartling platform (in a translation workflow or to provide translation suggestions in the CAT Tool), or with one of Smartling's instant MT integrations.
Important considerations for translating with LLMs
Understanding token limits
LLMs use tokenizers to break text into units called tokens. Token usage includes both input and output tokens. Input tokens refer to everything fed into the model, including the prompt and the source string. Output tokens refer to the number of tokens returned by the model.
There is no universal tokenization method. Text can be broken down by words, characters, or character sequences, depending on the model. Therefore, the number of translated words shown in Smartling will likely differ from the token usage.
Each model has a maximum token limit that applies to both the input prompt and the generated response. In addition to the length of the translation prompt sent with each request, larger source content and a greater number of target languages increase the risk of reaching the token limit.
When you create an LLM Profile for translation, you can optionally specify token limits in the configuration.
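Because real tokenizers (BPE, SentencePiece, and others) are model-specific, the sketch below uses only a crude rule of thumb of roughly four characters per token, which is a common heuristic for English text, to illustrate why the word count shown in Smartling will not match a model's token usage. It is not any provider's actual tokenizer.

```python
# Rough illustration of why word counts and token counts diverge.
# The ~4-characters-per-token heuristic is an assumption for this sketch,
# not a real tokenization method.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

source = "Internationalization requires careful terminology management."
word_count = len(source.split())
token_estimate = estimate_tokens(source)

print(word_count)     # 5
print(token_estimate) # 15
```

Long compound words like "Internationalization" count as a single word but split into several tokens, so token usage typically exceeds the word count.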
Understanding rate limits
Rate limits control the number of requests or the volume of traffic allowed within a specific time period. Translation providers define rate limits for their models. You must adhere to these limits to maintain successful translations.
If a token or rate limit is exceeded, an error will appear in Smartling on the Profiles page in the AI Hub. Smartling will retry using the LLM Profile until a translation can be produced successfully. If the overall or monthly token limit has been reached, the LLM Profile may stop generating translations.
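Conceptually, retrying after a rate-limit error works like the exponential-backoff sketch below. The exception name, wait times, and attempt count are illustrative assumptions, not Smartling's actual retry policy.

```python
import time

class RateLimitError(Exception):
    """Illustrative stand-in for a provider's rate-limit error."""

def translate_with_retry(translate, text, max_attempts=4, base_delay=1.0):
    # Retry with exponential backoff when the provider reports a rate limit:
    # wait base_delay, then 2x, 4x, ... between attempts.
    for attempt in range(max_attempts):
        try:
            return translate(text)
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * 2 ** attempt)

# Usage: a fake provider that is rate-limited twice, then succeeds.
calls = {"n": 0}
def flaky_provider(text):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("too many requests")
    return f"translated: {text}"

result = translate_with_retry(flaky_provider, "Hello", base_delay=0)
print(result)  # translated: Hello
```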
Adding a fallback translation provider
When using an LLM in the translation step of a workflow in Smartling, it is strongly recommended to configure an alternate MT profile and/or a fallback method on the workflow step. This backup will be used if translation with the LLM fails.
Learn more in Understanding and Troubleshooting Machine Translation Errors.
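The fallback behavior on a workflow step can be thought of as a try-the-next-provider loop: each configured provider is attempted in order until one succeeds. This is a conceptual sketch under that assumption, not Smartling's implementation.

```python
def translate_with_fallback(providers, text):
    # `providers` is a list of (name, translate_fn) pairs, primary first.
    # Return the first successful translation; raise if every provider fails.
    errors = []
    for name, translate in providers:
        try:
            return translate(text)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))

# Usage: the primary LLM fails, so the fallback MT profile is used.
def llm_provider(text):
    raise ValueError("token limit exceeded")

def mt_provider(text):
    return f"[MT] {text}"

result = translate_with_fallback(
    [("llm", llm_provider), ("mt", mt_provider)], "Hello"
)
print(result)  # [MT] Hello
```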
Quality considerations
- Tags and placeholders: Smartling performs an automated formatting clean-up to restore HTML tags and reduce incorrect spacing where possible. However, a human post-editing step may be required at times to ensure the correct placement of HTML tags and placeholders.
- Low-resource languages: LLMs generally perform poorly when translating low-resource languages. It is also important to check the documentation for your specific model to determine which locales or languages it supports.
- Hallucinations: LLMs are prone to hallucinations, meaning they can generate nonsensical, incorrect, or inconsistent translations. Smartling's hallucination detection service helps identify potentially problematic translations and reduce the risk of hallucinations affecting translation quality. As with machine translation, we recommend including a human in the loop to validate and edit translations as needed.