LLM Profiles (also referred to as "Translation Profiles") are a collection of configurations for the Large Language Models used to translate your content in Smartling.
LLM Profiles provide an interface to create, store and test your translation prompt, and allow you to further customize the translation output for each provider.
Each profile can then be assigned to a machine translation workflow. You can also use your LLM Profile to provide MT suggestions in the CAT Tool, to make use of Smartling's MT API, or to display translations directly where they are needed with a range of instant translation integrations.
Warning: This article only covers LLM Profiles for GPT (OpenAI) and Google Gemini (Vertex AI). If an LLM is used as a translation provider as part of an MT Profile, please refer to the documentation for your provider instead.
How to access LLM Profiles
To view, manage or create LLM Profiles:
- Click into the AI Hub from the top navigation bar of your Smartling dashboard.
- Click into the Translation Profiles tab.
Here, you can manage existing MT or LLM Profiles, and create additional profiles for a provider of your choice.
How to create an LLM Profile
To set up an LLM as your translation provider, please follow these steps:
Step 1: Obtain provider credentials
Obtain credentials for your preferred LLM provider, and store them in Smartling.
- To bring your own provider key, you will need to set up your own provider credential in Smartling. Once the provider key has been stored on the Credentials page of the AI Hub, you can then simply select this credential when setting up your new LLM Profile.
- If you are opting for GPT (Azure), Google Translation LLM or Google Gemini (Vertex AI), your Smartling Customer Success Manager can provide provider credentials for testing purposes.
- To continue translating beyond testing, you will need to supply your own key.
- If you choose this option, your Smartling Customer Success Manager will have already created an LLM Profile. You will still need to customize the profile with your own parameters, token details and translation prompt (see next step).
Step 2: Create a Translation Profile
Once the provider credential has been saved in Smartling, a Translation Profile needs to be created to use the LLM in your Smartling workflows or integrations.
- From the AI Hub in the top navigation bar of your Smartling dashboard, navigate to the Translation Profiles tab.
- Click Create Profile.
- To create a new Translation Profile, select one of the following actions based on your provider:
- Select LLM Profile (RAG) if you are using GPT (OpenAI) or Google Gemini (Vertex AI).
- Select MT Profile if you are using any other supported LLM provider.
If creating a profile for a different LLM, the setup process will look slightly different than what is described below, as Prompt Tooling with RAG is currently only supported for GPT (OpenAI) and Google Gemini (Vertex AI). See Creating and Managing MT Profiles for more information.
Info: If your Smartling Customer Success Manager has provided credentials for testing purposes, an LLM Profile has already been set up. You will, however, need to edit the Profile to enter your own translation prompt and set your desired translation parameters. Click on the profile name to customize it.
Step 3: Enter your provider and token details
If you are bringing your own provider credential, enter the provider and token details for your preferred LLM on the Configuration Details screen.
Tip: If an LLM Profile was created by your Smartling Customer Success Manager, the provider details are already filled in and this step can be skipped.
Provider details
- LLM Provider: From the dropdown menu, select the LLM provider you want to use as a translation provider in Smartling.
- LLM Profile Name: Enter a profile name that is easily identifiable by your team. We recommend choosing a name that indicates the selected provider, as well as any additional specifications.
- Provider Credentials: From the dropdown menu, select the provider credential you created earlier.
- Version or model details: Specify which version and/or model should be used to translate your content.
Token details (optional)
LLM models use tokenizers to break text into units called tokens. Token usage includes both input and output tokens. Input tokens refer to everything fed into the model, including the prompt and the source string. Output tokens refer to the number of tokens returned by the model.
There is no universal tokenization method. Text can be broken down by words, characters, or character sequences, depending on the model. Therefore, the number of translated words shown in Smartling will likely differ from the token usage.
Each model has a maximum token limit that applies to both the input prompt and the generated response. Please see your provider's documentation for specific token limits.
In addition to the length of the translation prompt sent with each request, larger source content and a greater number of target languages increase the risk of reaching the token limit.
Tip: Smartling uses string batching to reduce the number of tokens used with each translation request.
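Because tokenization is model-specific, exact counts require the provider's own tokenizer (for example, OpenAI's tiktoken library for GPT models). Purely as an illustration of why word counts and token counts diverge, the sketch below uses the rough rule of thumb of about four characters per token for English text; it is a heuristic, not a real tokenizer:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token heuristic.

    Illustration only; real tokenizers (e.g. tiktoken for GPT models)
    split text into model-specific subword units.
    """
    return max(1, round(len(text) / chars_per_token))

source = "Machine translation workflows route content to your chosen provider."
word_count = len(source.split())
token_estimate = estimate_tokens(source)

# The billed and limited unit is tokens, not words, so the two counts differ.
print(f"words: {word_count}, estimated tokens: {token_estimate}")
```

Note that both the prompt and the model's response count toward the limits described above, so a long translation prompt leaves less room for source strings in each request.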
Under the Token Details of your LLM Profile, you can specify the following token limits:
- Max supported tokens for the selected model: The maximum number of tokens your model supports per request.
- Max supported output tokens for the selected model: The maximum number of output tokens your model supports per request. Not all models support this parameter; please refer to the documentation for the model selected in the LLM Profile.
Tip: These fields are optional. If they are left empty, Smartling will use the provider's default values.
Info: If a token limit is exceeded, an error will appear in Smartling on the Translation Profiles page. Smartling will retry using the LLM Profile until a translation can be produced successfully. If the overall or monthly token limit has been reached, the LLM Profile may stop generating translations.
Step 4 (optional): Adjust the translation parameter details
Unlike traditional MT providers, LLMs allow you to customize the translation output by adjusting additional parameters.
Under Parameter Details, you can adjust the available translation parameters to your preferences. This step is optional. If no custom values are entered, your model's default values will be used.
The specified parameter details will influence factors like the creativity and randomness of the translation output, the tolerated level of word and topic repetitions, as well as the sample size used to create the translation.
For more information on the available translation parameters, please see Translation Parameters for LLM Translation, and visit the relevant documentation for your provider.
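As an illustration of what such parameters look like, the sketch below uses OpenAI-style parameter names and their documented ranges; other providers use similar but not identical names and limits, so check your provider's documentation before relying on these values:

```python
# Illustrative parameter set using OpenAI-style names; the ranges follow
# OpenAI's documented limits and may differ for other providers.
translation_params = {
    "temperature": 0.3,        # lower = more deterministic, literal output
    "top_p": 1.0,              # nucleus sampling: probability mass considered
    "frequency_penalty": 0.2,  # discourages repeating the same words
    "presence_penalty": 0.0,   # discourages repeating the same topics
}

ALLOWED_RANGES = {
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "frequency_penalty": (-2.0, 2.0),
    "presence_penalty": (-2.0, 2.0),
}

def validate(params: dict) -> bool:
    """Return True if every parameter falls inside its allowed range."""
    return all(ALLOWED_RANGES[k][0] <= v <= ALLOWED_RANGES[k][1]
               for k, v in params.items())

print(validate(translation_params))  # a valid configuration
```

For translation work, low temperature values are a common starting point, since translation usually rewards consistency over creativity.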
Step 5: Enable or disable hallucination detection
When used as a translation provider, Large Language Models (LLMs) are prone to hallucinations: cases where the model generates nonsensical, incorrect, or inconsistent translations.
To help prevent publishing problematic translations, Smartling's hallucination detection feature automatically flags potential issues due to LLM hallucinations, allowing you to route affected strings to an alternative provider or workflow.
Tip: For more information, please see Hallucination Detection For LLM Translation.
Hallucination detection is enabled by default. If you wish to disable hallucination detection, you can do so within the LLM Profile, by selecting the checkbox "Disable hallucination detection".
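Smartling's detection logic is internal and not exposed. Purely to illustrate the kinds of signals such a check can look at, here is a naive client-side heuristic with invented thresholds; it is not Smartling's hallucination detection:

```python
def looks_suspicious(source: str, translation: str) -> bool:
    """Naive illustrative checks; NOT Smartling's hallucination detection."""
    if not translation.strip():
        return True                      # empty output
    if translation.strip() == source.strip():
        return True                      # source echoed back untranslated
    ratio = len(translation) / max(len(source), 1)
    if ratio < 0.3 or ratio > 3.0:
        return True                      # implausible length change
    return False

print(looks_suspicious("Hello world", ""))  # True: empty output is flagged
```

Real detection weighs many more signals than length and equality, which is why flagged strings are routed for review rather than silently discarded.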
Note: Once you have entered all configuration details, click Next to set up your translation prompt.
Step 6: Configure your translation prompt
Unlike traditional MT providers, LLMs allow you to customize the translation output by creating your own translation prompt. The translation prompt instructs the LLM on how to translate your content.
Smartling’s prompt configuration interface makes it easy to first create a translation prompt (on the left side), and then test the output (on the right side) to check if it aligns with your expectations.
Assets References (for Prompt Tooling with RAG)
Smartling allows you to augment your translation prompt with contextual information extracted from your linguistic assets. By using RAG technology (Retrieval-Augmented Generation), your translation prompt is automatically injected with highly customized translation data, allowing the LLM to better understand your translation preferences and to produce a more tailored translation output.
You can choose to augment your prompt with examples from your TM(s), with glossary terms, or both. To learn more, see our documentation on Prompt Tooling with RAG for LLM translations.
Note: Prompt tooling with RAG is currently only available for translations with GPT (OpenAI) and Google Gemini (Vertex AI). For other providers, we would recommend including examples of desired terminology and translations directly in the translation prompt ("few-shot prompt").
Prompt
A translation prompt is an instruction that asks the LLM to translate your content. You can also use the translation prompt to specify how the content should be translated. For example, you can mention the overall tone that should be applied in the translations (e.g. friendly, formal etc.), and include information on the intended audience.
The prompt creation interface is divided into three fields:
- System Role (provided by Smartling): The first System Role field, which is greyed out, is pre-populated by Smartling based on the selected model and asset references. It cannot be modified or edited. Smartling's system prompt uses the DSPy (Declarative Self-improving Python) framework to ensure that any selected Assets References are applicable to all supported LLM models.
- System Role (provided by you): The second System Role field, which is editable, is where you enter your translation prompt. To successfully translate content in your Smartling projects and integrations, the prompt needs to contain the following elements:
  - We recommend using the word "translate" in the prompt to instruct the LLM to provide a translation. Example prompt: "Translate the source text. The tone should be formal and friendly."
  - Additionally, conditions and placeholders can be used to dynamically adapt the prompt to different translation scenarios. For example, conditional syntax allows you to create a single prompt that supports locale-specific rules. For more information about the supported condition syntax and placeholders, see Conditions and Placeholders in LLM Prompts.
  - For best results, we recommend following these best practices:
    - Complex, "few-shot" prompts typically achieve better results than simple prompts that provide little context about your translation preferences.
    - Precise, affirmative, simple step-by-step instructions, as well as chain-of-thought prompts, are typically the most effective.
    - Depending on your exact goals, locale-specific prompts may be needed.
- User Role (provided by Smartling): The User Role field is pre-populated by Smartling based on the selected model. It cannot be modified or edited.
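As an illustration of the "few-shot" prompting idea mentioned above, the sketch below uses the generic chat-message format (role/content pairs) common to LLM APIs. The terminology examples are invented for the sketch, and Smartling assembles the final request for you, so this is conceptual rather than something you submit directly:

```python
# Illustrative few-shot translation prompt in the generic chat-message
# format. The glossary examples below are invented for this sketch.
messages = [
    {"role": "system", "content": (
        "Translate the source text from English to German. "
        "Use a formal tone and preserve product names.\n"
        "Examples of preferred terminology:\n"
        "- 'dashboard' -> 'Dashboard' (do not translate)\n"
        "- 'workflow' -> 'Arbeitsablauf'"
    )},
    {"role": "user", "content": "Open the dashboard to review your workflow."},
]

print(len(messages))
```

Including concrete terminology examples like these gives the model context it cannot infer from a bare "translate this" instruction, which is why few-shot prompts typically outperform minimal ones.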
Step 7: Test the prompt
Once the translation prompt has been entered, click "Test prompt" to check if the prompt is working as expected and produces the desired results. You will be able to produce a test translation for one or multiple sample strings. Click the blue section icon to expand the Test Prompt section.
Under Testing Prompt, enter the following details:
- Source and target locale: Specify the language pair you would like to get a test translation for, by selecting the source and target locale from the dropdown menu.
- Linguistic Package: If you are using Assets References for prompt tooling with RAG, specify which linguistic package should be referenced for glossary terms and/or translation memory examples.
  - This selection applies to testing only. For translations within a Smartling project, the project's Linguistic Package will be used for prompt tooling with RAG.
- Test Source String: Enter a text snippet that you would like to get a test translation for.
  - To test multiple strings, click "Add String" and enter a single string in each input field.
  - If multiple test source strings are added, they will be sent to the LLM as a string batch to mimic an actual translation request. For more information, see String batching.
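Conceptually, batching groups strings so that each request stays under a token budget. The sketch below illustrates the idea; the budget and the four-characters-per-token estimate are invented for the example and are not Smartling's internal values:

```python
def batch_strings(strings, max_tokens=500, chars_per_token=4):
    """Group strings into batches whose estimated token total stays
    under max_tokens. Illustrative only; Smartling batches internally."""
    batches, current, current_tokens = [], [], 0
    for s in strings:
        tokens = max(1, len(s) // chars_per_token)
        if current and current_tokens + tokens > max_tokens:
            batches.append(current)      # budget reached: start a new batch
            current, current_tokens = [], 0
        current.append(s)
        current_tokens += tokens
    if current:
        batches.append(current)
    return batches

strings = ["Short string."] * 10
print(len(batch_strings(strings, max_tokens=10)))
```

Sending several strings per request amortizes the fixed prompt tokens across the batch, which is why batching reduces overall token usage.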
Once you have entered your test data, click "Run" to get a test translation using your selected provider and prompt.
Note: If any issues were detected with your configuration details, an error message will be shown. Please ensure that all information has been entered correctly.
Output
Provided that no issues were detected, the test translation will be shown under Output, in the "Translation" field.
A separate tab also displays the rendered prompt. This is the full prompt as it was sent to the LLM by Smartling, including any relevant metadata, asset references and applied conditions.
Step 8: Create the profile
Click Create to save the LLM Profile.
The LLM Profile is now created and can be accessed anytime from the Translation Profiles tab under the AI Hub.
Tip: We recommend monitoring the translation results on a regular basis to check if the translation prompt or any of the parameters need to be adjusted.
Step 9 (optional): Additional customization
You can further customize your LLM translation profile or workflow for an even more tailored translation output.
Customize the LLM profile and workflow with your linguistic assets
- To apply glossary terms in your LLM translations, enable AI-Enhanced Glossary Term Insertion for your translation workflow.
- To use existing TM matches instead of an LLM translation where available, enable Translation Memory Match Insertion.
Activate the AI Toolkit (optional add-on)
Smartling's AI Toolkit can be used in combination with LLM translation workflows as an optional add-on. This bundle of AI-powered features optimizes your translation output and workflow routing.
- Use the AI Post-Editing Agent to further optimize LLM translations by referencing your linguistic assets and locale-specific rules.
- Use the Language Quality Estimation Agent to predict the quality level and route LLM translations accordingly to the right workflow steps.
- Use AI Formality Adjustment to apply the desired formality register in LLM translations (formal vs. informal).
- Use AI Adaptive TM to increase your translation memory leverage by optimizing available matches, which can then be inserted and used instead of an LLM translation.
Using the LLM Profile to translate your content
Once the LLM Profile has been fully configured, it can be used to translate your content, either within the Smartling platform (using an MT workflow or MT suggestions in the CAT Tool), or through one of the integrations and APIs included with Smartling’s AI Hub, to display machine translations directly where needed.
- LLM Profiles can be used as a translation provider in a machine translation workflow.
Machine translation workflows allow you to translate any content in the Smartling platform, using your preferred provider and configurations. For more information, please see Setting Up a Machine Translation Workflow.
- LLM Profiles can be used to translate content with Smartling's MT API or one of Smartling's instant MT integrations, to provide LLM translations directly where they should be displayed - without the need to upload the content into the Smartling platform first.
To select the desired LLM Profile for each of these integrations, please navigate to the Instant MT tab of the AI Hub. Then click into the relevant tab for your integration.
- LLM Profiles can be used to provide translation suggestions in the CAT Tool.
To select the desired LLM Profile for translation suggestions in the CAT Tool, please navigate to the Instant MT tab of the AI Hub. Then click into the CAT Tool tab.
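At a high level, a request to a machine-translation API carries the source strings, the locale pair, and the profile to use. The field names and structure below are hypothetical placeholders, not Smartling's actual MT API contract; consult the MT API documentation for the real endpoint and schema:

```python
import json

def build_mt_request(strings, source_locale, target_locale, profile_uid):
    """Assemble an illustrative MT request payload.

    All field names here are hypothetical placeholders; the real
    Smartling MT API schema is defined in its API documentation.
    """
    return json.dumps({
        "sourceLocaleId": source_locale,
        "targetLocaleId": target_locale,
        "profileUid": profile_uid,
        "items": [{"sourceText": s} for s in strings],
    })

payload = build_mt_request(["Hello world"], "en-US", "de-DE",
                           "example-profile-uid")
print(payload)
```

The key point is that the profile identifier selects all of the configuration described above (provider, parameters, prompt), so callers never send prompts or credentials per request.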
How to edit an LLM Profile
- From the AI Hub, click into the Translation Profiles tab.
- Click on the name of the LLM Profile that you want to edit.
- The Edit LLM Profile dialog appears where you can edit and test the LLM Profile credentials and adjust your prompt.
How to delete an LLM Profile
- From the AI Hub, click into the Translation Profiles tab.
- Next to the LLM Profile you want to delete, click the ellipses under Actions.
- Select the action Delete Profile.
You cannot delete an LLM Profile that is assigned to a workflow. You need to re-assign the workflow step to another translation option. If you unassign an LLM Profile while translations are in progress, translation progress will stop. The translations will remain in the translation step until the step is reassigned.