
Configuring LLMs

note

LLM configurations are available on the following licensing plans:

  • Flex: Advanced Platform, Flex: Standard Platform.

The LLM configurations tab allows you to integrate your existing AI subscriptions while maintaining the governance framework provided by UiPath. You can:

  • Add your own LLM: Use any LLM that meets the product's compatibility criteria. To ensure smooth integration, your chosen LLM must pass a series of tests initiated through a probe call before it can be used within the UiPath ecosystem.

Configuring LLMs preserves most of the governance benefits of the AI Trust Layer, including policy enforcement via Automation Ops and detailed audit logs. However, model governance policies are specifically designed for UiPath-managed LLMs. This means that if you disable a particular model through an AI Trust Layer policy, the restriction only applies to the UiPath-managed version of that model. Your own configured models of the same type remain unaffected.

When leveraging the option to use your own LLM or subscription, keep the following points in mind:

  • Compatibility requirements: Your chosen LLM or subscription must align with the model family and version currently supported by the UiPath product.
  • Setup: Make sure you properly configure and maintain all required LLMs in the custom setup. If any component is missing, outdated, or incorrectly configured, your custom setup may cease to function.
  • Cost-saving: If your custom LLM setup is complete, correct, and meets all necessary requirements, you may be eligible for a Reduced Consumption Rate.

Setting up an LLM connection

LLM connections rely on Integration Service to establish the connection to your own models. You can create connections to the following providers:

  • Azure OpenAI
  • OpenAI
  • Amazon Bedrock
  • Google Vertex
  • OpenAI V1 Compliant LLM – Use this option to connect to any LLM provider whose API follows the OpenAI V1 standard. For details, refer to the OpenAI V1 Compliant LLM connector documentation; a request-shape sketch follows this list.
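
As an illustration of what "OpenAI V1 compliant" means in practice, the sketch below sends a minimal chat completions request in the OpenAI V1 format using Python. The base URL, API key, and model name are placeholders for your own provider's values, and the exact fields your provider requires may differ; the OpenAI V1 Compliant LLM connector documentation remains the authoritative reference.

```python
import requests

# Placeholder values -- substitute your provider's endpoint, key, and model name.
BASE_URL = "https://llm.example.com/v1"  # the provider must expose /chat/completions under this root
API_KEY = "YOUR_API_KEY"

# Minimal OpenAI V1-style chat completions request.
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "my-model",  # the identifier your provider expects
        "messages": [{"role": "user", "content": "Reply with OK."}],
        "max_tokens": 16,
    },
    timeout=30,
)
response.raise_for_status()

# An OpenAI V1-compliant provider returns the generated text under choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])
```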

To set up a new connection, follow these steps:

  1. Create a connection in Integration Service to your provider of choice. For connector-specific authentication details, see the Integration Service user guide.
    note

    To prevent unauthorized access, create the Integration Service connection in a private, non-shared folder.

  2. Navigate to Admin > AI Trust Layer > LLM Configurations.
  3. Select the tenant and folder where you want to configure the connection.
  4. Select Add configuration.
  5. Select the Product and Feature.
  6. Choose how you want to configure:
    • Add your own LLM – Add an additional LLM configuration managed entirely by you.

    Depending on the selected product, only one option may be available.

  7. Set up the connection for Add your own LLM:

  1. Folder – Select the folder where the configuration will be stored.
  2. Displayed (LLM) name – Provide an alias for your LLM.
  3. Connector – Select your connector (e.g., Microsoft Azure OpenAI).
  4. Connection – Choose your Integration Service connection.
  5. LLM identifier – Enter the identifier for your model.
    • For Azure-hosted models, enter the model identifier.
    • For AWS Bedrock cross-region inference, enter the inference profile ID instead of the model ID (see the lookup sketch after these steps).
  8. Select Test configuration to check that the model is reachable and meets the required criteria.

    UiPath can confirm reachability; verifying the exact model used is your responsibility.

  9. If the test is successful, select Save to activate the connection.
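
If you plan to use AWS Bedrock cross-region inference, the sketch below shows one way to look up the inference profile ID referenced in the LLM identifier step. It is a minimal example, assuming AWS credentials are already configured and a boto3 release recent enough to expose the Bedrock list_inference_profiles operation; the region shown is illustrative.

```python
import boto3

# Assumes configured AWS credentials and a boto3 version that includes the
# Bedrock control-plane operation list_inference_profiles.
bedrock = boto3.client("bedrock", region_name="us-east-1")  # region is illustrative

# Cross-region inference profile IDs prefix the model ID with a region group
# (for example "us." or "eu."). Enter this profile ID, not the bare model ID,
# in the LLM identifier field.
response = bedrock.list_inference_profiles()
for profile in response["inferenceProfileSummaries"]:
    print(profile["inferenceProfileId"], "-", profile["inferenceProfileName"])
```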

Managing existing LLM connections

You can perform the following actions on your existing connections:

  • Check status – Verify the status of your Integration Service connection. This action ensures that the connection is active and functioning correctly.
  • Edit – Modify any parameters of your existing connection.
  • Disable – Temporarily suspend the connection. When disabled, the connection remains visible in your list but doesn't route any calls. You can re-enable the connection when needed.
  • Delete – Permanently remove the connection from your system. This action disables the connection and removes it from your list.

Configuring LLMs for your product

Each product supports specific large language models (LLMs) and versions. Use the table below to identify the supported models and versions for your product.

You can connect your own LLM using one of the following providers: Amazon Web Services, Google Vertex, Microsoft Azure OpenAI, or OpenAI V1 Compliant. Follow the steps outlined in the previous section to create a connection.

| Product | Feature | LLM provider | Version |
| --- | --- | --- | --- |
| Autopilot for everyone | Chat | Anthropic | anthropic.claude-3.5-sonnet-20240620-v1:0, anthropic.claude-3.7-sonnet-20250219-v1:0 |
| Autopilot for everyone | Chat | OpenAI | gpt-4o-mini-2024-07-18 |
| Coded agents | Call LLM | Anthropic | anthropic.claude-3.5-sonnet-20240620-v1:0, anthropic.claude-3.5-sonnet-20241022-v2:0, anthropic.claude-3.7-sonnet-20250219-v1:0, anthropic.claude-3-haiku-20240307-v1:0 |
| Coded agents | Call LLM | Gemini | gemini-1.5-pro-001, gemini-2.0-flash-001 |
| Coded agents | Call LLM | OpenAI | gpt-4o-2024-05-13, gpt-4o-2024-08-06, gpt-4o-2024-11-20, gpt-4o-mini-2024-07-18, o3-mini-2025-01-31 |
| Context Grounding | Advanced Extractions | Gemini | gemini-2.5-flash |
| Context Grounding | Embeddings | OpenAI | text-embedding-3-large |
| Context Grounding | Embeddings | Gemini | gemini-embedding-001 |
| GenAI Activities | Build, Test & Deploy | Anthropic | anthropic.claude-3.5-sonnet-20241022-v2:0, anthropic.claude-3.7-sonnet-20250219-v1:0 |
| GenAI Activities | Build, Test & Deploy | Gemini | gemini-2.5-pro, gemini-2.5-flash |
| GenAI Activities | Build, Test & Deploy | OpenAI | gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07 |
| Test Manager | Autopilot (generate and manage tests) | Anthropic | anthropic.claude-3.7-sonnet-20250219-v1:0 |
| Test Manager | Autopilot (generate and manage tests) | Gemini | gemini-2.5-pro, gemini-2.5-flash |
| Test Manager | Autopilot (generate and manage tests) | OpenAI | gpt-4o-2024-11-20 |