In OpenHands Cloud, the LLM settings tab is only available to Pro subscription users.

Overview

The LLM settings allow you to bring your own LLM and API key to use with OpenHands. This can be any model supported by litellm, though OpenHands requires a powerful model to work properly; see our recommended models here. You can also configure some additional LLM settings on this page.
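Under the hood, OpenHands calls your model through litellm's unified completion interface, which is what "supported by litellm" means in practice. A minimal sketch, with a placeholder model name and API key:

    # Minimal sketch: any model litellm can route is called the same way.
    # The model name and API key below are placeholders, not recommendations.
    from litellm import completion

    response = completion(
        model="anthropic/claude-3-5-sonnet-20240620",  # placeholder provider/model
        api_key="sk-...",                              # placeholder API key
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)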

Basic LLM Settings

The most popular providers and models are available in the basic settings. Some providers have been verified to work with OpenHands, such as the OpenHands provider, Anthropic, OpenAI, and Mistral AI.
  1. Choose your preferred provider using the LLM Provider dropdown.
  2. Choose your favorite model using the LLM Model dropdown.
  3. Set the API Key for your chosen provider and model, then click Save Changes.
This sets the LLM for all new conversations. To use the new LLM in an older conversation, restart that conversation first.
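If you want to sanity-check a provider and key before saving, litellm can report which credential a given model expects. This is a convenience check outside of OpenHands itself; the model name below is a placeholder:

    # Optional sanity check (outside OpenHands): ask litellm which credentials
    # the chosen model needs and whether they are present in the environment.
    import litellm

    report = litellm.validate_environment(model="anthropic/claude-3-5-sonnet-20240620")
    print(report["keys_in_environment"])  # True if the needed keys are set
    print(report["missing_keys"])         # e.g. ["ANTHROPIC_API_KEY"]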

Advanced LLM Settings

Toggling the Advanced settings allows you to set a custom model as well as some additional LLM settings. Use this when your preferred provider or model does not appear in the basic settings dropdowns.
  1. Custom Model: Set your custom model with the provider as the prefix (see the sketch after this list). For details on how to specify the custom model, follow the provider-specific docs on litellm. We also have guides for popular providers.
  2. Base URL: If your provider has a specific base URL, specify it here.
  3. API Key: Set the API key for your custom model.
  4. Click Save Changes.
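As an illustration of the prefix convention, here is a sketch of how litellm would reach a hypothetical OpenAI-compatible endpoint; the model name, base URL, and key are all placeholders:

    # Hypothetical custom setup: the "openai/" prefix tells litellm to speak
    # the OpenAI-compatible API to whatever server api_base points at.
    from litellm import completion

    response = completion(
        model="openai/my-custom-model",       # provider prefix + model name (placeholder)
        api_base="http://localhost:8000/v1",  # your provider's base URL (placeholder)
        api_key="sk-placeholder",             # the key for your custom endpoint
        messages=[{"role": "user", "content": "Hello"}],
    )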

Memory Condensation

The memory condenser manages the language model’s context by ensuring only the most important and relevant information is presented (see the sketch after the list below). Keeping the context focused improves latency and reduces token consumption, especially in long-running conversations.
  • Enable memory condensation - Turn this on to activate the condenser.
  • Memory condenser max history size - The condenser summarizes the history once it exceeds this many events.
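The following is a minimal sketch of the idea only, not OpenHands' actual condenser: once the event history exceeds the configured size, older events are collapsed into a single summary so the context stays small. The summarize helper stands in for an LLM call.

    # Sketch of condensation, assuming a summarize() helper backed by an LLM.
    from typing import Callable

    def condense(events: list[str], max_size: int,
                 summarize: Callable[[list[str]], str]) -> list[str]:
        # Collapse older events into one summary once history exceeds max_size.
        if len(events) <= max_size:
            return events
        keep = max(1, max_size // 2)         # retain the most recent events
        summary = summarize(events[:-keep])  # compress everything older
        return ["[summary] " + summary] + events[-keep:]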