The OpenHands SDK is a modular framework for building AI agents that interact with code, files, and system commands. Agents can execute bash commands, edit files, browse the web, and more.

Prerequisites

Install the uv package manager (version 0.8.13+):
curl -LsSf https://astral.sh/uv/install.sh | sh

Installation

Step 1: Acquire an LLM API Key

The SDK requires an LLM API key from any LiteLLM-supported provider. See our recommended models for best results.
Once you have a key, set it as an environment variable:
export LLM_API_KEY=your-api-key-here
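The hello-world example below reads this variable (plus two optional ones, LLM_MODEL and LLM_BASE_URL) at startup. A minimal stdlib sketch of that pattern, with the default model string taken from the example:

```python
import os


def load_llm_config() -> dict:
    """Read LLM settings from the environment, as the SDK examples do.

    LLM_API_KEY is required; LLM_MODEL and LLM_BASE_URL are optional.
    """
    api_key = os.getenv("LLM_API_KEY")
    if api_key is None:
        raise RuntimeError("LLM_API_KEY environment variable is not set.")
    return {
        "api_key": api_key,
        # Default model taken from the hello-world example below.
        "model": os.getenv("LLM_MODEL", "openhands/claude-sonnet-4-5-20250929"),
        "base_url": os.getenv("LLM_BASE_URL"),  # None falls back to the provider default
    }
```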

Step 2: Install the SDK

pip install openhands-sdk # Core SDK (openhands.sdk)
pip install openhands-tools  # Built-in tools (openhands.tools)
# Optional: required for sandboxed workspaces in Docker or remote servers
pip install openhands-workspace # Workspace backends (openhands.workspace)
pip install openhands-agent-server # Remote agent server (openhands.agent_server)
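To confirm which of these distributions are present in your environment, you can probe the import paths noted in the comments above. This is a small stdlib helper, not part of the SDK; the optional packages may legitimately be absent:

```python
import importlib.util

# Import paths as listed in the install comments above.
PACKAGES = [
    "openhands.sdk",
    "openhands.tools",
    "openhands.workspace",     # optional
    "openhands.agent_server",  # optional
]


def installed(module: str) -> bool:
    """Return True if `module` is importable in the current environment."""
    try:
        return importlib.util.find_spec(module) is not None
    except ModuleNotFoundError:  # the parent package itself is missing
        return False


for name in PACKAGES:
    print(f"{name}: {'ok' if installed(name) else 'missing'}")
```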
Alternatively, clone the repository to work from source and run the bundled examples:

# Clone the repository
git clone https://github.com/OpenHands/agent-sdk.git
cd agent-sdk

# Install dependencies and setup development environment
make build

Step 3: Run Your First Agent

Here’s a complete example that creates an agent and asks it to perform a simple task:
examples/01_standalone_sdk/01_hello_world.py
import os

from pydantic import SecretStr

from openhands.sdk import LLM, Conversation
from openhands.tools.preset.default import get_default_agent


# Configure LLM and agent
# You can get an API key from https://app.all-hands.dev/settings/api-keys
api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."
model = os.getenv("LLM_MODEL", "openhands/claude-sonnet-4-5-20250929")
base_url = os.getenv("LLM_BASE_URL")
llm = LLM(
    model=model,
    api_key=SecretStr(api_key),
    base_url=base_url,
    usage_id="agent",
)
agent = get_default_agent(llm=llm, cli_mode=True)

# Start a conversation and send some messages
cwd = os.getcwd()
conversation = Conversation(agent=agent, workspace=cwd)

# Send a message and let the agent run
conversation.send_message("Write 3 facts about the current project into FACTS.txt.")
conversation.run()
Run the example:
uv run python examples/01_standalone_sdk/01_hello_world.py
You should see the agent understand your request, explore the project, and create a file with facts about it.
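If you want to verify the run programmatically, a small check like the following (a hypothetical helper, not part of the SDK) confirms the agent's output file exists and contains the requested facts:

```python
from pathlib import Path


def facts_written(path: str = "FACTS.txt", expected: int = 3) -> bool:
    """Return True if the agent's output file exists and has at least
    `expected` non-empty lines (the task asked for 3 facts)."""
    p = Path(path)
    if not p.exists():
        return False
    lines = [ln for ln in p.read_text().splitlines() if ln.strip()]
    return len(lines) >= expected
```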

Core Concepts

Agent: An AI-powered entity that can reason, plan, and execute actions using tools.
Tools: Capabilities like executing bash commands, editing files, or browsing the web.
Workspace: The execution environment where agents operate (local, Docker, or remote).
Conversation: Manages the interaction lifecycle between you and the agent.

Basic Workflow

  1. Configure LLM: Choose model and provide API key
  2. Create Agent: Use preset or custom configuration
  3. Add Tools: Enable capabilities (bash, file editing, etc.)
  4. Start Conversation: Create conversation context
  5. Send Message: Provide task description
  6. Run Agent: Agent executes until task completes or stops
  7. Get Result: Review agent’s output and actions

Try More Examples

The repository includes 24+ examples demonstrating various capabilities:
# Simple hello world
uv run python examples/01_standalone_sdk/01_hello_world.py

# Custom tools
uv run python examples/01_standalone_sdk/02_custom_tools.py

# With microagents
uv run python examples/01_standalone_sdk/03_activate_microagent.py

# See all examples
ls examples/01_standalone_sdk/

Next Steps

Explore Documentation

Build Custom Solutions

Get Help