Does the agent SDK support parallel tool calling?

Yes, the OpenHands SDK supports parallel tool calling by default. The SDK automatically handles parallel tool calls when the underlying LLM (like Claude or GPT-4) returns multiple tool calls in a single response. This allows agents to execute multiple independent actions before the next LLM call.
When the LLM generates multiple tool calls in parallel, the SDK groups them using a shared llm_response_id:
ActionEvent(llm_response_id="abc123", thought="Let me check...", tool_call=tool1)
ActionEvent(llm_response_id="abc123", thought=[], tool_call=tool2)
# Combined into: Message(role="assistant", content="Let me check...", tool_calls=[tool1, tool2])
Multiple ActionEvents that share an llm_response_id are grouped together and combined into a single assistant message with multiple tool_calls; only the first event's thought/reasoning is included. The relevant pieces of the implementation are the prepare_llm_messages function in utils.py, which groups ActionEvents by llm_response_id when converting events to LLM messages; the agent's step method, where the actions from one LLM response are created with a shared llm_response_id; and the ActionEvent class, which defines the llm_response_id field. For more details, see the Events Architecture for a deep dive into the event system and parallel function calling, the Tool System for how tools work with the agent, and the Agent Architecture for how agents process and execute actions.
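As an illustration of the grouping step, here is a minimal, self-contained sketch. It is not the SDK's actual code: the ToolCall, ActionEvent, and Message shapes and the group_actions_into_messages helper below are simplified assumptions used only to show how events sharing an llm_response_id can be collapsed into one assistant message with multiple tool_calls.

from dataclasses import dataclass, field
from itertools import groupby

# Simplified stand-ins for the SDK's event/message types (assumed shapes).
@dataclass
class ToolCall:
    id: str
    name: str
    arguments: dict

@dataclass
class ActionEvent:
    llm_response_id: str
    thought: str
    tool_call: ToolCall

@dataclass
class Message:
    role: str
    content: str
    tool_calls: list = field(default_factory=list)

def group_actions_into_messages(events: list[ActionEvent]) -> list[Message]:
    """Collapse consecutive ActionEvents with the same llm_response_id
    into one assistant message carrying all of their tool_calls."""
    messages = []
    for _, group in groupby(events, key=lambda e: e.llm_response_id):
        batch = list(group)
        messages.append(Message(
            role="assistant",
            content=batch[0].thought,  # only the first event's thought is kept
            tool_calls=[e.tool_call for e in batch],
        ))
    return messages

# Two parallel tool calls from one LLM response -> one assistant message.
events = [
    ActionEvent("abc123", "Let me check...", ToolCall("1", "read_file", {"path": "a.py"})),
    ActionEvent("abc123", "", ToolCall("2", "read_file", {"path": "b.py"})),
]
msg = group_actions_into_messages(events)[0]
assert len(msg.tool_calls) == 2 and msg.content == "Let me check..."

The key design point this sketch mirrors is that grouping happens on llm_response_id rather than on timing, so however many tool calls the LLM emitted in one response, they are replayed back to it as a single assistant turn.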

More questions?

If you have additional questions: