A ready-to-run example is available (see the Ready-to-run Example section below).

The SDK provides flexible visualization options. You can use the default rich-formatted visualizer, customize it with highlighting patterns, or build completely custom visualizers by subclassing `ConversationVisualizerBase`.
## Visualizer Configuration Options
The `visualizer` parameter in `Conversation` controls how events are displayed.
## Customizing the Default Visualizer
`DefaultConversationVisualizer` uses Rich panels and supports customization through configuration.
## Creating Custom Visualizers
For complete control over visualization, subclass `ConversationVisualizerBase`.
### Key Methods
`__init__(self, name: str | None = None)`
- Initialize your visualizer with optional configuration
- The `name` parameter is available from the base class for agent identification
- Call `super().__init__(name=name)` to initialize the base class
`initialize(self, state: ConversationStateProtocol)`
- Called automatically by `Conversation` after state is created
- Provides access to conversation state and statistics via `self._state`
- Override if you need custom initialization, but call `super().initialize(state)`
`on_event(self, event: Event)` (required)
- Called for each conversation event
- Implement your visualization logic here
- Access conversation stats via the `self.conversation_stats` property
## Ready-to-run Example
This example is available on GitHub: `examples/01_standalone_sdk/26_custom_visualizer.py`
The model name should follow the LiteLLM convention:
`provider/model_name` (e.g., `anthropic/claude-sonnet-4-5-20250929`, `openai/gpt-4o`).
The `LLM_API_KEY` should be the API key for your chosen provider.

## Next Steps

Now that you understand custom visualizers, explore these related topics:

- Events - Learn more about different event types
- Conversation Metrics - Track LLM usage, costs, and performance data
- Send Messages While Running - Interactive conversations with real-time updates
- Pause and Resume - Control agent execution flow with custom logic

