Confirmation Policy
Confirmation policies control whether actions require user approval before execution. They provide a simple way to ensure safe agent operation by requiring explicit permission for actions.
Full confirmation example: examples/01_standalone_sdk/04_confirmation_mode_example.py
Basic Confirmation Example
Require user approval before executing agent actions: examples/01_standalone_sdk/04_confirmation_mode_example.py
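The gist of that example, as a minimal sketch. The import paths, the LLM_API_KEY environment variable, and the model id are assumptions taken from the example file and may differ in your SDK version.

```python
import os

from pydantic import SecretStr

from openhands.sdk import LLM, Agent, Conversation  # assumed import path
from openhands.sdk.security.confirmation_policy import AlwaysConfirm  # assumed import path

# Assumed environment variable and model id; use whatever your setup requires.
llm = LLM(
    model="anthropic/claude-sonnet-4-20250514",
    api_key=SecretStr(os.environ["LLM_API_KEY"]),
)
agent = Agent(llm=llm, tools=[])  # register tools here as needed

conversation = Conversation(agent=agent)
conversation.set_confirmation_policy(AlwaysConfirm())  # every action now needs approval

conversation.send_message("List the files in the current directory.")
conversation.run()  # pauses whenever an action is waiting for user confirmation
```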
Running the Example
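Assuming the conventions used across this repo's other examples, export your API key and run the file from the repository root, e.g. export LLM_API_KEY=your-key followed by python examples/01_standalone_sdk/04_confirmation_mode_example.py.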
Setting Confirmation Policy
Set the confirmation policy on your conversation (see the sketch after this list):
- AlwaysConfirm() - Require approval for all actions
- NeverConfirm() - Execute all actions without approval
- ConfirmRisky() - Only require approval for risky actions (requires a security analyzer)
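A sketch of switching between the three policies, assuming a conversation object created as in the earlier example and the same (assumed) import path:

```python
from openhands.sdk.security.confirmation_policy import (  # assumed import path
    AlwaysConfirm,
    ConfirmRisky,
    NeverConfirm,
)

# `conversation` is a Conversation instance, as in the sketch above.
conversation.set_confirmation_policy(AlwaysConfirm())  # approve everything manually
conversation.set_confirmation_policy(NeverConfirm())   # fully autonomous execution
conversation.set_confirmation_policy(ConfirmRisky())   # needs a security analyzer on the agent
```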
Custom Confirmation Handler
Implement your approval logic by checking conversation status:
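One way to sketch this, assuming the AgentExecutionStatus enum and the reject_pending_actions() method used in the confirmation-mode example (both names are assumptions and may differ by SDK version):

```python
from openhands.sdk.conversation.state import AgentExecutionStatus  # assumed import path


def handle_confirmations(conversation) -> None:
    """Prompt the user while the agent is paused waiting for approval."""
    while conversation.state.agent_status == AgentExecutionStatus.WAITING_FOR_CONFIRMATION:
        answer = input("Approve the pending action(s)? [y/N] ").strip().lower()
        if answer == "y":
            conversation.run()  # resuming executes the approved actions
        else:
            conversation.reject_pending_actions("User rejected the action.")


# `conversation` is a Conversation instance, as in the earlier sketches.
conversation.send_message("Delete the build artifacts.")
conversation.run()
handle_confirmations(conversation)
```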
Rejecting Actions
Provide feedback when rejecting to help the agent try a different approach:
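For instance, a rejection message can steer the agent toward an acceptable alternative (again assuming the reject_pending_actions() name from the example file):

```python
# The rejection reason is fed back to the agent, so make it actionable.
conversation.reject_pending_actions(
    "Don't delete files outright; move them into ./trash so I can review them first."
)
conversation.run()  # the agent can now retry with the feedback in context
```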
Security Analyzer
Security analyzers evaluate the risk of agent actions before execution, helping protect against potentially dangerous operations. They analyze each action and assign a security risk level:
- LOW - Safe operations with minimal security impact
- MEDIUM - Moderate security impact, review recommended
- HIGH - Significant security impact, requires confirmation
- UNKNOWN - Risk level could not be determined
The confirmation policy (e.g., ConfirmRisky()) uses this risk level to determine whether user approval is needed before executing an action. This provides an additional layer of safety for autonomous agent operations.
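As a purely hypothetical illustration of the mechanism (the actual thresholds used by ConfirmRisky() live in the SDK), a policy of this shape maps a risk level to an approval decision; the SecurityRisk import path is an assumption:

```python
from openhands.sdk.security.risk import SecurityRisk  # assumed import path


def needs_confirmation(risk: SecurityRisk) -> bool:
    # Hypothetical mapping: pause for anything HIGH or that couldn't be assessed.
    return risk in (SecurityRisk.HIGH, SecurityRisk.UNKNOWN)
```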
LLM Security Analyzer
The LLMSecurityAnalyzer is the default implementation provided in the agent-sdk. It leverages the LLM's understanding of action context to provide lightweight security analysis. The LLM can annotate actions with security risk levels during generation, which the analyzer then uses to make security decisions.
Full security analyzer example: examples/01_standalone_sdk/16_llm_security_analyzer.py
Security Analyzer Example
Automatically analyze agent actions for security risks before execution: examples/01_standalone_sdk/16_llm_security_analyzer.py
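A condensed sketch of that example; the LLMSecurityAnalyzer import path and the security_analyzer parameter on Agent are assumptions based on the example file:

```python
import os

from pydantic import SecretStr

from openhands.sdk import LLM, Agent, Conversation  # assumed import path
from openhands.sdk.security.confirmation_policy import ConfirmRisky  # assumed import path
from openhands.sdk.security.llm_analyzer import LLMSecurityAnalyzer  # assumed import path

llm = LLM(
    model="anthropic/claude-sonnet-4-20250514",  # assumed model id
    api_key=SecretStr(os.environ["LLM_API_KEY"]),
)
# Attach the analyzer so actions carry a risk annotation (assumed parameter name).
agent = Agent(llm=llm, tools=[], security_analyzer=LLMSecurityAnalyzer())

conversation = Conversation(agent=agent)
conversation.set_confirmation_policy(ConfirmRisky())  # pause only on risky actions

conversation.send_message("Tidy up the temp files in this project.")
conversation.run()
```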
Running the Example
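As with the confirmation example, export LLM_API_KEY and run python examples/01_standalone_sdk/16_llm_security_analyzer.py from the repository root (assuming the repo's usual example setup).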
Security Analyzer Configuration
Create an LLM-based security analyzer to review actions before execution (a sketch follows this list):
- Reviews each action before execution
- Flags potentially dangerous operations
- Can be configured with custom security policy
- Uses a separate LLM to avoid conflicts with the main agent
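A minimal sketch of wiring the analyzer in. How you configure a custom policy or a separate analyzer LLM (per the bullets above) depends on your SDK version, so treat the constructor call and parameter names here as assumptions:

```python
from openhands.sdk import Agent  # assumed import path
from openhands.sdk.security.llm_analyzer import LLMSecurityAnalyzer  # assumed import path

analyzer = LLMSecurityAnalyzer()

# `llm` is an LLM instance as in the earlier sketches; `security_analyzer`
# is an assumed parameter name on Agent.
agent = Agent(llm=llm, tools=[], security_analyzer=analyzer)
```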
Custom Security Analyzer Implementation
You can extend the security analyzer functionality by creating your own implementation that inherits from the SecurityAnalyzerBase class. This allows you to implement custom security logic tailored to your specific requirements.
Creating a Custom Analyzer
To create a custom security analyzer, inherit from SecurityAnalyzerBase and implement the security_risk() method:
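A sketch under the assumption that SecurityAnalyzerBase and SecurityRisk live under openhands.sdk.security and that security_risk() receives the pending action; the module paths, the method signature, and the blocklist idea are all illustrative assumptions:

```python
from openhands.sdk.security.analyzer import SecurityAnalyzerBase  # assumed import path
from openhands.sdk.security.risk import SecurityRisk  # assumed import path


class BlocklistAnalyzer(SecurityAnalyzerBase):
    """Hypothetical analyzer that flags actions mentioning dangerous commands."""

    BLOCKLIST = ("rm -rf", "sudo", "mkfs")

    def security_risk(self, action) -> SecurityRisk:
        # Crude string check for illustration; a real analyzer would inspect
        # the action's typed fields rather than its string rendering.
        text = str(action)
        if any(item in text for item in self.BLOCKLIST):
            return SecurityRisk.HIGH
        return SecurityRisk.LOW
```

Pass an instance of your analyzer to the agent in place of LLMSecurityAnalyzer so that ConfirmRisky() consults your logic.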
Next Steps
- Custom Tools - Build secure custom tools
- Custom Secrets - Secure credential management

