Webhook tools enable your AI persona to call external HTTP endpoints during conversations. This allows integration with any REST API, enabling your persona to:
Check order or shipment status from your e-commerce system
Create support tickets in your helpdesk software
Update CRM records based on conversation context
Fetch real-time data (weather, stock prices, availability)
Trigger workflows in external systems
Send notifications or alerts
Log conversation events
Webhook tools run server-side, keeping your API credentials secure and allowing the LLM to use response data in its answers.
Beta Feature: Tool calling is currently in beta. You may encounter some issues as we continue to improve the feature. Please report any feedback or issues to help us make it better.

Response time requirements: Webhooks should respond quickly to maintain natural conversation flow:
Ideal: Under 1 second
Maximum: 5 seconds
Timeout: 60 seconds (hard limit)
For operations taking longer than 5 seconds, use the split pattern: one webhook to start the process, another to check status later.
Webhooks are perfect for implementing complex agentic workflows. Beyond simple
data retrieval, use them to orchestrate multi-step processes, integrate with
third-party services, and build sophisticated AI-driven automation.
Describes when the LLM should invoke this webhook (1-1024 characters). Be specific about:
What triggers the webhook call
What data the webhook provides
When NOT to use it
Example: "Check order status when customer mentions an order number or asks about delivery, tracking, or shipping. Use the order ID from the conversation."
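Putting this together, a complete tool definition might look like the following sketch (the field names mirror the report example later in this guide; the URL and parameter name are placeholders for your own API):

```json
{
  "name": "check_order_status",
  "description": "Check order status when customer mentions an order number or asks about delivery, tracking, or shipping. Use the order ID from the conversation.",
  "url": "https://api.example.com/orders/status",
  "awaitResponse": true,
  "parameters": {
    "type": "object",
    "properties": {
      "orderId": {
        "type": "string",
        "description": "The order ID mentioned in the conversation, e.g. ORD-12345"
      }
    }
  }
}
```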
The LLM receives the webhook response and incorporates it into a natural language answer:

“Your order ORD-12345 has been shipped! It’s currently in transit with UPS (tracking number 1Z999AA10123456784) and should arrive by March 15th.”
The user receives real-time, accurate information from your systems.
✅ Good error response:

```json
{
  "error": true,
  "message": "Order ORD-12345 not found in our system",
  "suggestion": "Please verify the order number or contact support at support@company.com"
}
```

❌ Bad error response:

```json
{
  "error": "NOT_FOUND"
}
```
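Server-side, a small helper can keep error payloads consistent. This is a sketch (the helper name and types are ours, not part of any SDK); the field names match the good example above:

```typescript
interface ToolErrorResponse {
  error: true;
  message: string;
  suggestion?: string;
}

// Build a structured error the LLM can turn into a helpful reply.
function makeErrorResponse(message: string, suggestion?: string): ToolErrorResponse {
  return { error: true, message, ...(suggestion ? { suggestion } : {}) };
}

const notFound = makeErrorResponse(
  "Order ORD-12345 not found in our system",
  "Please verify the order number or contact support at support@company.com"
);
```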
Handle missing parameters gracefully
Your API should handle missing or invalid parameters:
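A sketch of that validation (the parameter name `orderId` is an assumption for illustration): rather than failing with a bare 400, return an error payload in the same shape as above so the LLM can ask the user for the missing information.

```typescript
type Params = Record<string, unknown>;

// Return the names of missing or invalid parameters; empty means all good.
function validateOrderParams(params: Params): string[] {
  const problems: string[] = [];
  const orderId = params["orderId"];
  if (typeof orderId !== "string" || orderId.trim() === "") {
    problems.push("orderId");
  }
  return problems;
}

// Turn validation problems into a helpful, structured error response.
function handleCheckOrder(params: Params) {
  const missing = validateOrderParams(params);
  if (missing.length > 0) {
    return {
      error: true,
      message: `Missing or invalid parameters: ${missing.join(", ")}`,
      suggestion: "Please ask the customer for their order number."
    };
  }
  return { error: false, status: "shipped" }; // placeholder success path
}
```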
Webhook tool calls emit lifecycle events on the client that you can use for logging or analytics. Use toolType and toolSubtype to filter for webhook-specific events:
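For example, a client-side log filter might look like the sketch below. The `toolType`/`toolSubtype` field values and the event shape here are assumptions for illustration, not the SDK's documented API; adapt them to the actual event payloads you receive:

```typescript
interface ToolCallEvent {
  toolType: string;
  toolSubtype?: string;
  toolName?: string;
}

// Keep only webhook tool events for logging or analytics.
function isWebhookEvent(event: ToolCallEvent): boolean {
  return event.toolType === "webhook";
}

const events: ToolCallEvent[] = [
  { toolType: "webhook", toolSubtype: "started", toolName: "check_order_status" },
  { toolType: "client", toolSubtype: "started" },
  { toolType: "webhook", toolSubtype: "completed", toolName: "check_order_status" },
];

const webhookEvents = events.filter(isWebhookEvent);
```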
Use the system prompt to guide conditional webhook usage:
If the order status is "delayed" or "problem", automatically create a support ticket.
If the user asks about weather and mentions travel, also check flight status.
For backend processes that take longer than 5 seconds (like generating a report or running a batch job), the conversation can feel stalled. To keep the interaction fluid, we recommend a “split pattern” where you create two separate webhooks.

This is a design pattern that you implement on your backend. You are responsible for creating the two endpoints and managing the state of the job. The Anam agent’s role is simply to call the webhooks you provide.
1
Step 1: Create a 'start' webhook
This webhook initiates the long-running process. It should immediately return a unique jobId and an estimated completion time.
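A tool definition for this step might look like the following sketch (the URL and `reportType` parameter are placeholders; the response shape — a `jobId` plus an estimated time — follows the example flow below):

```json
{
  "name": "start_report_generation",
  "description": "Starts generating a report. Returns a jobId and an estimated completion time.",
  "url": "https://api.example.com/reports/start",
  "awaitResponse": true,
  "parameters": {
    "type": "object",
    "properties": {
      "reportType": {
        "type": "string",
        "description": "The kind of report to generate, e.g. 'sales'"
      }
    }
  }
}
```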
2
Step 2: Create a 'check status' webhook
This webhook checks the status of the job using the jobId. It should return the status and, if complete, the final result (e.g., a download URL).
```json
{
  "name": "check_report_status",
  "description": "Checks the status of a report generation job.",
  "url": "https://api.example.com/reports/status",
  "awaitResponse": true,
  "parameters": {
    "type": "object",
    "properties": {
      "jobId": {
        "type": "string",
        "description": "The job ID returned from start_report_generation"
      }
    }
  }
}
```
3
Step 3: Guide the AI with a system prompt
Instruct the AI on how to use this two-step process.
When a user asks for a report, first call `start_report_generation`. Inform the user that the report is being generated and tell them the estimated wait time. After waiting, proactively call `check_report_status` to see if it's ready.
Example Conversation Flow:
User: “Can you generate a sales report for last month?”
AI:
Calls your start_report_generation webhook.
Your backend starts the job, saves its state (e.g., status: 'PENDING') in a database, and immediately returns { "jobId": "job_123", "estimatedTime": "30 seconds" }.
Responds: “I’ve started generating your sales report. It should be ready in about 30 seconds. I can let you know when it’s done.”
(AI continues the conversation on other topics…)
AI (after ~30 seconds):
Proactively calls your check_report_status webhook with jobId: "job_123".
Your backend checks the job’s state. If it’s done, it returns { "status": "COMPLETE", "url": "https://.../report.pdf" }.
Responds: “Great news! That sales report you asked for is ready. You can download it here: [link].”
This pattern works for any long-running operation: data exports, batch
processing, AI model inference, video processing, etc. The key is providing
immediate feedback that the process has started, then checking back later.