Customer Support Agent Playbook
When a new support ticket arrives, this agent retrieves the customer record, classifies the issue type and urgency, searches the knowledge base for relevant articles, and drafts a response — all before a human sees the ticket. Low-confidence or high-risk responses (billing disputes, legal threats, account changes) are held for mandatory human review via an approval step. High-confidence, low-risk responses (FAQ answers, status updates) can notify the customer automatically. The agent does not guarantee correct responses — it generates a draft that reduces handle time and surfaces relevant context for the support agent.
What this agent does
The customer support agent handles the intake and initial response cycle for inbound support tickets. Its job is not to replace your support team. Its job is to reduce the manual work on the first-touch: pulling customer context, matching the issue to known solutions, and producing a draft response that a human can send or discard in under a minute.
The workflow begins when a ticket arrives. A webhook trigger fires from Zendesk the moment a new ticket is created, or an event trigger fires from the bus on the `support/ticket-created` topic if your platform routes events internally. The agent then:
- Retrieves the customer record and ticket history from Zendesk via a `data` step
- Classifies the issue type and urgency using a reasoning-category `agent` step
- Searches the knowledge base for matching articles using a knowledge retrieval `skill` step
- Drafts a response using a second `agent` step
- Routes based on urgency: low urgency with confidence above 0.85 goes to a `notify` step (auto-notify the customer with the draft); high urgency or low confidence goes to an `approval` step
- Logs the outcome, classification, and draft to a `storage` step for audit and quality review
The agent stores customer tier and last interaction date in persistent agent memory, so context accumulates across tickets from the same customer.
Best-fit use cases
This playbook is well-suited when:
- Your support team handles high ticket volume with repetitive issue types (password resets, billing questions, shipping status, known outages)
- First-response time is a key SLA metric and your team spends significant time on initial triage before a human touches the ticket
- Your knowledge base is well-maintained and up to date — the agent's response quality is directly tied to the quality of the KB articles it retrieves
- You have a clear escalation policy that maps issue types to risk levels — this is what drives the approval gate logic
When not to use this agent
Do not deploy this agent when:
- Your knowledge base is sparse, outdated, or unstructured — the agent will retrieve poor-quality content and generate unhelpful drafts
- All ticket types are novel, complex, or require deep account investigation — the agent adds overhead without saving time if nearly all tickets require full human review
- Your team lacks a defined escalation policy — the approval routing logic requires a mapping from issue type to risk level, and that policy must exist before you build it into the workflow
- You are handling regulated communications (healthcare, financial advice, legal) that require licensed professionals — this agent is not a substitute for professional review, even with an approval gate
Required connections and data sources
| Connection | Purpose | Auth method |
|---|---|---|
| Zendesk | Ticket trigger, ticket data retrieval, response action | API Key |
| Slack | Notify support manager on high-risk approvals; SLA breach alerts | OAuth 2.0 |
| Snowflake (optional) | Customer tier, health score, account value for enriched context | Service Account |
Configure connections in Workspace → Connections before building the workflow. The Zendesk connection requires an API key with read and write access to tickets and users. See /docs/connections/index for setup instructions.
For the webhook trigger, configure a Zendesk webhook pointing to your ProvenanceOne workspace trigger URL. The event payload should include the ticket ID, subject, description, requester email, and priority field.
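Incomplete payloads are a common source of silent trigger failures, so it is worth sanity-checking the fields above at intake. A minimal validation sketch; the snake_case field names (`ticket_id`, `requester_email`, and so on) are illustrative assumptions, not the exact Zendesk payload schema:

```python
REQUIRED_FIELDS = ("ticket_id", "subject", "description", "requester_email", "priority")

def validate_payload(payload: dict) -> list[str]:
    """Return the list of required fields missing or empty in a webhook payload."""
    return [f for f in REQUIRED_FIELDS if not payload.get(f)]

# Example: a payload that arrived without its priority field
missing = validate_payload({
    "ticket_id": "12345",
    "subject": "Cannot log in",
    "description": "Password reset email never arrived",
    "requester_email": "jane@example.com",
})
```

Routing any payload with missing fields to a dead-letter queue (rather than into the classification step) keeps malformed events from producing misleading drafts.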
Recommended agent instructions
Create two agent configurations for this workflow: one for classification and one for response drafting.
Classification agent (category: reasoning, trust: medium):
```text
You classify inbound customer support tickets.
Given a ticket subject, description, and customer record, return a JSON object with:
- issue_type: one of [billing, account, technical, shipping, legal, general]
- urgency: one of [low, medium, high]
- confidence: a number between 0 and 1 representing your confidence in the classification
- summary: a one-sentence summary of the customer's issue
Do not invent details. Base your classification only on the ticket content and customer record provided.
Do not classify as "low" urgency if the customer is an Enterprise tier account or has referenced cancellation, legal action, or a regulatory complaint.
```
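Because downstream routing parses this JSON, validate the classification output before branching on it. A minimal sketch; the enum values come from the prompt above, but the function name and schema-check approach are illustrative assumptions:

```python
ISSUE_TYPES = {"billing", "account", "technical", "shipping", "legal", "general"}
URGENCIES = {"low", "medium", "high"}

def validate_classification(c: dict) -> bool:
    """Check that a classification object matches the schema the prompt requires."""
    return (
        c.get("issue_type") in ISSUE_TYPES
        and c.get("urgency") in URGENCIES
        and isinstance(c.get("confidence"), (int, float))
        and 0.0 <= c["confidence"] <= 1.0
        and isinstance(c.get("summary"), str)
        and bool(c["summary"].strip())
    )
```

Route any output that fails validation straight to the approval step; do not retry silently, since repeated malformed output usually signals a prompt or model problem worth investigating.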
Response drafting agent (category: reasoning, trust: medium):
```text
You draft initial responses to customer support tickets.
Given a ticket, customer record, and a set of knowledge base articles, draft a response that:
- Addresses the customer's specific issue
- References only information from the provided knowledge base articles
- Does not make commitments, promises, or policy exceptions
- Does not reference credits, refunds, or compensation unless explicitly told to
- Ends with a clear next step for the customer
If no relevant knowledge base article matches the issue, say so clearly and do not invent an answer.
Cite the article title you used in your response.
```
Keep system prompts scoped. A broad prompt like "be a helpful support assistant" produces unpredictable outputs. A prompt that enumerates the exact output format and constraints produces outputs that downstream steps can parse and route reliably.
Required skills and tools
| Skill | Category | Purpose |
|---|---|---|
| Knowledge retrieval | data | Vector search against your KB datastore; returns top-N articles by semantic similarity to the ticket description |
| Zendesk ticket reader | integration | Fetches full ticket record plus last 5 interactions from Zendesk API |
| Zendesk response writer | integration | Posts draft response as internal note or sends to customer depending on approval outcome |
| Snowflake customer lookup | data (optional) | Retrieves customer tier, health score, and account value from data warehouse |
Skills are packaged serverless functions that execute in a sandbox. Attach them to the relevant agent step under Tools → Skills. Do not attach skills to agents that do not need them. See /docs/skills/index.
If your knowledge base is hosted in an external system (Confluence, Notion, a custom CMS), consider using an MCP server that provides read access to that system, routed through the MCP Gateway. See /docs/mcp-servers/index.
Recommended workflow design
Build the workflow in Workflows → New Workflow. Use the following step sequence:
1. `trigger` step — type `webhook` (Zendesk `ticket.created` event) or `event` (bus topic `support/ticket-created`). Configure the trigger to pass the full ticket payload to step 2.
2. `data` step — Zendesk ticket reader skill. Input: ticket ID from the trigger payload. Output: full ticket record, customer email, ticket history. If Snowflake is connected, run a parallel `data` step to retrieve customer tier and health score.
3. `agent` step — classification agent. Input: ticket subject, description, customer record, customer tier. Output: `issue_type`, `urgency`, `confidence`, `summary`. Store `customer_tier` and last interaction date in agent memory.
4. `skill` step — knowledge retrieval. Input: ticket description and `issue_type` from step 3. Output: top 3 KB articles with titles, content, and relevance scores.
5. `agent` step — response drafting agent. Input: ticket, classification output, KB articles. Output: draft response text with the cited article title.
6. `logic` step — routing. Branch on `urgency` and `confidence`:
   - If `urgency` is `low` AND `confidence` >= 0.85: route to the notify step
   - If `urgency` is `medium` or `high`, OR `confidence` < 0.7: route to the approval step
   - If `issue_type` is `billing`, `legal`, or `account`: always route to the approval step regardless of confidence
7. `notify` step — auto-send the draft response to the customer via the Zendesk response writer skill. Only reached for high-confidence, low-risk responses.
8. `approval` step — see the approval configuration in the next section. Reached for all high-risk, low-confidence, or sensitive issue types.
9. `action` step — post the approved (and optionally edited) response via the Zendesk response writer skill.
10. `storage` step — log the classification, draft, routing decision, and outcome to your audit datastore.
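The routing branch can be expressed directly in code, which is a useful way to test it before wiring it into the logic step. A sketch under one explicit assumption: tickets in the gap the rules leave unspecified (low urgency with confidence between 0.7 and 0.85) are routed to approval as the conservative default.

```python
SENSITIVE_TYPES = {"billing", "legal", "account"}

def route(issue_type: str, urgency: str, confidence: float) -> str:
    """Return 'notify' or 'approval' per the logic-step branch rules."""
    if issue_type in SENSITIVE_TYPES:
        return "approval"   # sensitive types are always reviewed, regardless of confidence
    if urgency == "low" and confidence >= 0.85:
        return "notify"     # high-confidence, low-risk: auto-send the draft
    return "approval"       # medium/high urgency, low confidence, or the unspecified gap
```

Keeping the function pure (no side effects, plain inputs and outputs) makes it trivial to run the evaluation checklist's routing tests against it.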
Human approval rules
These actions must always trigger an approval step and must never be auto-sent:
- Refunds, credits, or billing adjustments
- Account closures or suspensions
- Legal commitments or policy exceptions of any kind
- Responses to tickets that reference legal threats or regulatory complaints
- Any PII change initiated by the agent (email address, billing address, password reset)
- Any response where the agent's confidence is below 0.7
These can be handled with a notify step (no approval required) when confidence is at or above 0.85:
- FAQ answers matched to a verified knowledge base article
- Links to knowledge base articles
- Status updates for known, acknowledged incidents
Approval step configuration for high-risk actions:
```yaml
action: Send support response
risk: high
slaMinutes: 60
assignees:
  - [email protected]
  - [email protected]
evidence:
  - label: Customer Tier
    value: "{{customer_tier}}"
    tone: amber
  - label: Issue Type
    value: "{{issue_type}}"
    tone: red
  - label: Agent Confidence
    value: "{{confidence_pct}}"
    tone: amber
  - label: Draft Response
    value: "{{draft_preview}}"
    tone: slate
rationale: "{{agent_rationale}}"
confidence: "{{confidence}}"
```
Configure a secondary assignee so the SLA does not breach when the primary reviewer is unavailable. Set a Slack notification to fire at 80% of the SLA window (48 minutes for a 60-minute SLA) to prompt action. Approval authority requires membership in the platform `approvers` group. See /docs/approvals/index.
Security and permission model
| Role | Permissions |
|---|---|
| `admin` | Configure Zendesk and Slack connections, publish workflow, manage approval assignees |
| `editor` | Create and edit agents, skills, and workflows; view run history |
| `viewer` | View workflow runs and approval status; cannot approve or edit |
| `approvers` (identity service group) | Grant or reject approval requests |
The Zendesk API key is stored in the secrets vault. It is never returned by any API response. The audit event `connection.accessed` is emitted each time the Zendesk connection is used.
Do not store customer PII in agent memory. Agent memory is readable by any editor or admin via the API. Store customer tier and interaction date only — not email addresses, phone numbers, or account credentials.
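An allowlist guard in front of every memory write enforces this rule mechanically instead of relying on prompt discipline. A sketch; the key names mirror the two values this playbook stores, but the function name is illustrative:

```python
ALLOWED_MEMORY_KEYS = {"customer_tier", "last_interaction_date"}

def safe_memory_write(entries: dict) -> dict:
    """Keep only allowlisted keys; drop anything else (including PII) before writing."""
    return {k: v for k, v in entries.items() if k in ALLOWED_MEMORY_KEYS}
```

An allowlist is preferable to a PII-pattern blocklist here: the set of safe keys is small and known, while the set of unsafe values is open-ended.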
Audit events emitted by this workflow: `run.started`, `run.completed`, `run.failed`, `approval.granted`, `approval.rejected`, `approval.sla_breach`, `agent.memory_set`, `connection.accessed`.
Evaluation checklist
Before deploying to production, validate all of the following with a test set of 20 or more real tickets:
- Classification agent assigns `issue_type` correctly in at least 90% of test cases
- Classification agent assigns `urgency` correctly in at least 90% of test cases; verify no high-urgency tickets are classified as low
- Draft responses cite knowledge base articles and do not contain invented policy details or commitments
- All `billing`, `legal`, and `account` issue types route to the approval step — test 100% of test cases in each category
- Responses with `confidence` < 0.7 always route to approval — never to the notify step
- Agent never posts a response to Zendesk without human review when the approval gate should have fired
- Run debugger shows the full tool call chain for each ticket (classification → KB retrieval → draft)
- Agent memory correctly stores `customer_tier` and `last_interaction_date`, and those values are visible in subsequent runs for the same customer
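The 90% accuracy gates above can be scored with a small harness over your labeled test set. A sketch, assuming predictions and labels are collected as (predicted, expected) pairs; the function names are illustrative:

```python
def accuracy(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (predicted, expected) pairs that match."""
    if not pairs:
        return 0.0
    return sum(p == e for p, e in pairs) / len(pairs)

def passes_gate(issue_type_pairs, urgency_pairs, threshold=0.90) -> bool:
    """True when both classification fields meet the 90% accuracy bar."""
    return accuracy(issue_type_pairs) >= threshold and accuracy(urgency_pairs) >= threshold
```

Run the harness on the same 20+ real tickets each time you change a system prompt, so regressions show up before deployment rather than in production.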
Rollout plan
Phase 1 (weeks 1–2): Shadow mode
Run the workflow alongside your current process. The agent classifies tickets and drafts responses, but no response is sent and no approval is requested. Review agent outputs daily. Compare classifications and drafts against what your team actually did.
Phase 2 (weeks 3–4): Approval-only mode
Enable the approval step for all responses. Every draft goes to a reviewer before sending. This surfaces the full range of agent outputs under real conditions without auto-sending anything. Review approval rejection reasons to identify system prompt gaps.
Phase 3 (month 2+): Enable auto-notify for low-risk responses
Once the approval rejection rate for high-confidence, low-risk responses drops below 5%, enable the notify path for those cases. Keep approval gates for all other routes. Track the rejection rate and re-review if it rises.
Common failure modes
Agent hallucinates policy details not in the knowledge base. The agent invents an answer when no KB article matches the ticket. Mitigation: Scope the knowledge retrieval skill to verified KB articles only. Add an explicit instruction in the system prompt: "If no relevant article is found, do not answer the question. State that you could not find a matching article." Add a confidence threshold check — if the KB retrieval step returns no results above the relevance threshold, route directly to approval.
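The retrieval-threshold check described above is a one-function guard between the KB retrieval step and the drafting step. A sketch, assuming each retrieved article carries a relevance `score` field; the threshold value is a placeholder to calibrate against your own KB:

```python
RELEVANCE_THRESHOLD = 0.6  # assumed tuning value; calibrate on your knowledge base

def kb_gate(articles: list[dict]) -> str:
    """Route to 'draft' only when at least one article clears the relevance bar."""
    if any(a.get("score", 0.0) >= RELEVANCE_THRESHOLD for a in articles):
        return "draft"
    return "approval"  # nothing relevant found: never let the agent improvise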
Agent classifies urgency as low when the customer is at risk of churning.
Without customer health data, the agent cannot see churn signals. Mitigation: Add a Snowflake data step to retrieve customer health score and account value. Add a logic rule: if health score is below threshold or account value is above a defined amount, override urgency to high regardless of the agent's classification.
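The override rule in this mitigation is a small pure function in the logic step. A sketch; both threshold values are placeholders to replace with your own definitions of at-risk and high-value:

```python
HEALTH_FLOOR = 50      # assumed health-score threshold for churn risk
HIGH_VALUE = 100_000   # assumed annual account value threshold, in dollars

def effective_urgency(classified: str, health_score: float, account_value: float) -> str:
    """Escalate urgency to 'high' for at-risk or high-value accounts."""
    if health_score < HEALTH_FLOOR or account_value > HIGH_VALUE:
        return "high"
    return classified
```

Applying the override after classification, rather than feeding the health score into the prompt, keeps the rule deterministic and auditable.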
Approval SLA breaches when the support manager is unavailable.
A single assignee becomes a bottleneck. Mitigation: Configure at least two assignees on the approval step. Set the Slack notification to fire at 80% of SLA time. Consider setting slaMinutes to 120 for after-hours tickets and 60 for business hours, using a logic step to set the SLA value based on time of day.
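The time-of-day SLA suggestion can be computed in the logic step that precedes the approval. A sketch, assuming business hours of 9:00–17:00 local time (the window is an assumption; use your own support hours):

```python
from datetime import datetime

def sla_minutes(now: datetime) -> int:
    """60-minute SLA during business hours, 120 after hours (assumed 9:00-17:00)."""
    return 60 if 9 <= now.hour < 17 else 120

def alert_minutes(sla: int) -> int:
    """Fire the Slack reminder at 80% of the SLA window."""
    return int(sla * 0.8)
```

If your team spans time zones, compute `now` in the assignee's local zone rather than the server's, or the after-hours SLA will fire at the wrong times.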
ROI assumptions
The following inputs are illustrative. Replace every value with your actual data before presenting to stakeholders.
| Input | Assumed value | Notes |
|---|---|---|
| Tickets per month | 2,000 | Replace with actual volume |
| Average handle time per ticket (current) | 12 min | Measure in your ticketing system |
| Minutes saved per ticket with agent | 6 min | Agent drafts response; human edits and sends |
| Human review time per ticket | 3 min | Estimated for reviewing and sending agent draft |
| Loaded hourly cost of support staff | $45 | Include benefits and overhead |
| Escalation rate without agent | 30% | Tickets escalated to L2 |
| Estimated escalation rate with agent | 15% | Needs monitoring after 60-day pilot |
At these inputs: 2,000 tickets × 6 min saved × ($45/60 per minute) = $9,000/month in handle-time reduction. The escalation rate reduction adds further leverage but requires 60 days of post-deployment data to quantify reliably.
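The arithmetic above can be reproduced and re-run with your own inputs:

```python
def monthly_savings(tickets: int, minutes_saved: float, hourly_cost: float) -> float:
    """Handle-time savings per month in dollars: volume x minutes x per-minute cost."""
    return tickets * minutes_saved * (hourly_cost / 60)

# The illustrative inputs from the table above
savings = monthly_savings(tickets=2_000, minutes_saved=6, hourly_cost=45)
```

Note that the formula covers handle-time savings only; the escalation-rate effect is excluded, matching the caveat above.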
Use the ROI calculator to run your own numbers: /tools/ai-agent-roi-calculator?use_case=customer-support.
FAQ
Can the agent automatically send responses to customers?
Conditionally. Auto-send via the notify step is recommended only for high-confidence (above 0.85), low-risk responses — specifically FAQ answers and status updates on known issues. Billing disputes, account changes, legal threats, and any response with confidence below 0.7 must go through the approval step. Do not enable auto-send for an issue type before you have reviewed at least 20 real agent drafts for that type.
What happens if the knowledge base article is outdated?
The agent retrieves whatever is in the datastore at the time of the run. Outdated articles produce outdated responses. The agent has no way to know whether an article reflects current policy. Review and update your knowledge base regularly. Consider adding a `last_reviewed` date field to your KB articles and filtering out articles older than a defined threshold in the retrieval skill.
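The freshness filter can live inside the retrieval skill, before results reach the drafting agent. A sketch, assuming ISO-format date strings and a 180-day window (both assumptions to tune against your review cadence):

```python
from datetime import date, timedelta

MAX_AGE_DAYS = 180  # assumed freshness window

def fresh_articles(articles: list[dict], today: date) -> list[dict]:
    """Drop articles missing a last_reviewed date or older than the freshness window."""
    cutoff = today - timedelta(days=MAX_AGE_DAYS)
    return [
        a for a in articles
        if "last_reviewed" in a and date.fromisoformat(a["last_reviewed"]) >= cutoff
    ]
```

Treating a missing `last_reviewed` field as stale (as this sketch does) is the conservative choice: it pressures the KB owners to backfill review dates rather than letting unreviewed content through.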
How do I prevent the agent from making promises to customers?
Use explicit system prompt instructions: 'Do not make commitments, promises, or policy exceptions. Do not reference credits, refunds, or compensation unless explicitly told to.' Add approval routing for any response that the agent classifies as issue_type billing or legal. System prompt instructions alone are not sufficient — always back them with approval gates for high-risk issue types.
Can the agent handle multiple languages?
This depends on the model you select for the agent steps. Confirm multilingual capability with your provider documentation before deploying to multilingual ticket queues. Test with native-language tickets from your most common languages. Do not assume that a model trained on English data will produce acceptable quality in other languages without explicit testing.
What role do I need to set up this playbook?
You need the `editor` role to create workflows, agents, and skills. You need the `admin` role to configure connections (Zendesk, Slack, Snowflake). Approvers require the identity service `approvers` group in addition to the `editor` or `admin` role.
Related pages
- Approvals — how approval steps work, SLA configuration, evidence tones
- Agents — agent trust levels, system prompt guidance, persistent memory
- Skills — how skills are packaged and attached to agents
- Connections — setting up Zendesk, Slack, and Snowflake connections
- Workflow Runs — using the run debugger to trace step outputs
- Sales Research Agent Playbook — related agentic workflow for the revenue team
- Operations Agent Playbook — internal process automation patterns
- Approval Policy Template — ready-to-use policy structure for the approval gates in this playbook
- System Prompt Template — starting-point system prompt structure for the classification and drafting agents