Customer Support Agent Playbook

When a new support ticket arrives, this agent retrieves the customer record, classifies the issue type and urgency, searches the knowledge base for relevant articles, and drafts a response — all before a human sees the ticket. Low-confidence or high-risk responses (billing disputes, legal threats, account changes) are held for mandatory human review via an approval step. High-confidence, low-risk responses (FAQ answers, status updates) can notify the customer automatically. The agent does not guarantee correct responses — it generates a draft that reduces handle time and surfaces relevant context for the support agent.


What this agent does

The customer support agent handles the intake and initial response cycle for inbound support tickets. Its job is not to replace your support team. Its job is to reduce the manual first-touch work: pulling customer context, matching the issue to known solutions, and producing a draft response that a human can send or discard in under a minute.

The workflow begins when a ticket arrives. A webhook trigger fires from Zendesk the moment a new ticket is created, or an event trigger fires from the bus on the support/ticket-created topic if your platform routes events internally. The agent then:

  1. Retrieves the customer record and ticket history from Zendesk via a data step
  2. Classifies the issue type and urgency using a reasoning-category agent step
  3. Searches the knowledge base for matching articles using a knowledge retrieval skill step
  4. Drafts a response using a second agent step
  5. Routes based on urgency: low urgency with confidence above 0.85 goes to a notify step (auto-notify customer with draft); high urgency or low confidence goes to an approval step
  6. Logs the outcome, classification, and draft to a storage step for audit and quality review

The agent stores customer tier and last interaction date in persistent agent memory, so context accumulates across tickets from the same customer.


Best-fit use cases

This playbook is well-suited when:

  • Your support team handles high ticket volume with repetitive issue types (password resets, billing questions, shipping status, known outages)
  • First-response time is a key SLA metric and your team spends significant time on initial triage before a human touches the ticket
  • Your knowledge base is well-maintained and up to date — the agent's response quality is directly tied to the quality of the KB articles it retrieves
  • You have a clear escalation policy that maps issue types to risk levels — this is what drives the approval gate logic

When not to use this agent

Do not deploy this agent when:

  • Your knowledge base is sparse, outdated, or unstructured — the agent will retrieve poor-quality content and generate unhelpful drafts
  • All ticket types are novel, complex, or require deep account investigation — the agent adds overhead without saving time if nearly all tickets require full human review
  • Your team lacks a defined escalation policy — the approval routing logic requires a mapping from issue type to risk level, and that policy must exist before you build it into the workflow
  • You are handling regulated communications (healthcare, financial advice, legal) that require licensed professionals — this agent is not a substitute for professional review, even with an approval gate

Required connections and data sources

| Connection | Purpose | Auth method |
| --- | --- | --- |
| Zendesk | Ticket trigger, ticket data retrieval, response action | API Key |
| Slack | Notify support manager on high-risk approvals; SLA breach alerts | OAuth 2.0 |
| Snowflake (optional) | Customer tier, health score, account value for enriched context | Service Account |

Configure connections in Workspace → Connections before building the workflow. The Zendesk connection requires an API key with read and write access to tickets and users. See /docs/connections/index for setup instructions.

For the webhook trigger, configure a Zendesk webhook pointing to your ProvenanceOne workspace trigger URL. The event payload should include the ticket ID, subject, description, requester email, and priority field.
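
The payload fields above can be checked at intake before the workflow proceeds. A minimal sketch follows; the field names are assumptions, so match them to the variables in your Zendesk webhook template:

```python
# Illustrative webhook payload shape. Field names are assumptions --
# align them with your Zendesk webhook template before relying on them.
ticket_payload = {
    "ticket_id": "45871",
    "subject": "Unable to log in after password reset",
    "description": "I reset my password an hour ago and still cannot sign in.",
    "requester_email": "customer@example.com",
    "priority": "normal",
}

REQUIRED_FIELDS = {"ticket_id", "subject", "description", "requester_email", "priority"}

def validate_payload(payload: dict) -> list[str]:
    """Return the required fields missing from an incoming trigger payload."""
    return sorted(REQUIRED_FIELDS - payload.keys())
```

Run this check in the trigger step (or the first data step) and fail fast with a clear error rather than letting a partial payload reach the classification agent.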


Agent configurations

Create two agent configurations for this workflow: one for classification and one for response drafting.

Classification agent (category: reasoning, trust: medium):

You classify inbound customer support tickets.

Given a ticket subject, description, and customer record, return a JSON object with:
- issue_type: one of [billing, account, technical, shipping, legal, general]
- urgency: one of [low, medium, high]
- confidence: a number between 0 and 1 representing your confidence in the classification
- summary: a one-sentence summary of the customer's issue

Do not invent details. Base your classification only on the ticket content and customer record provided.
Do not classify as "low" urgency if the customer is an Enterprise tier account or has referenced cancellation, legal action, or a regulatory complaint.
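
Downstream steps should validate the classifier's JSON before routing on it. The sketch below enforces the prompt's output contract and the Enterprise-tier rule in code as well as in the prompt; the function name and the "medium" override are assumptions, not a platform API:

```python
import json

ISSUE_TYPES = {"billing", "account", "technical", "shipping", "legal", "general"}
URGENCIES = {"low", "medium", "high"}

def parse_classification(raw: str, customer_tier: str) -> dict:
    """Parse and validate the classification agent's JSON output,
    enforcing the prompt's constraints before the result reaches routing."""
    result = json.loads(raw)
    if result["issue_type"] not in ISSUE_TYPES:
        raise ValueError("unknown issue_type: " + str(result["issue_type"]))
    if result["urgency"] not in URGENCIES:
        raise ValueError("unknown urgency: " + str(result["urgency"]))
    if not 0 <= result["confidence"] <= 1:
        raise ValueError("confidence must be between 0 and 1")
    # Back the prompt's Enterprise-tier rule with a hard override
    # (escalating to "medium" here is an assumed policy choice).
    if customer_tier == "Enterprise" and result["urgency"] == "low":
        result["urgency"] = "medium"
    return result
```

Prompt instructions alone are not guarantees; a validation step like this catches malformed or out-of-range outputs before they influence routing.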

Response drafting agent (category: reasoning, trust: medium):

You draft initial responses to customer support tickets.

Given a ticket, customer record, and a set of knowledge base articles, draft a response that:
- Addresses the customer's specific issue
- References only information from the provided knowledge base articles
- Does not make commitments, promises, or policy exceptions
- Does not reference credits, refunds, or compensation unless explicitly told to
- Ends with a clear next step for the customer

If no relevant knowledge base article matches the issue, say so clearly and do not invent an answer.
Cite the article title you used in your response.

Keep system prompts scoped. A broad prompt like "be a helpful support assistant" produces unpredictable outputs. A prompt that enumerates the exact output format and constraints produces outputs that downstream steps can parse and route reliably.


Required skills and tools

| Skill | Category | Purpose |
| --- | --- | --- |
| Knowledge retrieval | data | Vector search against your KB datastore; returns top-N articles by semantic similarity to the ticket description |
| Zendesk ticket reader | integration | Fetches full ticket record plus last 5 interactions from Zendesk API |
| Zendesk response writer | integration | Posts draft response as internal note or sends to customer depending on approval outcome |
| Snowflake customer lookup | data (optional) | Retrieves customer tier, health score, and account value from data warehouse |

Skills are packaged serverless functions that execute in a sandbox. Attach them to the relevant agent step under Tools → Skills. Do not attach skills to agents that do not need them. See /docs/skills/index.

If your knowledge base is hosted in an external system (Confluence, Notion, a custom CMS), consider using an MCP server that provides read access to that system, routed through the MCP Gateway. See /docs/mcp-servers/index.


Workflow structure

Build the workflow in Workflows → New Workflow. Use the following step sequence:

  1. trigger step — type webhook (Zendesk ticket.created event) or event (bus topic support/ticket-created). Configure the trigger to pass the full ticket payload to step 2.

  2. data step — Zendesk ticket reader skill. Input: ticket ID from trigger payload. Output: full ticket record, customer email, ticket history. If Snowflake is connected, run a parallel data step to retrieve customer tier and health score.

  3. agent step — classification — classification agent. Input: ticket subject, description, customer record, customer tier. Output: issue_type, urgency, confidence, summary. Store customer_tier and last interaction date in agent memory.

  4. skill step — knowledge retrieval — Input: ticket description and issue_type from step 3. Output: top 3 KB articles with titles, content, and relevance scores.

  5. agent step — response drafting — response drafting agent. Input: ticket, classification output, KB articles. Output: draft response text with cited article title.

  6. logic step — routing — branch on urgency and confidence:

    • If urgency is low AND confidence >= 0.85: route to notify step
    • If urgency is medium or high, OR confidence < 0.85: route to approval step
    • If issue_type is billing, legal, or account: always route to approval step regardless of urgency or confidence
  7. notify step — auto-send draft response to customer via Zendesk response writer skill. Only reached for high-confidence, low-risk responses.

  8. approval step — see approval configuration in the next section. Reached for all high-risk, low-confidence, or sensitive issue types.

  9. action step — post approved (and optionally edited) response via Zendesk response writer skill.

  10. storage step — log classification, draft, routing decision, and outcome to your audit datastore.
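
The step-6 branch logic above can be sketched as a single routing function. This is a sketch under the thresholds stated in this playbook; adjust the sensitive-type list and cutoff to your own escalation policy:

```python
# Issue types that always require human approval, per the step-6 rules.
SENSITIVE_TYPES = {"billing", "legal", "account"}

def route(issue_type: str, urgency: str, confidence: float) -> str:
    """Return 'notify' or 'approval' for a classified ticket.

    Sensitive issue types always go to approval, regardless of
    urgency or confidence. Only low-urgency, high-confidence
    tickets reach the auto-notify path.
    """
    if issue_type in SENSITIVE_TYPES:
        return "approval"
    if urgency == "low" and confidence >= 0.85:
        return "notify"
    return "approval"
```

Keeping the rules in one function makes them easy to unit-test against the evaluation checklist below: every billing, legal, and account case must return "approval" no matter how confident the classifier is.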


Human approval rules

These actions must always trigger an approval step and must never be auto-sent:

  • Refunds, credits, or billing adjustments
  • Account closures or suspensions
  • Legal commitments or policy exceptions of any kind
  • Responses to tickets that reference legal threats or regulatory complaints
  • Any PII change initiated by the agent (email address, billing address, password reset)
  • Any response where the agent's confidence is below 0.7

These can be handled with a notify step (no approval required) when confidence is at or above 0.85:

  • FAQ answers matched to a verified knowledge base article
  • Links to knowledge base articles
  • Status updates for known, acknowledged incidents

Approval step configuration for high-risk actions:

action: Send support response
risk: high
slaMinutes: 60
assignees:
  - [email protected]
  - [email protected]
evidence:
  - label: Customer Tier
    value: "{{customer_tier}}"
    tone: amber
  - label: Issue Type
    value: "{{issue_type}}"
    tone: red
  - label: Agent Confidence
    value: "{{confidence_pct}}"
    tone: amber
  - label: Draft Response
    value: "{{draft_preview}}"
    tone: slate
rationale: "{{agent_rationale}}"
confidence: "{{confidence}}"

Configure a secondary assignee so the SLA does not breach when the primary reviewer is unavailable. Set a Slack notification to fire at 80% of the SLA window (48 minutes for a 60-minute SLA) to prompt action. Approval authority requires membership in the platform's approvers group. See /docs/approvals/index.
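
The reminder timing is simple arithmetic; a small helper (the function name is illustrative) keeps the 80% rule consistent if you later vary slaMinutes by route or time of day:

```python
def sla_warning_minutes(sla_minutes: int, warn_fraction: float = 0.8) -> int:
    """Minutes after assignment at which the Slack reminder should fire.
    With the default 0.8 fraction, a 60-minute SLA warns at 48 minutes."""
    return int(sla_minutes * warn_fraction)
```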


Security and permission model

| Role | Permissions |
| --- | --- |
| admin | Configure Zendesk and Slack connections, publish workflow, manage approval assignees |
| editor | Create and edit agents, skills, and workflow; view run history |
| viewer | View workflow runs and approval status; cannot approve or edit |
| identity service approvers group | Grant or reject approval requests |

The Zendesk API key is stored in the secrets vault. It is never returned by any API response. The audit event connection.accessed is emitted each time the Zendesk connection is used.

Do not store customer PII in agent memory. Agent memory is readable by any editor or admin via the API. Store customer tier and interaction date only — not email addresses, phone numbers, or account credentials.
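
One way to enforce this rule, rather than rely on convention, is to allow-list the memory keys before any write. A minimal sketch, assuming the two fields named above:

```python
# Only these keys may ever be written to agent memory.
ALLOWED_MEMORY_KEYS = {"customer_tier", "last_interaction_date"}

def sanitize_memory(payload: dict) -> dict:
    """Drop any key not explicitly allow-listed, so PII such as
    email addresses or credentials can never be stored by accident."""
    return {k: v for k, v in payload.items() if k in ALLOWED_MEMORY_KEYS}
```

Call this in the step that performs the memory write; an allow-list fails safe, because new fields are excluded until you deliberately add them.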

Audit events emitted by this workflow: run.started, run.completed, run.failed, approval.granted, approval.rejected, approval.sla_breach, agent.memory_set, connection.accessed.


Evaluation checklist

Before deploying to production, validate all of the following with a test set of 20 or more real tickets:

  • Classification agent assigns issue_type correctly in at least 90% of test cases
  • Classification agent assigns urgency correctly in at least 90% of test cases; verify no high-urgency tickets are classified as low
  • Draft responses cite knowledge base articles and do not contain invented policy details or commitments
  • All billing, legal, and account issue types route to the approval step — test 100% of test cases in each category
  • Responses with confidence < 0.7 always route to approval — never to the notify step
  • Agent never posts a response to Zendesk without human review when the approval gate should have fired
  • Run debugger shows the full tool call chain for each ticket (classification → KB retrieval → draft)
  • Agent memory correctly stores customer_tier and last_interaction_date and those values are visible in subsequent runs for the same customer
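
The accuracy checks in this list reduce to comparing predicted labels against what your team actually assigned. A hypothetical scoring helper for the test set:

```python
def classification_accuracy(results: list[tuple[str, str]]) -> float:
    """Fraction of (predicted, expected) label pairs that match.
    Run once for issue_type and once for urgency; each should be >= 0.9
    on the 20-plus ticket test set before production deployment."""
    if not results:
        return 0.0
    return sum(p == e for p, e in results) / len(results)
```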

Rollout plan

Phase 1 (weeks 1–2): Shadow mode. Run the workflow alongside your current process. The agent classifies tickets and drafts responses, but no response is sent and no approval is requested. Review agent outputs daily. Compare classifications and drafts against what your team actually did.

Phase 2 (weeks 3–4): Approval-only mode. Enable the approval step for all responses. Every draft goes to a reviewer before sending. This surfaces the full range of agent outputs under real conditions without auto-sending anything. Review approval rejection reasons to identify system prompt gaps.

Phase 3 (month 2+): Auto-notify for low-risk responses. Once the approval rejection rate for high-confidence, low-risk responses drops below 5%, enable the notify path for those cases. Keep approval gates for all other routes. Track the rejection rate and re-review if it rises.


Common failure modes

Agent hallucinates policy details not in the knowledge base. The agent invents an answer when no KB article matches the ticket. Mitigation: Scope the knowledge retrieval skill to verified KB articles only. Add an explicit instruction in the system prompt: "If no relevant article is found, do not answer the question. State that you could not find a matching article." Add a confidence threshold check — if the KB retrieval step returns no results above the relevance threshold, route directly to approval.

Agent classifies urgency as low when the customer is at risk of churning. Without customer health data, the agent cannot see churn signals. Mitigation: Add a Snowflake data step to retrieve customer health score and account value. Add a logic rule: if health score is below threshold or account value is above a defined amount, override urgency to high regardless of the agent's classification.

Approval SLA breaches when the support manager is unavailable. A single assignee becomes a bottleneck. Mitigation: Configure at least two assignees on the approval step. Set the Slack notification to fire at 80% of SLA time. Consider setting slaMinutes to 120 for after-hours tickets and 60 for business hours, using a logic step to set the SLA value based on time of day.
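The time-of-day SLA mitigation can be sketched as a small rule for the logic step. The business-hours window here is an assumption; substitute your own coverage hours and time zone handling:

```python
from datetime import time

def sla_for(created_at_local: time,
            business_start: time = time(9, 0),
            business_end: time = time(17, 0)) -> int:
    """slaMinutes for an approval: 60 during (assumed) business hours,
    120 after hours. Expects a ticket-local wall-clock time."""
    if business_start <= created_at_local < business_end:
        return 60
    return 120
```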


ROI assumptions

The following inputs are illustrative. Replace every value with your actual data before presenting to stakeholders.

| Input | Assumed value | Notes |
| --- | --- | --- |
| Tickets per month | 2,000 | Replace with actual volume |
| Average handle time per ticket (current) | 12 min | Measure in your ticketing system |
| Minutes saved per ticket with agent | 6 min | Agent drafts response; human edits and sends |
| Human review time per ticket | 3 min | Estimated for reviewing and sending agent draft |
| Loaded hourly cost of support staff | $45 | Include benefits and overhead |
| Escalation rate without agent | 30% | Tickets escalated to L2 |
| Estimated escalation rate with agent | 15% | Needs monitoring after 60-day pilot |
At these inputs: 2,000 tickets x 6 min saved x ($45/60) = $9,000/month in handle time reduction. The escalation rate reduction adds further leverage but requires 60 days of post-deployment data to quantify reliably.
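
The handle-time arithmetic above is straightforward to reproduce with your own numbers:

```python
def monthly_savings(tickets: int, minutes_saved: float, hourly_cost: float) -> float:
    """Handle-time savings per month in dollars:
    tickets x minutes saved x (hourly cost / 60)."""
    return tickets * minutes_saved * (hourly_cost / 60)
```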

Use the ROI calculator to run your own numbers: /tools/ai-agent-roi-calculator?use_case=customer-support.


FAQ

Can the agent automatically send responses to customers?

Conditionally. Auto-send via the notify step is recommended only for high-confidence (above 0.85), low-risk responses — specifically FAQ answers and status updates on known issues. Billing disputes, account changes, legal threats, and any response with confidence below 0.7 must go through the approval step. Do not enable auto-send for an issue type before you have reviewed at least 20 real agent drafts for that type.

What happens if the knowledge base article is outdated?

The agent retrieves whatever is in the datastore at the time of the run. Outdated articles produce outdated responses. The agent has no way to know whether an article reflects current policy. Review and update your knowledge base regularly. Consider adding a last_reviewed date field to your KB articles and filtering out articles older than a defined threshold in the retrieval skill.
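
The last_reviewed filter suggested above could be applied in the retrieval skill before articles reach the drafting agent. A sketch, assuming each article record carries a last_reviewed date and a 180-day freshness threshold:

```python
from datetime import date, timedelta
from typing import Optional

def fresh_articles(articles: list[dict], max_age_days: int = 180,
                   today: Optional[date] = None) -> list[dict]:
    """Keep only articles whose last_reviewed date is within the
    freshness threshold; stale articles never reach the drafting agent."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return [a for a in articles if a["last_reviewed"] >= cutoff]
```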

How do I prevent the agent from making promises to customers?

Use explicit system prompt instructions: 'Do not make commitments, promises, or policy exceptions. Do not reference credits, refunds, or compensation unless explicitly told to.' Add approval routing for any response that the agent classifies as issue_type billing or legal. System prompt instructions alone are not sufficient — always back them with approval gates for high-risk issue types.

Can the agent handle multiple languages?

This depends on the model you select for the agent steps. Confirm multilingual capability with your provider documentation before deploying to multilingual ticket queues. Test with native-language tickets from your most common languages. Do not assume that a model trained on English data will produce acceptable quality in other languages without explicit testing.

What role do I need to set up this playbook?

You need the editor role to create workflows, agents, and skills. You need the admin role to configure connections (Zendesk, Slack, Snowflake). Approvers require the identity service approvers group in addition to the editor or admin role.