Get Knowledge Assistant
Do you want access to Knowledge Assistant? Contact your Zingtree Account Manager or our Support Team.
This guide explains every metric across the Knowledge Assistant Analytics dashboards—what it measures, why it matters, and what good performance looks like.
How to Read This Guide
This guide is designed for business stakeholders. Each dashboard section explains:
- What the dashboard is used for
- The key question it answers
- A breakdown of each metric (KPI), including targets and filters
Accessing the Knowledge Assistant Dashboards
If you have configured a Knowledge Assistant and it has been used, you can access the dashboards from Knowledge Assistant > Overview. Knowledge Assistant data can be found in Dashboard 3, described below.
Data Freshness
| Freshness Label | Meaning |
|---|---|
| Real-time | Reflects events within seconds of occurring |
| Updated hourly | Refreshed approximately once per hour |
Filters
| Filter | Description | Default |
|---|---|---|
| Date range | Restrict results to a calendar date range | Last 7 days |
| Intent | Focus on a specific user intent/topic | All intents |
| Assistant | Focus on a specific Knowledge Assistant | All assistants |
| AI Model | Filter by AI provider/model | All models |
Note: Date filters operate at the day level (not time-of-day). Any time selection is ignored.
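Day-level filtering means events are matched on their calendar date only. As a minimal sketch (the function name and dates are illustrative, not part of the product), the comparison behaves like this:

```python
from datetime import datetime, date

def in_date_range(event_time: datetime, start: date, end: date) -> bool:
    """Day-level filter: compare calendar dates only; time-of-day is ignored."""
    return start <= event_time.date() <= end

# An event at 23:59 on the end date is still included:
evt = datetime(2024, 3, 7, 23, 59)
in_date_range(evt, date(2024, 3, 1), date(2024, 3, 7))  # True
```

So selecting "March 1-7" includes every event on March 7, regardless of the time it occurred.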
Downloading the Filtered Data
Knowledge Assistant Analytics can be downloaded as a PDF file using the control at the top right of the dashboards.
Dashboard 3: Knowledge Assistant
Purpose: Measures quality, speed, and reliability of AI-powered self-service answers
Key Question: Is the Knowledge Assistant providing good, fast answers?
| Card | KPI(s) | What It Measures | Target / Benchmark | Data Freshness | Filters |
|---|---|---|---|---|---|
| Assistant Performance Overview | Queries, answer rate, error rate, response time, satisfaction | Overall health and performance of each assistant. | >99% answer; <1% error; >85% satisfaction | Updated hourly | Date, Org, Assistant |
| Daily Assistant Trends | Volume, answered vs failed | Reliability trends over time. | ≥99% answered; failures near zero | Updated hourly | Date, Org, Assistant |
| KA Session Detail | Session ID, turns, duration, status | Detailed session-level troubleshooting and QA view. | N/A | Real-time | Date, Org, Assistant |
| Response Time by Assistant | Avg and p95 time | Speed of answer generation. | p95 < 3,500 ms | Updated hourly | Date, Org, Assistant |
| Model Usage in KA | Volume and latency per model | Which models are used and how they perform. | p95 < 3,500 ms | Real-time | Date, Org, Assistant |
| Top Queries by Assistant | Most frequent queries | Identifies user demand and content gaps. | N/A | Real-time | Date, Org, Assistant |
| Avg Turns per Session | Average turns | Interaction efficiency. | <3 turns | Real-time | Date, Org, Assistant |
| p95 Turns per Session | 95th percentile | Worst-case interaction complexity. | <6 turns | Real-time | Date, Org, Assistant |
| Avg Session Duration | Seconds | Total time to resolution. | <120 seconds | Real-time | Date, Org, Assistant |
| Distinct Assistants Used | Count | How many assistants receive traffic. | All configured assistants active | Real-time | Date, Org |
| AI Error Breakdown | Error type, endpoint, model | Breakdown of failures for troubleshooting. | <1% | Real-time | Date, Org |
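The rate and percentile KPIs above follow standard definitions. A minimal sketch of how they could be computed from raw query data (the function names and sample values are illustrative, not Zingtree's implementation):

```python
import math

def answer_rate(answered: int, total: int) -> float:
    """Share of queries that received an answer, as a percentage."""
    return 100.0 * answered / total if total else 0.0

def p95(times_ms: list) -> float:
    """95th-percentile response time: 95% of queries finish at or
    below this value (nearest-rank method)."""
    ordered = sorted(times_ms)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[max(idx, 0)]

times = [800, 900, 1200, 1500, 4000]  # ms; one slow outlier
answer_rate(98, 100)   # 98.0 -> below the >99% answer-rate target
p95(times)             # 4000 -> one slow query dominates the p95
```

Note how a single slow query pushes p95 well above the average, which is why the speed targets are stated as p95 rather than as a mean.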
Glossary
| Term | Definition |
|---|---|
| Session | A complete interaction (conversation or Knowledge Assistant) |
| Turn | One user message and one AI response |
| Channel | Where AI is used (Conversations, Knowledge Assistant, Shared) |
| Containment | Resolution without a human agent |
| Containment Rate | % resolved by AI (>70% target) |
| KA Containment Lift | Improvement vs baseline without KA |
| Authenticated Users | Logged-in users |
| Anonymous Sessions | Unidentified sessions |
| Total Reach | Authenticated users + anonymous sessions |
| Decision Band | Confidence level (HIGH, CLARIFY, LOW, FALLBACK) |
| Auto-confirm | AI acts without user confirmation |
| Explicit Confirm | User confirmation required before action |
| Answer Rate | % of queries answered |
| KA Error Rate | % of failed Knowledge Assistant queries |
| AI Error Rate | % of failed AI API calls |
| p95 Response Time | Response time at or below which 95% of queries complete; the slowest 5% take longer |
| Week-over-Week (WoW) | Comparison of current vs previous 7 days |
| Recognition Rate | % correctly classified requests |
| Satisfaction Rate | % positive feedback |
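Several glossary terms are simple ratios. As a hedged sketch of the underlying arithmetic (function names and figures are illustrative only):

```python
def containment_rate(ai_resolved: int, total_sessions: int) -> float:
    """Percentage of sessions resolved by AI without a human agent."""
    return 100.0 * ai_resolved / total_sessions if total_sessions else 0.0

def wow_change(current: float, previous: float) -> float:
    """Week-over-week change, as a percentage of the previous 7 days."""
    return 100.0 * (current - previous) / previous if previous else 0.0

containment_rate(150, 200)   # 75.0 -> above the >70% target
wow_change(220.0, 200.0)     # 10.0 -> volume up 10% vs the prior week
```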