Data Insights — User Guide
Audience: analysts and data owners who want conversational analytics. Ask a question in natural language and get KPIs, charts, tables, and narrative insights in one place.
Overview
Data Insights (/data-insights) pairs a specialized Data Insights Agent with a live dashboard that updates as you chat. Unlike generic chat, this view is optimized for metrics, plots, and tabular results you can resize, inspect, and reuse.
Typical outcomes:
- Executive-style KPIs with optional trend arrows
- Charts (bar, line, pie, and any other types the agent and renderer support)
- Data tables for drill-down rows
- Narrative bullets summarizing what matters
- Suggested next charts you can click to generate
Layout: visualization + chat
Left: visualization area
- Empty state: prompts you to start the conversation.
- Loading: “Generating visualization”-style feedback while the first artifacts arrive.
When data exists, sections can include:
| Block | Description |
|---|---|
| KPI row | Cards with title, value, description, icon (and color), optional trend (up/down with value and color). |
| Charts grid | Responsive grid of chart cards. Each card has a title and actions to copy or view the underlying Python (e.g. Plotly) used to build the chart. |
| Data table | Column headers + rows when the agent returns tabular results. |
| Key insights | Text list; special markers may render as dividers between groups. |
| Suggested visualizations | Cards showing title, chart type, dimension; click to request that chart from the agent. |
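The Python behind a chart card typically builds a Plotly figure, and a Plotly figure ultimately reduces to a JSON-serializable dict of `data` traces plus a `layout`. A minimal sketch of what copied chart code can boil down to, with illustrative data rather than real agent output:

```python
# Sketch of the structure behind a copied chart: a Plotly-style figure
# is a plain dict of "data" traces and a "layout". Values are illustrative.
fig = {
    "data": [
        {
            "type": "bar",
            "x": ["Q1", "Q2", "Q3", "Q4"],
            "y": [120, 135, 150, 180],
            "name": "Revenue",
        }
    ],
    "layout": {"title": {"text": "Revenue by quarter"}},
}

print(fig["layout"]["title"]["text"])  # Revenue by quarter
```

Because the figure is just data, you can inspect, diff, or version it like any other artifact before re-rendering it.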
Resizable divider: drag the handle between the visualization and chat panes to give more space to either side.
Right: chat (Data Insights Agent)
The side chat is wired specifically to the Data Insights Agent (data_insights_agent). It supports:
- Model selection from workspace-configured LLMs
- Session id tracking so follow-ups stay in context
- Context builder — each message can include current dashboard state (existing charts, datasets in play) so answers stay grounded
- Streaming toggle and tool-grouping options where exposed
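Conceptually, the context builder attaches the current dashboard state to each outgoing message so the agent's answers stay grounded. A hypothetical sketch of such a payload; field names like `existing_charts` are assumptions for illustration, not the actual wire format:

```python
def build_message(user_text, session_id, model, dashboard_state):
    """Assemble a chat request that carries current dashboard context.
    Hypothetical structure; field names are illustrative only."""
    return {
        "agent": "data_insights_agent",
        "model": model,
        "session_id": session_id,  # keeps follow-ups in the same context
        "message": user_text,
        "context": {
            "existing_charts": [c.get("title") for c in dashboard_state.get("charts", [])],
            "datasets": dashboard_state.get("datasets", []),
        },
    }

msg = build_message(
    "Now split that by region",
    session_id="abc-123",
    model="workspace-default",
    dashboard_state={"charts": [{"title": "Revenue by month"}], "datasets": ["sales"]},
)
```

The point of the design: the agent never has to guess what is already on the board, so "split that by region" can resolve "that" to an existing chart.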
Events flow from chat to the dashboard (conceptually): new visualization, KPI, table, insights text, and suggested charts. A session change or new session may clear or refresh the board so unrelated analyses do not mix.
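That flow can be pictured as a small dispatcher that maps each event type to a board update. Event names and payload shapes below are assumptions for illustration, not the product's actual event API:

```python
def empty_board():
    """A fresh dashboard state (hypothetical shape)."""
    return {"kpis": [], "charts": [], "table": None, "insights": [], "suggestions": []}

# One handler per conceptual event type arriving from the chat.
HANDLERS = {
    "kpi": lambda b, p: b["kpis"].append(p),
    "visualization": lambda b, p: b["charts"].append(p),
    "table": lambda b, p: b.update(table=p),
    "insights": lambda b, p: b["insights"].extend(p),
    "suggested_charts": lambda b, p: b["suggestions"].extend(p),
}

def apply_event(board, event):
    # A session change clears the board so unrelated analyses don't mix.
    if event["type"] in ("session_changed", "new_session"):
        return empty_board()
    HANDLERS[event["type"]](board, event.get("payload"))
    return board

board = empty_board()
board = apply_event(board, {"type": "kpi", "payload": {"title": "Revenue", "value": "1.2M"}})
board = apply_event(board, {"type": "session_changed"})  # board is reset
```

This is why stale panels after a new session are expected behavior rather than a bug (see Troubleshooting below).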
Core workflows
| Goal | Example prompt |
|---|---|
| Trend | “Show me revenue by month for the last four quarters.” |
| Ranking | “Top 10 products by units sold last month.” |
| Segmentation | “Break down signups by channel and region.” |
| Quality | “Highlight any weeks where conversion dropped more than 20% vs prior week.” |
| Follow-on | After a chart: “Now split that by region” or “Show the SQL you used so I can keep it as documentation.” |
Code transparency and reuse
Power users can open or copy the Python behind a chart to:
- Re-run in a notebook
- Adapt styling
- Embed in internal reporting
This is intentional transparency: you see how the visualization was produced, not only the picture.
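Because the copied figure is plain Python, adapting it for internal reporting can be a small transformation. The `apply_house_style` helper below is a hypothetical example, not part of the product; the prefix and font choices are illustrative assumptions:

```python
def apply_house_style(fig, title_prefix="Internal | "):
    """Hypothetical helper: restyle a copied figure dict before embedding
    it in internal reporting. Prefix and font are illustrative choices."""
    styled = dict(fig)                       # shallow copy; keep the original intact
    layout = dict(styled.get("layout", {}))
    title = layout.get("title", {}).get("text", "")
    layout["title"] = {"text": title_prefix + title}
    layout["font"] = {"family": "Inter, sans-serif"}
    styled["layout"] = layout
    return styled

styled = apply_house_style({"data": [], "layout": {"title": {"text": "Revenue by quarter"}}})
```

The same pattern works for re-running in a notebook: paste the copied code, then apply your own styling pass before rendering.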
Model selection
Available models come from the same admin-configured pool as other agents (Model Settings). If responses are slow or low quality, try another model or narrow the question (smaller date range, fewer dimensions).
Troubleshooting
| Symptom | Likely cause | What to try |
|---|---|---|
| Empty dashboard after prompt | Agent still computing or returned text only | Wait; ask explicitly for a chart or table |
| Chart fails to render | Invalid spec or browser/WebGL limits | Ask for a simpler chart type; retry |
| Stale panels after new session | Session reset cleared state | Expected behavior; re-run the analysis |
| Wrong database or dataset | Context not attached | Confirm the agent knows which connection and schema to use for your setup (other agents attach DB context explicitly) |
Related
- AI Chat — General agents, execution modes, attachments
- Database Explorer — Direct SQL exploration
- Database Connections — Ensure data sources exist