Natural-language data queries, powered by DataHub + LangGraph
Ask a question. Get SQL, results, and a chart — in one turn.
Analytics Agent connects to your data warehouse and answers questions in plain English — writing SQL, running it, and rendering charts automatically. Connect it to DataHub and it gains real knowledge of your tables, columns, and business definitions, so it writes better SQL and can explain what it found in terms your team already uses. DataHub is optional — the agent works without it, just with less context.
Requires Python 3.11+
```bash
# Install and launch — no git clone, no repo, no Docker
pip install datahub-analytics-agent
analytics-agent quickstart

# Or with uv (no virtualenv management):
uvx datahub-analytics-agent quickstart
```

This starts the server at http://localhost:8100 and opens the browser, where a setup wizard walks you through choosing a model and entering your API key. Config and the database are stored in `~/.datahub/analytics-agent/`.
Re-running `analytics-agent quickstart` restarts the server without any prompts. To re-open the setup wizard, use `analytics-agent quickstart --reconfigure`.
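Everything the quickstart creates lives under one directory. The sketch below collects the paths mentioned throughout this README; the exact layout (in particular where config.yaml sits) is an assumption and may vary by version:

```
~/.datahub/analytics-agent/
├── config.yaml      # engines + context platforms (assumed location)
├── data/agent.db    # SQLite database (pip/uvx quickstart path)
├── logs/agent.log   # tailed by `analytics-agent logs`
└── postgres-data/   # Docker quickstart only
```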
Other server commands:

```bash
analytics-agent start    # start from existing config (no wizard)
analytics-agent stop     # stop the running server
analytics-agent status   # show whether the server is running + URL
analytics-agent logs     # tail ~/.datahub/analytics-agent/logs/agent.log
analytics-agent config   # open config dir in $EDITOR or print its path
```

Docker quickstart (local DataHub + sample data)

Requires: Docker, DataHub CLI (`pip install acryl-datahub`), uv, Python 3.11+
```bash
git clone https://github.com/datahub-project/analytics-agent.git
cd analytics-agent
bash quickstart.sh
```

The script starts a local DataHub instance, loads the Olist e-commerce sample dataset and catalog metadata, then builds and launches Analytics Agent at http://localhost:8100. Postgres data is persisted to `~/.datahub/analytics-agent/postgres-data/` so it survives container restarts.
Using AWS Bedrock? Export `LLM_PROVIDER=bedrock` before running the script. The script will verify your AWS credentials and Bedrock access before starting the container, and mount `~/.aws` read-only so boto3 picks up your profiles and SSO cache automatically.
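The variable just needs to be in the script's environment:

```bash
export LLM_PROVIDER=bedrock
bash quickstart.sh
```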
| Feature | Description |
|---|---|
| Context Quality | A live status bar scores how well your DataHub catalog supported the agent (1–5). Hover for the LLM's reasoning. The score improves as you document your data. |
| `/improve-context` | Type `/improve-context` after any conversation to get a numbered list of documentation improvements the agent wishes it had — then approve and publish them to DataHub in one click. |
| Plain-English → SQL → Chart | Ask "top 5 categories by revenue" — the agent writes SQL, runs it, and auto-renders a Vega-Lite chart, all in one turn. |
| Multi-turn memory | Follow-ups like "make it a pie chart" or "filter to Q3" work across turns. |
| Collapsible reasoning | Tool calls and agent thinking are shown but collapsed — visible when you want them, out of the way when you don't. |
| Multiple connections | Add and manage Snowflake, BigQuery, PostgreSQL, MySQL, and other SQLAlchemy-compatible databases from Settings. Each has its own encrypted credentials. |
| Light and dark themes | Four built-in themes with a switcher in the bottom-left corner. |
Development

This section is for hacking on the agent itself. For everyday use, `analytics-agent quickstart` is simpler.
Prerequisites: uv, mise (manages Node + pnpm), Python 3.11+
```bash
git clone https://github.com/datahub-project/analytics-agent.git
cd analytics-agent
mise install   # installs Node 22 + pnpm (reads .mise.toml)
make install   # uv sync + pnpm install
make start     # builds frontend, starts backend at :8100
```

Open http://localhost:8100 — a setup wizard handles the LLM key and connections on first run.
Without `make`:

```bash
uv sync && cd frontend && pnpm install && pnpm build && cd .. && uv run uvicorn analytics_agent.main:app --port 8100
```
Before the first uvicorn start (or after pulling a release that adds migrations), run:

```bash
uv run analytics-agent bootstrap
```

This applies Alembic migrations, seeds engines and context platforms from config.yaml, and writes first-run setting defaults. The command is idempotent — re-running it on an up-to-date database is a no-op.
For Kubernetes deployments, the Helm chart runs `analytics-agent bootstrap` automatically as a pre-install/pre-upgrade hook (see helm/analytics-agent/README.md).
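If you deploy with your own manifests instead of the chart, the hook is a standard Helm-annotated Job. A minimal sketch; the image and secret names are placeholders, not the chart's actual templates:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: analytics-agent-bootstrap
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: bootstrap
          image: analytics-agent:latest     # placeholder: your built image
          command: ["analytics-agent", "bootstrap"]
          envFrom:
            - secretRef:
                name: analytics-agent-env   # DATABASE_URL, LLM keys, etc.
```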
```bash
cp .env.example .env   # then edit as needed
```

```bash
# LLM — pick one provider (or leave blank and use the wizard)
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

# DataHub (optional — can also be added via Settings → Connections)
DATAHUB_GMS_URL=https://your-instance.acryl.io/gms
DATAHUB_GMS_TOKEN=eyJhbGci...
```

| Command | What it does |
|---|---|
| `make start` | Build frontend if stale, start backend |
| `make start-remote` | Start + show DataHub connection status |
| `make nuke` | Wipe the DB and start from scratch |
| `make dev` | Hot-reload backend (use `make dev-full` for frontend HMR too) |
| `make logs` | Tail backend logs |
```bash
# Terminal 1 — backend (dev)
uv run uvicorn analytics_agent.main:app --reload --port 8101

# Terminal 2 — frontend HMR (http://localhost:5173, proxies /api/* to :8101)
cd frontend && pnpm dev
```

```bash
# DataHub Cloud (Acryl)
datahub init --sso --host https://your-instance.acryl.io/gms --token-duration ONE_MONTH

# Self-hosted
datahub init --host http://localhost:8080 --username datahub --password datahub

# Verify the connection
curl -s -X POST http://localhost:8100/api/settings/connections/datahub/test
```

Snowflake
```yaml
# config.yaml
engines:
  - type: snowflake
    name: snowflake
    connection:
      account: "${SNOWFLAKE_ACCOUNT}"
      warehouse: "${SNOWFLAKE_WAREHOUSE}"
      database: "${SNOWFLAKE_DATABASE}"
      schema: "${SNOWFLAKE_SCHEMA}"
      user: "${SNOWFLAKE_USER}"
```

Key-pair auth: generate an RSA key pair, upload the public key to Snowflake, then set `SNOWFLAKE_PRIVATE_KEY` (base64-encoded PEM) in `.env`.
SSO auth: Settings → Connections → Authentication → SSO — opens a browser window for your IdP.
BigQuery

BigQuery authenticates exclusively via a GCP service account. Three credential formats are supported — use whichever fits your deployment.
1. JSON in an environment variable. Export the raw service-account JSON (single line, no newlines):

```bash
export BIGQUERY_CREDENTIALS_JSON='{"type":"service_account","project_id":"my-project",...}'
```

Or add it to .env:

```bash
BIGQUERY_CREDENTIALS_JSON={"type":"service_account","project_id":"my-project",...}
```

Then reference the project in config.yaml:
```yaml
# config.yaml
engines:
  - type: bigquery
    name: prod
    connection:
      project: "${BIGQUERY_PROJECT}"
      dataset: "${BIGQUERY_DATASET}"   # optional default dataset
```

2. Base64-encoded key in config.yaml. Encode your key file once:
```bash
base64 -i my-service-account.json | tr -d '\n'
```

Then paste the output into config.yaml:

```yaml
engines:
  - type: bigquery
    name: prod
    connection:
      project: my-gcp-project
      dataset: my_dataset   # optional
      credentials_base64: "ey..."
```

3. Path to a key file. Useful for local development or when the key file is mounted into the container:
```yaml
engines:
  - type: bigquery
    name: prod
    connection:
      project: my-gcp-project
      credentials_path: /secrets/sa-key.json
```
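When running the Docker image, mount the key read-only at that path. A sketch based on the docker run command shown later; the host path is a placeholder:

```bash
docker run -p 8100:8100 --env-file .env \
  -v "$HOME/keys/sa-key.json:/secrets/sa-key.json:ro" \
  analytics-agent
```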
The service account needs at minimum:

| Role | Purpose |
|---|---|
| `roles/bigquery.dataViewer` | Read tables and schemas |
| `roles/bigquery.jobUser` | Run queries |
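To grant both roles with gcloud (the project and service-account names are placeholders):

```bash
PROJECT=my-gcp-project
SA="analytics-agent@${PROJECT}.iam.gserviceaccount.com"

gcloud projects add-iam-policy-binding "$PROJECT" \
  --member="serviceAccount:${SA}" --role=roles/bigquery.dataViewer
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member="serviceAccount:${SA}" --role=roles/bigquery.jobUser
```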
LLM providers

Set `LLM_PROVIDER` to one of the values below, or use the Settings → Model wizard in the UI.
| Provider | `LLM_PROVIDER` value | Auth |
|---|---|---|
| Anthropic (default) | `anthropic` | `ANTHROPIC_API_KEY` |
| OpenAI | `openai` | `OPENAI_API_KEY` |
| Google Gemini | `google` | `GOOGLE_API_KEY` |
| AWS Bedrock | `bedrock` | AWS credential chain |
| OpenAI-compatible proxy | `openai-compatible` | `OPENAI_COMPAT_BASE_URL` + optional `OPENAI_COMPAT_API_KEY` |
Anthropic
```bash
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
```

Default models: claude-sonnet-4-6 (main), claude-haiku-4-5-20251001 (chart/quality/delight).
OpenAI
```bash
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
```

Default models: gpt-4o (main), gpt-4o-mini (chart/quality/delight).
Google Gemini
```bash
LLM_PROVIDER=google
GOOGLE_API_KEY=AIza...
```

Default models: gemini-2.0-flash (main), gemini-1.5-flash (chart/quality/delight).
AWS Bedrock
Runs Anthropic models via Bedrock. Auth uses the standard AWS credential chain (env vars, `~/.aws/credentials`, IAM role); set `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` (and optionally `AWS_SESSION_TOKEN`) to override. `AWS_REGION` defaults to us-west-2.
```bash
LLM_PROVIDER=bedrock
AWS_REGION=us-west-2
LLM_MODEL=us.anthropic.claude-sonnet-4-5-20250929-v1:0
```

OpenAI-compatible proxy (LiteLLM, vLLM, Ollama, …)
Any proxy that speaks the OpenAI chat completions API (/v1/chat/completions) works — LiteLLM, vLLM, Ollama, Azure OpenAI custom endpoints, etc. No extra dependencies required.
```bash
LLM_PROVIDER=openai-compatible
OPENAI_COMPAT_BASE_URL=https://litellm.myorg.com/v1   # required
OPENAI_COMPAT_API_KEY=sk-...                          # optional — omit if proxy uses network-level auth
LLM_MODEL=llama3.2                                    # model name as the proxy expects it
```

You can also configure the proxy URL and model through Settings → Model in the UI.
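For example, a local Ollama instance serves an OpenAI-compatible API at /v1, so (assuming you have pulled the model):

```bash
LLM_PROVIDER=openai-compatible
OPENAI_COMPAT_BASE_URL=http://localhost:11434/v1
LLM_MODEL=llama3.2   # e.g. after `ollama pull llama3.2`
```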
Model tiers — override individual tiers independently
| Task | Env var | Purpose |
|---|---|---|
| Main analysis agent | `LLM_MODEL` | SQL generation, reasoning |
| Chart generation | `CHART_LLM_MODEL` | Vega-Lite chart spec |
| Context quality scoring | `QUALITY_LLM_MODEL` | 1–5 catalog quality score |
| Titles & greeting | `DELIGHT_LLM_MODEL` | Short text generation |
```bash
LLM_PROVIDER=anthropic
LLM_MODEL=claude-opus-4-7             # upgrade just the agent
QUALITY_LLM_MODEL=claude-sonnet-4-6   # or use a stronger model for quality scoring
```

Database

The `analytics-agent quickstart` path uses SQLite at `~/.datahub/analytics-agent/data/agent.db`. The Docker quickstart uses Postgres, with data persisted to `~/.datahub/analytics-agent/postgres-data/`. For dev/Helm deployments, set `DATABASE_URL` explicitly — see `.env.example` for the Postgres and SQLite formats.
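For illustration, standard SQLAlchemy-style URLs look like the following; the exact driver prefixes the app expects are an assumption, so defer to `.env.example`:

```bash
DATABASE_URL=postgresql://agent:secret@localhost:5432/analytics_agent   # illustrative
DATABASE_URL=sqlite:////absolute/path/to/agent.db                       # illustrative (four slashes = absolute path)
```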
Settings (top-right) manages:
- Connections — test, edit, add, and delete engine connections
- Authentication — per-connection: Password, Private Key, SSO, PAT, OAuth
- Tool toggles — enable/disable individual DataHub or engine tools
- Write-back skills — `publish_analysis` and `save_correction` (enabled by default)
- Prompt — customize the system prompt
- Display — app name and logo
```bash
docker build -f docker/Dockerfile -t analytics-agent .
docker run -p 8100:8100 --env-file .env analytics-agent
```

Without Docker:

```bash
cd frontend && pnpm build && cd ..
uv run uvicorn analytics_agent.main:app --host 0.0.0.0 --port 8100
```

Repo layout:

```
analytics-agent/
├── backend/src/analytics_agent/
│ ├── agent/ # LangGraph ReAct graph, streaming, chart generation, analysis
│ ├── api/ # FastAPI routes: conversations, chat (SSE), settings, oauth
│ ├── context/ # DataHub tool loader (datahub_agent_context)
│ ├── db/ # SQLAlchemy models + Alembic migrations
│ │ └── models.py # Conversation, Message, Integration, Setting
│ ├── engines/ # Pluggable query engines (Snowflake, BigQuery, SQLAlchemy-based)
│ ├── prompts/ # System prompt (system_prompt.md) + chart prompt
│ └── skills/ # Write-back skills: publish-analysis, save-correction,
│ # improve-context (/improve-context slash command)
└── frontend/src/
├── components/Chat/ # MessageList, MessageInput, ContextStatusBar
├── components/Settings/
├── api/ # fetch wrappers for REST + SSE stream reader
└── store/ # Zustand: conversations, display, theme
```

SSE event flow:
```
User message → POST /api/conversations/{id}/messages
  → resolver.py resolves credentials → configured engine
  → LangGraph ReAct agent (DataHub tools + engine tools)
  → astream_events → TEXT / TOOL_CALL / TOOL_RESULT / SQL / CHART / COMPLETE
  → Frontend renders each event type inline
  → Background: context quality scored async, stored on conversation row
```
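To watch the raw event stream, you can post a message with curl — illustrative only: the request body shape here is an assumption, so check the routes under api/ for the actual schema.

```bash
# -N disables buffering so SSE events print as they arrive
curl -N -X POST http://localhost:8100/api/conversations/<conversation-id>/messages \
  -H 'Content-Type: application/json' \
  -d '{"content": "top 5 categories by revenue"}'
```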

