# WrenAI — Open Context Layer for AI Agents


> 📣 **2026-05-07** — Wren Engine has merged into this repo under [`core/`](./core). The previous `Canner/wren-engine` repo is archived. The previous WrenAI GenBI app is preserved on the [`legacy/v1`](https://github.com/Canner/WrenAI/tree/legacy/v1) branch (tag `v1-final`). [Read the announcement →](https://github.com/Canner/WrenAI/discussions/2205)

---

## Why WrenAI?

AI agents fail on business data not because they can't write SQL — they fail because they don't know what your warehouse means. Overlapping tables, inconsistent naming, metric definitions scattered across dashboards and SQL files: an LLM with raw database access guesses just as badly as a new hire on day one.

WrenAI is the open context layer that fills that gap. You model your business in **[MDL](./core/wren-mdl/)** (Modeling Definition Language) — entities, relationships, calculations, governed access patterns — and any agent (Claude, Cursor, ChatGPT, internal copilots, customer-facing apps) queries through the same layer your analysts already use. A Rust engine powered by [Apache DataFusion](https://datafusion.apache.org/) translates the modeled SQL and runs it against 20+ data sources (PostgreSQL, BigQuery, Snowflake, Spark, etc.).

Use it as a Python SDK, a CLI, a WASM module in the browser, or as building blocks for agent skills.

## Quick start

The fastest path is to let an AI coding agent (Claude Code, Cursor, Aider, etc.) drive the install:

```bash
# Install WrenAI skills into your AI agent
npx skills add Canner/WrenAI --skill '*'
```

Start a new agent session and ask:

> Use the `wren-onboarding` skill to install and set up Wren AI Core.

The `wren-onboarding` skill walks the agent through environment checks, package install, project scaffolding, the first data source connection, and a first query.

Full CLI guide and manual install steps: [`core/wren/README.md`](./core/wren/README.md). Installable extras for each connector are listed there.
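To make the modeling idea concrete, here is a toy MDL-style manifest walked with plain Python. The field names below are illustrative only, not the authoritative schema — see [`core/wren-mdl/`](./core/wren-mdl) for the real JSON Schema:

```python
import json

# Illustrative MDL-style manifest. Field names are a sketch for intuition;
# the authoritative JSON Schema lives in core/wren-mdl/.
manifest = json.loads("""
{
  "catalog": "demo",
  "schema": "public",
  "models": [
    {
      "name": "customers",
      "columns": [
        {"name": "id", "type": "integer"},
        {"name": "name", "type": "varchar"}
      ]
    },
    {
      "name": "orders",
      "columns": [
        {"name": "id", "type": "integer"},
        {"name": "customer_id", "type": "integer"},
        {"name": "amount", "type": "double"}
      ]
    }
  ],
  "relationships": [
    {
      "name": "orders_customer",
      "models": ["orders", "customers"],
      "joinType": "MANY_TO_ONE",
      "condition": "orders.customer_id = customers.id"
    }
  ]
}
""")

# An agent can enumerate the modeled entities and governed join paths
# instead of guessing at raw warehouse tables.
model_names = [m["name"] for m in manifest["models"]]
join_condition = manifest["relationships"][0]["condition"]
print(model_names)     # ['customers', 'orders']
print(join_condition)  # orders.customer_id = customers.id
```

The point of the layer is exactly this inversion: the join path and metric definitions live in the model, so every agent that queries through it shares one source of truth.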
## Supported Data Sources

Wren Engine is built to work across modern data stacks, including warehouses, databases, and file-based sources. Current open source support includes connectors such as:

- Amazon S3
- Apache Spark
- Apache Doris
- Athena
- BigQuery
- ClickHouse
- Databricks
- DuckDB
- Google Cloud Storage
- Local files
- MinIO
- MySQL
- Oracle
- PostgreSQL
- Redshift
- SQL Server
- Snowflake
- Trino

See the connector API docs in the project documentation for the latest connection schemas and capabilities.

## Repository map

| Path | What's there |
|---|---|
| [`core/`](./core) | Rust engine + Python/WASM bindings + CLI. The context layer's core machinery. |
|   [`core/wren-core/`](./core/wren-core) | Rust semantic engine (Cargo workspace). |
|   [`core/wren-core-base/`](./core/wren-core-base) | Manifest types (`Model`, `Column`, `Cube`, `Relationship`, `View`). |
|   [`core/wren-core-py/`](./core/wren-core-py) | PyO3 bindings (PyPI: `wren-core`). |
|   [`core/wren-core-wasm/`](./core/wren-core-wasm) | WebAssembly build for in-browser semantic SQL (npm: `wren-core-wasm`). |
|   [`core/wren/`](./core/wren) | Python SDK + `wren` CLI (PyPI: `wren-engine`). |
|   [`core/wren-mdl/`](./core/wren-mdl) | MDL JSON schema. |
| [`skills/`](./skills) | CLI-based agent skills (`wren-generate-mdl`, `wren-usage`, `wren-dlt-connector`, `wren-onboarding`). |
| [`sdk/`](./sdk) | Framework integrations. [`sdk/wren-langchain/`](./sdk/wren-langchain) (PyPI: `wren-langchain`) is shipped; CrewAI / Pydantic-AI / Goose / LlamaIndex / Mastra are _coming soon_. |
| [`examples/`](./examples) | End-to-end example projects — _coming soon_. |
| [`docs/core/`](./docs/core) | Module documentation. |

## Community

- **Discussions**: [github.com/Canner/WrenAI/discussions](https://github.com/Canner/WrenAI/discussions)
- **Issues**: [github.com/Canner/WrenAI/issues](https://github.com/Canner/WrenAI/issues)
- **Discord**: [discord.gg/canner](https://discord.gg/canner)
- **Docs site**: [docs.getwren.ai](https://docs.getwren.ai)

## License

WrenAI is multi-licensed:

- **`core/**`, `sdk/**`, `skills/**`, `examples/**`, root-level files** — [Apache License 2.0](LICENSE-APACHE-2.0)
- **`docs/**`** — [Creative Commons Attribution 4.0 International (CC BY 4.0)](LICENSE-CC-BY-4.0)

Future modules may be introduced under the [GNU Affero General Public License v3.0](LICENSE-AGPL-3.0); the full text is committed here pre-emptively. See [LICENSE](LICENSE) for the authoritative path-to-license map. Published packages declare their effective license in their package manifest (`Cargo.toml`, `pyproject.toml`, `package.json`).

## Our Contributors

⬆️ Back to Top