Bonnard CLI

GitHub: https://github.com/meal-inc/bonnard-cli · Docs: https://docs.bonnard.dev/docs/ · npm: https://www.npmjs.com/package/@bonnard/cli


The semantic engine for MCP clients. Define metrics once, query from anywhere.


Docs · Getting Started · Changelog · Discord · Website


Bonnard is an agent-native semantic layer CLI. Deploy an MCP server and a governed analytics API in minutes, for AI agents, BI tools, and data teams. Define metrics and dimensions in YAML, validate locally, and ship to production. Works with Snowflake, BigQuery, Databricks, and PostgreSQL. Ships with native integrations for Claude Code, Cursor, and Codex. Built with TypeScript.

Why Bonnard?

Most semantic layers were built for dashboards and retrofitted for AI. Bonnard was built the other way around: agent-native from day one, with the Model Context Protocol (MCP) as a core feature, not a plugin. One CLI takes you from an empty directory to a production semantic layer serving AI agents, BI tools, and human analysts through a single governed API.

Bonnard architecture: data sources flow through the semantic layer to AI agents, BI tools, and MCP clients

Quick Start

No install required. Run directly with npx:

npx @bonnard/cli init

Or install globally:

npm install -g @bonnard/cli

Then follow the setup flow:

bon init                      # Scaffold project + agent configs
bon datasource add            # Connect your warehouse
bon validate                  # Check your models locally
bon login                     # Authenticate
bon deploy                    # Ship it

No warehouse yet? Start exploring with a full retail demo dataset:

bon datasource add --demo

Requires Node.js 20+.

Agent-Native from Day One

When you run bon init, Bonnard generates context files so AI coding agents understand your semantic layer from the first prompt:

you@work my-project % bon init

Initialised Bonnard project
Core files:
  bon.yaml
  bonnard/cubes/
  bonnard/views/
Agent support:
  .claude/rules/bonnard.md
  .claude/skills/bonnard-get-started/
  .cursor/rules/bonnard.mdc
  AGENTS.md
Agent          What gets generated
Claude Code    .claude/rules/bonnard.md + skill templates in .claude/skills/
Cursor         .cursor/rules/bonnard.mdc with frontmatter configuration
Codex          AGENTS.md + skills directory

Set up your MCP server so agents can query your semantic layer directly:

bon mcp setup                 # Configure MCP server
bon mcp test                  # Verify the connection
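bon mcp setup writes the client configuration for you, so there is normally nothing to edit by hand. For orientation, a Claude Code MCP entry in .mcp.json generally takes the shape below; the server name and command shown here are assumptions for illustration, not the values Bonnard actually writes:

```
{
  "mcpServers": {
    "bonnard": {
      "command": "npx",
      "args": ["@bonnard/cli", "mcp"]
    }
  }
}
```

Run bon mcp test afterwards to confirm the agent can reach the server.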

Auto-Detected from Your Project

Auto-detected warehouses and data tools: Snowflake, BigQuery, PostgreSQL, Databricks, DuckDB, dbt, Dagster, Prefect, Airflow, Looker, Cube, Evidence, SQLMesh, Soda, Great Expectations

Bonnard automatically detects your warehouses and data tools. Point it at your project and it discovers schemas, tables, and relationships.

  • Snowflake: full support including Snowpark
  • Google BigQuery: native integration
  • Databricks: SQL warehouses and Unity Catalog
  • PostgreSQL: including cloud-hosted variants (Supabase, Neon, RDS)
  • DuckDB: local development and testing
  • dbt: model and profile import
  • Dagster, Prefect, Airflow: orchestration tools
  • Looker, Cube, Evidence: existing BI layers
  • SQLMesh, Soda, Great Expectations: data quality and transformation

Querying

Query your semantic layer from the terminal using JSON or SQL syntax:

# JSON query
bon query --measures revenue,order_count --dimensions product_category --time-dimension created_at

# SQL query
bon query --sql "SELECT product_category, MEASURE(revenue) FROM orders GROUP BY 1"

Agents connected via MCP can run the same queries programmatically, with full access to your governed metric definitions.

Project Structure

my-project/
├── bon.yaml              # Project configuration
├── bonnard/
│   ├── cubes/            # Metric and dimension definitions
│   └── views/            # Curated query interfaces
├── .bon/                 # Local credentials (gitignored)
├── .claude/              # Claude Code agent context
├── .cursor/              # Cursor agent context
└── AGENTS.md             # Codex agent context
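Files in bonnard/cubes/ are plain YAML. The exact schema is documented at docs.bonnard.dev; the fragment below is an illustrative sketch only, with field names (sql_table, measures, dimensions, type) assumed from common semantic-layer conventions and metric names taken from the query examples above:

```
# bonnard/cubes/orders.yaml — illustrative sketch; field names are assumptions
cubes:
  - name: orders
    sql_table: public.orders
    measures:
      - name: revenue
        sql: amount
        type: sum
      - name: order_count
        type: count
    dimensions:
      - name: product_category
        sql: product_category
        type: string
      - name: created_at
        sql: created_at
        type: time
```

Run bon validate after editing cube files to catch schema errors before deploying.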

CI/CD

Deploy from your pipeline with the --ci flag for non-interactive mode:

bon deploy --ci

Handles automatic datasource synchronisation and skips interactive prompts. Fits into GitHub Actions, GitLab CI, or any pipeline that runs Node.js.
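As a sketch, a minimal GitHub Actions job could wrap the validate and deploy steps like this. The secret name BON_API_KEY and the env-var authentication mechanism are assumptions for illustration; check the docs for the supported non-interactive auth method:

```
# .github/workflows/deploy.yml — sketch; secret name and auth mechanism are assumptions
name: Deploy semantic layer
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npx @bonnard/cli validate
      - run: npx @bonnard/cli deploy --ci
        env:
          BON_API_KEY: ${{ secrets.BON_API_KEY }}
```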

Commands

Command                        Description
bon init                       Scaffold a new project with agent configs
bon datasource add             Connect a data source (or --demo for sample data)
bon datasource add --from-dbt  Import from dbt profiles
bon datasource list            List connected data sources
bon validate                   Validate models locally before deploying
bon deploy                     Deploy semantic layer to production
bon deployments                List active deployments
bon diff                       Preview changes before deploying
bon annotate                   Add metadata and descriptions to models
bon query                      Run queries from the terminal (JSON or SQL)
bon mcp setup                  Configure MCP server for agent access
bon mcp test                   Test MCP connection
bon docs                       Browse or search documentation from the CLI
bon login / bon logout         Manage authentication
bon whoami                     Check current session

For the full CLI reference, see the documentation.

Documentation

Full documentation, including getting-started guides and the CLI reference, is available at https://docs.bonnard.dev/docs/.

Community

Contributions are welcome. If you find a bug or have an idea, open an issue or submit a pull request.

License

MIT
