Connect Claude to BigQuery with MCP
Use Claude with BigQuery through MCP for table discovery, schema inspection, and analytical SQL. Compare the direct BigQuery server with Gateway-style alternatives, copy a starter config, and keep your project scope under control.
If your goal is “let Claude inspect BigQuery datasets and run analytical SQL without building custom glue code first,” this is the shortest path.
Recommended MCP servers for this use case
| Server | Best for | Not ideal when | Auth | Permission risk |
|---|---|---|---|---|
| BigQuery MCP Server | Direct BigQuery table listing, schema description, and query execution | You want an API layer with extra governance controls | GCP project access | Medium |
| Centralmind Gateway | Teams that want a governed API-like layer, telemetry, and PII controls | You want the fewest moving parts | Connection string + gateway config | Medium |
| Connect Claude to SQL Databases with MCP | Comparing BigQuery against MySQL, Snowflake, ClickHouse, or Fireproof | You already know BigQuery is the destination | Mixed | Medium |
Quick selection (30 seconds)
- Pick BigQuery MCP Server if your main need is straightforward dataset inspection and query execution.
- Pick Gateway if you need more explicit control over exposure, telemetry, or generated API structure.
- Limit dataset scope early instead of exposing an entire project by default.
Copy-paste config (Claude Desktop)
```json
{
  "mcpServers": {
    "bigquery": {
      "command": "uvx",
      "args": [
        "mcp-server-bigquery",
        "--project",
        "your-gcp-project-id",
        "--location",
        "us-central1",
        "--dataset",
        "analytics"
      ]
    }
  }
}
```
Repeat `--dataset` only for the datasets Claude actually needs.
BigQuery checklist
| Item | Required | Sensitive | Notes |
|---|---|---|---|
| `--project` | Yes | No | Use the least-broad GCP project possible |
| `--location` | Yes | No | Must match the dataset region |
| `--dataset` | Recommended | No | Strongly prefer explicit dataset scoping |
| GCP auth context | Yes | Yes | Keep service-account or local auth scope narrow |
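One way to keep the auth context narrow is a dedicated read-only service account. The commands below are a sketch: the project ID and account name are placeholders, and the roles are granted project-wide for brevity (dataset-level grants are tighter still).

```shell
# Create a dedicated service account for Claude's BigQuery access
gcloud iam service-accounts create claude-bq-readonly \
  --project=your-gcp-project-id \
  --display-name="Claude BigQuery read-only"

# Allow it to run query jobs in the project
gcloud projects add-iam-policy-binding your-gcp-project-id \
  --member="serviceAccount:claude-bq-readonly@your-gcp-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"

# Allow it to read table data (project-wide here; prefer dataset-level grants)
gcloud projects add-iam-policy-binding your-gcp-project-id \
  --member="serviceAccount:claude-bq-readonly@your-gcp-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"
```

`roles/bigquery.jobUser` plus `roles/bigquery.dataViewer` covers listing, schema inspection, and SELECT queries without granting any write or admin capability.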
First tool-call prompts
- “List the tables in the analytics dataset and describe the most relevant ones.”
- “Describe the schema of the events table and identify fields needed for retention analysis.”
- “Run a read-only query for weekly active users in the last 8 weeks.”
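The last prompt above could resolve to a query like this sketch, assuming a hypothetical `analytics.events` table with `user_id` and `event_ts` columns (BigQuery standard SQL):

```sql
-- Weekly active users over the last 8 weeks (assumed schema: analytics.events)
SELECT
  DATE_TRUNC(DATE(event_ts), WEEK) AS week_start,
  COUNT(DISTINCT user_id) AS weekly_active_users
FROM `your-gcp-project-id.analytics.events`
WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 56 DAY)
GROUP BY week_start
ORDER BY week_start;
```

Queries like this are read-only and scan a bounded time window, which keeps both permission risk and query cost predictable during early exploration.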
Risk and permission notes
- BigQuery risk comes from project and dataset scope, not only from the tool itself.
- Limit access to analytics datasets first; do not expose broad multi-team projects casually.
- If query cost matters, monitor early usage so exploratory prompts do not create noisy spend.
FAQ
Should I use direct BigQuery MCP or Gateway?
Use direct BigQuery MCP when speed matters most. Use Gateway when governance, filtering, or auditability is the higher priority.
Do I need to expose every dataset in the project?
No. In most cases you should explicitly pass only the datasets that support the target workflow.
What is the fastest way to reduce BigQuery risk?
Scope the project, scope the datasets, and start with read-only exploratory queries.
Sources and freshness
- Sources: official server pages linked above.
- Updated: March 15, 2026.