Overview
The BigQuery MCP server provides read-only access to Google BigQuery: querying data with full SQL power and inspecting dataset/table metadata. It supports GoogleSQL SELECT queries, INFORMATION_SCHEMA views, dry-run cost estimation, and built-in AI.FORECAST for time-series forecasting.
Read-only: no DDL (CREATE/DROP/ALTER) or DML (INSERT/UPDATE/DELETE). Schema and data management must be done via the GCP Console, the bq CLI, or Terraform.
How to Add BigQuery
Enable the BigQuery API
- Go to the Google Cloud Console
- Select your project (or create one)
- Navigate to APIs & Services → Library
- Search for BigQuery API and click Enable
Check Permissions
Your Google account must have at minimum roles/bigquery.dataViewer and roles/bigquery.jobUser on the project. Check this under IAM & Admin → IAM in the GCP Console.
What You Can Do
SQL Queries
Run GoogleSQL SELECT queries with dry-run cost estimation and AI.FORECAST support
Schema Inspection
Browse datasets, tables, columns, types, and row counts
Access Auditing
View dataset ACLs: who has OWNER, WRITER, or READER access
Cost Management
Validate queries with dry-run mode before incurring costs
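As an illustration of the dry-run workflow (the project, dataset, and column names below are hypothetical), you might validate a query with dryRun: true before running it for real:

```sql
-- Hypothetical table: validate with dryRun: true first to see estimated
-- bytes processed, then execute once the cost looks acceptable.
SELECT user_id, COUNT(*) AS event_count
FROM `my-project.analytics.events`
GROUP BY user_id
ORDER BY event_count DESC
LIMIT 100;
```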
Use Cases
Data Exploration
- Project Inventory: "What datasets and tables do I have in my BigQuery project?"
- Schema Discovery: "Show me the schema of the analytics.users table"
- Access Audit: "Who has access to my analytics dataset?"
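The inventory and schema-discovery prompts above map onto INFORMATION_SCHEMA queries; a sketch, with placeholder project and dataset names:

```sql
-- List datasets in a project (region qualifier needed for project-wide views)
SELECT schema_name
FROM `my-project`.`region-us`.INFORMATION_SCHEMA.SCHEMATA;

-- List columns and types for one table
SELECT column_name, data_type, is_nullable
FROM `my-project.analytics.INFORMATION_SCHEMA.COLUMNS`
WHERE table_name = 'users'
ORDER BY ordinal_position;
```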
Querying
- Row Counts: "How many rows are in each table in my analytics dataset?"
- Analysis: "Show me the top 10 most common names in the US between 1990 and 2013"
- Joins: "Write a query to join users and events and show me each user's event count"
- Forecasting: "Forecast the next 7 days of sales from my iowa_liquor_sales data"
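The forecasting prompt uses BigQuery's AI.FORECAST table function; a hedged sketch (the daily-aggregation subquery is illustrative, and the argument names should be checked against the current BigQuery reference):

```sql
-- Forecast 7 days of daily sales from a public dataset.
-- data_col / timestamp_col / horizon are the argument names as understood
-- here; verify against current BigQuery documentation before relying on them.
SELECT *
FROM AI.FORECAST(
  (SELECT DATE(date) AS day, SUM(sale_dollars) AS total_sales
   FROM `bigquery-public-data.iowa_liquor_sales.sales`
   GROUP BY day),
  data_col => 'total_sales',
  timestamp_col => 'day',
  horizon => 7);
```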
Cost Control
- Dry Run: "Validate this query without running it: SELECT ..."
- Job History: "Show me all jobs run in my project in the last 24 hours using INFORMATION_SCHEMA"
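The job-history prompt corresponds to a query against the INFORMATION_SCHEMA.JOBS_BY_PROJECT view; a sketch (adjust the region qualifier to match your project):

```sql
-- Jobs run in this project in the last 24 hours
SELECT job_id, user_email, state, total_bytes_processed, creation_time
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
ORDER BY creation_time DESC
LIMIT 100;
```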
Available Tools (5)
Metadata Inspection
list_dataset_ids
List all datasets in a project. Own-project only: returns 403 on external projects. Use INFORMATION_SCHEMA.SCHEMATA via execute_sql for cross-project metadata.
get_dataset_info
Get full metadata for a dataset, including creation time, location, time-travel hours, and the full ACL (access control list with roles and user grants).
list_table_ids
List all tables in a dataset. Returns {} when the dataset is empty.
get_table_info
Get full table metadata, including schema (field names, types, modes), row count, storage size, table type (TABLE/VIEW/EXTERNAL/MATERIALIZED_VIEW), and timestamps.
Query Execution
execute_sql
Run a GoogleSQL SELECT query against any accessible table. Supports dryRun: true to validate and estimate cost without executing. Also supports INFORMATION_SCHEMA queries and AI.FORECAST for time-series forecasting.
Notes and Limitations
- SELECT only: no DDL or DML operations.
- Metadata tools are own-project only: list_dataset_ids, get_dataset_info, list_table_ids, and get_table_info return 403 on external projects. Use INFORMATION_SCHEMA via execute_sql instead.
- Timestamps are epoch floats: all timestamps are returned as Unix epoch float strings (e.g. "1.7722368E9"), not ISO 8601. Use FORMAT_TIMESTAMP in your query for readable output.
- Always use LIMIT: unbounded queries on large tables produce huge responses and incur unnecessary cost. Use dryRun: true first for expensive queries.
- Partition filtering: BigQuery tables are often partitioned by date. Filter on the partition column to dramatically reduce cost.
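The timestamp-formatting and partition-filtering tips can be sketched concretely (the table name and partition column below are hypothetical):

```sql
-- Readable timestamps instead of epoch floats, on a hypothetical
-- date-partitioned table; filtering on the partition column lets
-- BigQuery prune partitions and scan far less data.
SELECT FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', creation_time) AS created_at,
       event_name
FROM `my-project.analytics.events`
WHERE DATE(creation_time) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
LIMIT 100;
```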
