DeepAgents
Connect a DeepAgents agent to Civic using the langchain-mcp-adapters package, which bridges DeepAgents' tool interface with Civic's Streamable HTTP MCP transport.
Prerequisites
- Python 3.11+
- A Civic account at app.civic.com with a configured toolkit
- A Civic token (generate from Install → MCP URL)
- An API key for your chosen LLM provider (e.g., Anthropic, OpenAI)
Installation
Using uv (recommended):
```shell
uv add deepagents langchain-mcp-adapters langchain-anthropic fastapi uvicorn python-dotenv
```
Or with pip:
```shell
pip install deepagents langchain-mcp-adapters langchain-anthropic fastapi uvicorn python-dotenv
```
Environment Variables
```
# Your full Civic toolkit URL (include the profile param for production agents)
CIVIC_URL=https://app.civic.com/hub/mcp?profile=your-toolkit&lock=true

# Civic token generated from app.civic.com → Install → MCP URL
CIVIC_TOKEN=your-civic-token

# API key for your chosen LLM provider
ANTHROPIC_API_KEY=your-anthropic-key  # or OPENAI_API_KEY, etc.
```
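Because the startup code below reads os.environ directly, a missing variable surfaces as an opaque KeyError. A minimal fail-fast sketch (the require_env helper is illustrative, not part of any package; call python-dotenv's load_dotenv() first so a local .env file is loaded into os.environ):

```python
import os

def require_env(*names: str) -> dict[str, str]:
    """Return the requested environment variables, raising a clear error if any are unset."""
    missing = [name for name in names if not os.environ.get(name)]
    if missing:
        raise RuntimeError(
            "Missing required environment variables: " + ", ".join(missing)
        )
    return {name: os.environ[name] for name in names}
```

Calling require_env("CIVIC_URL", "CIVIC_TOKEN") before constructing the MCP client turns a missing .env entry into an actionable message instead of a traceback deep inside app startup.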
See the Civic documentation for details on generating a Civic token and configuring toolkit URL parameters.
Connecting to Civic
Use MultiServerMCPClient to connect to the Civic MCP Hub during app startup, then pass the discovered tools to create_deep_agent:
```python
import os
from contextlib import asynccontextmanager

from fastapi import FastAPI
from deepagents import create_deep_agent
from langchain_mcp_adapters.client import MultiServerMCPClient

mcp_client = None
agent = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global mcp_client, agent
    mcp_client = MultiServerMCPClient({
        "civic-nexus": {
            "transport": "streamable_http",
            "url": os.environ["CIVIC_URL"],
            "headers": {"Authorization": f"Bearer {os.environ['CIVIC_TOKEN']}"},
        }
    })
    tools = await mcp_client.get_tools()
    agent = create_deep_agent(
        model="anthropic:claude-sonnet-4-6",  # or "openai:gpt-4o", etc.
        tools=tools,
        system_prompt="You are a helpful assistant with access to calendar and email tools.",
    )
    yield
    # No explicit teardown is needed: recent versions of MultiServerMCPClient
    # manage their MCP sessions per tool call rather than as a context manager.

app = FastAPI(lifespan=lifespan)
```
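Each object returned by get_tools() is a LangChain tool with a name attribute, so the discovered tool list can be narrowed before it reaches create_deep_agent. A small sketch (the allow-list names below are made up for illustration):

```python
def filter_tools(tools: list, allowed_names: list[str]) -> list:
    """Keep only the tools whose names appear in the allow-list."""
    allowed = set(allowed_names)
    return [tool for tool in tools if tool.name in allowed]

# Example: expose only calendar tools to the agent.
# tools = filter_tools(await mcp_client.get_tools(), ["calendar_list_events"])
```

This is a belt-and-braces measure on the agent side; the toolkit configured in Civic remains the authoritative scope.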
Running the Agent
```python
# main.py (continued) — "app" and "agent" come from the lifespan setup above
from pydantic import BaseModel

class ChatRequest(BaseModel):
    message: str
    thread_id: str = "default"

@app.post("/chat")
async def chat(request: ChatRequest):
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": request.message}]},
        config={"configurable": {"thread_id": request.thread_id}},
    )
    return {"response": result["messages"][-1].content}
```
Start the server:

```shell
uv run uvicorn main:app --reload
```
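With the server running, you can exercise the endpoint from another terminal. A sketch using curl (assumes uvicorn's default port 8000; the message and thread_id values are illustrative):

```shell
curl -s -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is on my calendar today?", "thread_id": "demo-1"}'
```

Requests that share a thread_id are routed to the same conversation thread by the agent.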
Production Configuration
Lock to a Toolkit
For production agents, always lock to a specific toolkit using the profile and lock URL parameters:
```
CIVIC_URL=https://app.civic.com/hub/mcp?profile=your-production-toolkit&lock=true
```
When locked, the agent cannot switch toolkits or modify its own guardrails — preventing prompt injection attacks from escaping the defined tool scope.
Multi-Account Setup
For organization accounts, include the accountId parameter:
```
CIVIC_URL=https://app.civic.com/hub/mcp?profile=support&accountId=org_abc123
```
Pre-load Skills
Load specific Skills at session start using the skills parameter:
```
CIVIC_URL=https://app.civic.com/hub/mcp?profile=support&skills=escalation,canned-responses
```
Environment Variable Reference
| Variable | Description |
|---|---|
| CIVIC_URL | Full Civic toolkit URL including profile and any URL parameters |
| CIVIC_TOKEN | Civic token from app.civic.com → Install → MCP URL |
| ANTHROPIC_API_KEY | API key for your LLM provider (e.g., Anthropic, OpenAI) |
Reference Implementation
A complete reference implementation including a FastAPI server, streaming chat UI, and production patterns is available at:
github.com/civicteam/deepagents-reference-implementation-civic