The Guardrail Proxy implements security rules that filter and validate requests between AI assistants and MCP tools. It's designed to add LLM-specific safety controls without modifying the underlying MCP servers, so you can enforce policies like domain blocking, file size limits, and custom validation logic at the proxy layer. Think of it as a security checkpoint that every AI request must pass through before reaching your tools: the proxy can inspect, modify, or reject requests based on configurable rules.
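To make the idea concrete, here is a minimal sketch of what a proxy-layer check could look like. The rule set, request shape, and names such as `GuardrailRules`, `validate_request`, and `GuardrailError` are illustrative assumptions, not the Guardrail Proxy's actual API; they only show how domain blocking and a file size limit might be applied before a call is forwarded.

```python
# Illustrative sketch only: names and request shape are assumptions,
# not the real Guardrail Proxy configuration or API.
from dataclasses import dataclass, field
from urllib.parse import urlparse


class GuardrailError(Exception):
    """Raised when a request violates a configured security rule."""


@dataclass
class GuardrailRules:
    blocked_domains: set[str] = field(default_factory=set)
    max_file_bytes: int = 10 * 1024 * 1024  # 10 MiB default


def validate_request(tool_name: str, arguments: dict, rules: GuardrailRules) -> dict:
    """Inspect a tool call at the proxy layer before forwarding it.

    Returns the arguments unchanged if they pass, or raises GuardrailError
    to reject the call without ever reaching the underlying MCP server.
    """
    # Domain blocking: reject URLs whose host matches the blocklist.
    url = arguments.get("url")
    if url:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in rules.blocked_domains):
            raise GuardrailError(f"Domain '{host}' is blocked by policy")

    # File size limit: reject oversized payloads before the tool sees them.
    content = arguments.get("content")
    if content is not None and len(content.encode("utf-8")) > rules.max_file_bytes:
        raise GuardrailError("Payload exceeds the configured file size limit")

    return arguments


# Example usage: a small write passes, a fetch to a blocked domain is rejected.
rules = GuardrailRules(blocked_domains={"internal.example.com"})
validate_request("write_file", {"content": "hello"}, rules)  # allowed
# validate_request("fetch", {"url": "https://internal.example.com/x"}, rules)  # raises GuardrailError
```

Because the checks run in the proxy rather than in each MCP server, the same policy applies uniformly to every tool behind it, and rejected requests never reach the tools at all.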
This flask is currently distilling: The guardrail system is functional and being tested with various security rule configurations. We’re actively developing additional rule types and improving the configuration interface. Contact us if you’d like to implement custom security policies for your MCP tools.