Guardrails

Note: Guardrails are now a core Civic feature! For the complete documentation on using guardrails, see Guardrails in Civic.

Overview

Guardrails are security policies that control how AI assistants interact with tools and data. Unlike traditional access controls that simply grant or deny access, guardrails provide nuanced, context-aware rules that make AI systems safer and more predictable.
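
To make the distinction concrete, the sketch below is a hypothetical illustration (not Civic's actual API): a traditional access control makes a single allow/deny decision per tool, while guardrails can also inspect the request's arguments and the tool's response.

  from dataclasses import dataclass

  @dataclass
  class ToolCall:
      tool: str           # e.g. "email.send"
      arguments: dict     # arguments supplied by the AI
      response: str = ""  # filled in after the tool runs

  # Traditional access control: a single allow/deny decision per tool.
  ALLOWED_TOOLS = {"email.read", "calendar.read"}

  def access_control(call: ToolCall) -> bool:
      return call.tool in ALLOWED_TOOLS

  # Request guardrail: a context-aware rule that inspects the arguments,
  # not just the tool name. (Hypothetical rule, for illustration only.)
  def request_guardrail(call: ToolCall) -> bool:
      if call.tool == "email.send":
          recipients = call.arguments.get("to", [])
          return all(addr.endswith("@example.com") for addr in recipients)
      return True

  # Response guardrail: trims oversized tool output before it reaches the AI,
  # so a single large response cannot crowd out the rest of the context.
  def response_guardrail(call: ToolCall, max_chars: int = 20_000) -> str:
      return call.response[:max_chars]

In Civic, rules like these are managed through natural conversation and applied at the account, toolkit, or user level, as described in the documentation linked below.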

Why guardrails matter

AI assistants are increasingly powerful, with access to email, code repositories, databases, and more. This power comes with unique risks:

  • The AI is not a human: it cannot reliably tell legitimate requests apart from prompt injection attacks
  • Scale amplifies risk: what takes a human hours, an AI can do in seconds
  • Context windows have limits: a single oversized tool response can overwhelm the AI

Learn more

For complete documentation on guardrails, including:

  • How to manage guardrails through natural conversation
  • Built-in PII protection and prompt injection detection
  • Request and response guardrail types
  • The guardrail hierarchy (account, toolkit, user)
  • Common examples and best practices

See the Guardrails documentation in Civic.