Trace every call
One decorator captures every prompt, response, latency, token cost, and risk score. Searchable. Filterable. Exportable.
@shield.monitor("my-agent") Trace, score, block, audit. One SDK, one decorator, one API call.
One decorator captures every prompt, response, latency, token cost, and risk score. Searchable. Filterable. Exportable.
@shield.monitor("my-agent") check_guardrails() runs in <100ms before your LLM call. Stop prompt injection, policy violations, dangerous actions before they happen — not after.
shield.check_guardrails(name, output) AgentShield assigns low/medium/high/critical risk to every agent response based on pattern matching and (Pro+) LLM judgment. Filter, alert, audit.
{ risk: "high", reason: "..." } Per-agent budget caps. When daily/monthly spend hits threshold, agents stop calling. No more surprise $5,000 bills from a runaway loop.
shield.set_budget("agent", $50) Tamper-evident logs over the full operational lifetime. Auto-generated audit reports. Article 12 compliant by default. Article 14 human-in-loop hooks supported.
shield.export_compliance_report() Works with OpenAI, Anthropic, LangChain, CrewAI, MCP, custom SDKs. Python first, JS/TS support coming. Or just call the REST API directly.
POST /api/events
Add one decorator for observability. Call check_guardrails() to block dangerous actions before execution.
# pip install agentshield-ai
from agentshield import AgentShield
from openai import OpenAI
shield = AgentShield(api_key="your-key")
client = OpenAI()
@shield.monitor("support-bot") # traces + risk-scores every call
def my_agent(prompt):
r = client.chat.completions.create(model="gpt-4o-mini", messages=[{"role":"user","content":prompt}])
return r.choices[0].message.content @shield.monitor traces every call + assigns a risk score after execution. For pre-execution blocking, add check_guardrails() before your LLM call.
Real incidents from production AI agents. Each one would have been caught — or prevented — by AgentShield.
Feb 14, 2024 · Air Canada
Court ruled against the airline. AgentShield would have flagged the unauthorized policy commitment before execution.
Dec 18, 2023 · Chevrolet of Watsonville
Prompt injection bypassed sales guardrails. Pre-execution check_guardrails() blocks this kind of manipulation.
Mar 19, 2024 · ServiceNow
Risk score on action_taken='process_refund' with value>10000 would have triggered alert. Logged for audit.
57 adversarial scenarios. Get a security analysis in 30 seconds. No signup.
Run Stress Test →