AS
AgentShield

Everything you need
to run AI agents safely.

Trace, score, block, audit. One SDK, one decorator, one API call.

01 / 06

Trace every call

One decorator captures every prompt, response, latency, token cost, and risk score. Searchable. Filterable. Exportable.

@shield.monitor("my-agent")
02 / 06

Block before execution

check_guardrails() runs in <100ms before your LLM call. Stop prompt injection, policy violations, dangerous actions before they happen — not after.

shield.check_guardrails(name, output)
03 / 06

Risk score on every call

AgentShield assigns low/medium/high/critical risk to every agent response based on pattern matching and (Pro+) LLM judgment. Filter, alert, audit.

{ risk: "high", reason: "..." }
04 / 06

Cost & budget enforcement

Per-agent budget caps. When daily/monthly spend hits threshold, agents stop calling. No more surprise $5,000 bills from a runaway loop.

shield.set_budget("agent", $50)
05 / 06

EU AI Act Article 12 ready

Tamper-evident logs over the full operational lifetime. Auto-generated audit reports. Article 12 compliant by default. Article 14 human-in-loop hooks supported.

shield.export_compliance_report()
06 / 06

Stack agnostic

Works with OpenAI, Anthropic, LangChain, CrewAI, MCP, custom SDKs. Python first, JS/TS support coming. Or just call the REST API directly.

POST /api/events

Set up in 9 lines of code

Add one decorator for observability. Call check_guardrails() to block dangerous actions before execution.

agent.py
# pip install agentshield-ai
from agentshield import AgentShield
from openai import OpenAI

shield = AgentShield(api_key="your-key")
client = OpenAI()

@shield.monitor("support-bot")  # traces + risk-scores every call
def my_agent(prompt):
    r = client.chat.completions.create(model="gpt-4o-mini", messages=[{"role":"user","content":prompt}])
    return r.choices[0].message.content

@shield.monitor traces every call + assigns a risk score after execution. For pre-execution blocking, add check_guardrails() before your LLM call.

This is already happening.

Real incidents from production AI agents. Each one would have been caught — or prevented — by AgentShield.

POLICY VIOLATION

Feb 14, 2024 · Air Canada

Chatbot promised a refund that didn't exist

Court ruled against the airline. AgentShield would have flagged the unauthorized policy commitment before execution.

PROMPT INJECTION

Dec 18, 2023 · Chevrolet of Watsonville

GPT chatbot agreed to sell SUV for $1

Prompt injection bypassed sales guardrails. Pre-execution check_guardrails() blocks this kind of manipulation.

AUTHORIZATION FAILURE

Mar 19, 2024 · ServiceNow

AI agent approved $50K refund without authorization

Risk score on action_taken='process_refund' with value>10000 would have triggered alert. Logged for audit.

See it work on your agent.

57 adversarial scenarios. Get a security analysis in 30 seconds. No signup.

Run Stress Test →