API Documentation

Integrate AgentShield into your AI agents in minutes.

Quick Start

  1. Create a free account
  2. Copy your API key from the Dashboard
  3. Install the SDK: pip install agentshield-ai
  4. Start monitoring your agents in 3 lines of code

Python SDK

The easiest way to integrate AgentShield. Install the official Python package:

pip install agentshield-ai

Basic Usage

from agentshield import AgentShield

shield = AgentShield(api_key="ask_your_key_here")

# Send an event for analysis
result = shield.track(
    agent_name="my-chatbot",
    user_input="Do you offer free shipping?",
    agent_output="Yes, we offer free shipping on orders over $50.",
    action_taken="respond",
)

if result["alert_triggered"]:
    print(f"ALERT: {result['alert_reason']}")

Decorator (Auto-Monitor)

Wrap any agent function to automatically track every call:

from agentshield import AgentShield

shield = AgentShield(api_key="ask_your_key_here")

@shield.monitor("customer-support")
def my_agent(question):
    # Your LLM call here
    return call_llm(question)

# Every call is now monitored automatically
response = my_agent("Can I get a refund?")

Get Stats

stats = shield.stats()
print(f"Total events: {stats['total_events']}")
print(f"Alert rate: {stats['alert_rate']}%")

Tracing

with shield.start_trace("support-bot") as trace:
    trace.span("llm_call", "call_gpt4",
               input_text=prompt, output_text=response,
               tokens_input=150, tokens_output=300,
               model_used="gpt-4")
    trace.span("tool_call", "search_database",
               input_text="query", output_text="results")

Approvals

approval_id = shield.request_approval(
    agent_name="billing-bot",
    action_description="Process refund of $2,500",
    estimated_value=2500.0
)

result = shield.wait_for_approval(approval_id, timeout=300)
if result["status"] == "approved":
    process_refund()

SDK Reference

Method Description
shield.track(...) Send a single event for risk analysis. Accepts optional: trace_id, session_id, tokens_input, tokens_output, model_used, estimated_cost.
@shield.monitor(name) Decorator that auto-tracks every call to your agent function.
shield.stats() Get aggregated statistics (total events, alerts, risk breakdown).
shield.start_trace(agent_name) Returns a Trace context manager. Use with shield.start_trace("bot") as trace: then call trace.span(...).
trace.span(span_type, name, ...) Add a span to a trace. Accepts: input_text, output_text, tokens_input, tokens_output, model_used.
shield.request_approval(...) Request human approval. Args: agent_name, action_description, estimated_value. Returns approval_id string.
shield.wait_for_approval(id, ...) Poll for approval decision. Args: approval_id, timeout=300, poll_interval=5. Returns status dict.

View on PyPI: pypi.org/project/agentshield-ai

Framework Integrations

AgentShield integrates with popular AI agent frameworks. Every agent execution, tool call, and LLM call is automatically traced — zero code changes to your agents.

LangChain / LangGraph

Automatic tracing for LangChain chains and agents via a callback handler. Captures every LLM call, tool use, chain step, and retrieval.

pip install agentshield-ai[langchain]

from agentshield import AgentShield
from agentshield.langchain_callback import AgentShieldCallbackHandler

shield = AgentShield(api_key="ask_your_key_here")
callback = AgentShieldCallbackHandler(shield, agent_name="my-chain")

# Pass to any LangChain chain or agent
chain = prompt | llm | parser
result = chain.invoke({"input": "Hello"}, config={"callbacks": [callback]})

What's captured: LLM calls (model, tokens, cost), tool calls (name, args, result), chain steps, retrieval queries, errors — all with parent-child relationships.

CrewAI

Automatic tracing for CrewAI crews via an event listener. Captures crew lifecycle, agent executions, task results, tool calls, and LLM calls.

pip install agentshield-ai[crewai]

from agentshield import AgentShield
from agentshield.crewai_listener import AgentShieldCrewAIListener

shield = AgentShield(api_key="ask_your_key_here")
listener = AgentShieldCrewAIListener(shield, agent_name="my-crew")

# That's it! Now run any CrewAI crew — everything is traced automatically.
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
crew.kickoff()

What's captured: Crew kickoff/completion, agent executions (role, goal), task lifecycle, tool calls (name, args, cache hits), LLM calls (model, tokens, response) — all with parent-child span tree.

OpenAI Agents SDK

Automatic tracing for OpenAI Agents via a TracingProcessor. Captures agent executions, LLM generations, tool calls, handoffs, and guardrails.

pip install agentshield-ai openai-agents

from agentshield import AgentShield
from agentshield.openai_agents_tracer import AgentShieldOpenAIAgentsTracer

shield = AgentShield(api_key="ask_your_key_here")
tracer = AgentShieldOpenAIAgentsTracer(shield, agent_name="my-agent")

# That's it! All agent activity is traced automatically.
from agents import Agent, Runner
agent = Agent(name="Bot", instructions="Be helpful.")
result = Runner.run_sync(agent, "Hello!")

What's captured: Agent executions, LLM generations (model, tokens, cost), tool/function calls, agent handoffs, guardrail evaluations — all with parent-child span tree.

Model Context Protocol (MCP) NEW

Zero-code integration for MCP-compatible AI agents. Works with Claude Desktop, Cursor, Windsurf, Cline, and any tool that supports the Model Context Protocol. No SDK code needed — just add a JSON config.

1. Install

pip install agentshield-ai[mcp]

2. Add to your MCP config

Claude Desktop: ~/.claude/claude_desktop_config.json  |  Cursor: .cursor/mcp.json

{
  "mcpServers": {
    "agentshield": {
      "command": "python",
      "args": ["-m", "agentshield.mcp_server"],
      "env": {
        "AGENTSHIELD_API_KEY": "ask_your_key_here"
      }
    }
  }
}

3. That's it!

Your AI agent now has access to the full set of AgentShield tools.

Available tools: track_event (risk analysis), start_trace / add_span / end_trace (execution tracing), request_approval / check_approval (human-in-the-loop), get_stats, get_cost_summary.
Install all integrations at once: pip install agentshield-ai[all]

Authentication

All API requests require an API key sent in the X-API-Key header.

Your API key is available on your Dashboard after signing up. Keys follow the format ask_...

curl -H "X-API-Key: ask_your_key_here" \
  https://useagentshield.net/api/events/stats

MCPS Identity Verification

AgentShield supports MCPS (MCP Secure) — a cryptographic identity verification protocol for AI agents using ECDSA P-256 signatures. Agents with verified MCPS passports get improved risk scoring, trust indicators in the dashboard, and adaptive alert thresholds — higher trust agents receive fewer false positives while anomalies are escalated faster.

MCPS uses trust levels from L0 (unsigned) to L4 (fully audited). Higher trust levels reduce the risk score and adjust alert sensitivity:

Level  Name        Description
L0     Unsigned    No passport attached
L1     Identified  Passport present, signature not verified
L2     Verified    ECDSA signature verified
L3     Scanned     Verified + security scan passed
L4     Audited     Verified + full audit trail

Alert Thresholds by Trust Level

Trust Level  Minimum Alert Level             Behavior
L0-L1        Low (all alerts fire)           Full monitoring — unknown agent
L2           Medium (low suppressed)         Reduced noise — verified identity
L3-L4        High (low + medium suppressed)  Anomaly detection — trusted agent

Below-threshold alerts are suppressed automatically. The risk level is still recorded for scoring and reporting, but no alert is triggered — reducing dashboard noise while maintaining full observability.

Usage with SDK

from agentshield import AgentShield

shield = AgentShield(
    api_key="ask_your_key",
    mcps_passport={
        "agent_id": "your-agent-uuid",
        "trust_level": "L2",
        "signature": "base64-ecdsa-signature",
        "timestamp": "2026-03-14T10:00:00Z",
        "issuer": "mcps-authority-id",
    }
)
# All API calls now include the MCPS passport header

Raw HTTP Header

X-MCPS-Passport: {"agent_id": "...", "trust_level": "L2", "signature": "...", "timestamp": "..."}
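If you are not using the SDK, the passport header can be built by hand. A minimal sketch — the `mcps_headers` helper name and the passport values are illustrative; the header format follows the example above:

```python
import json

# Hypothetical passport values for illustration; real values come from your MCPS issuer.
passport = {
    "agent_id": "your-agent-uuid",
    "trust_level": "L2",
    "signature": "base64-ecdsa-signature",
    "timestamp": "2026-03-14T10:00:00Z",
}

def mcps_headers(api_key: str, passport: dict) -> dict:
    """Build request headers carrying the MCPS passport as compact JSON."""
    return {
        "X-API-Key": api_key,
        "X-MCPS-Passport": json.dumps(passport, separators=(",", ":")),
    }

headers = mcps_headers("ask_your_key_here", passport)
# e.g. requests.post("https://useagentshield.net/api/events", headers=headers, json={...})
```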

MCPS protocol by CyberSecAI Ltd (MIT License).

Base URL

https://useagentshield.net/api

POST /api/events

Send an event every time your AI agent processes a request. AgentShield will analyze it in real-time and return a risk assessment.

Request Body

Field Type Required Description
agent_name string Yes Unique name for your agent (e.g. "customer-support-bot")
event_type string No "response" (default), "action", or "error"
user_input string No What the user asked your agent
agent_output string No What your agent responded — this is analyzed for risks
action_taken string No Action the agent performed (e.g. "apply_discount", "process_refund")
value float No Monetary value involved (used for anomaly detection)
metadata object No Any additional data you want to attach
trace_id string No Link this event to a trace
session_id string No Group events into a session
tokens_input int No Input tokens used
tokens_output int No Output tokens used
model_used string No LLM model name (e.g. "gpt-4", "claude-3")
estimated_cost float No Estimated cost in USD

Response

Field Type Description
event_id int Unique event ID
risk_level string "low", "medium", "high", or "critical"
alert_triggered bool Whether this event triggered an alert
alert_reason string Why the alert was triggered (null if no alert)

Example

REQUEST
curl -X POST https://useagentshield.net/api/events \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{
    "agent_name": "support-bot",
    "event_type": "response",
    "user_input": "Can I get a refund?",
    "agent_output": "I have processed your full refund of $500",
    "action_taken": "process_refund",
    "value": 500.00
  }'
RESPONSE
{
  "event_id": 42,
  "risk_level": "low",
  "alert_triggered": false,
  "alert_reason": null
}

GET /api/events/stats

Get aggregated statistics for all events sent by your agents.

Response

{
  "total_events": 1250,
  "total_alerts": 23,
  "alert_rate": 1.8,
  "risk_breakdown": {
    "low": 1100,
    "medium": 127,
    "high": 20,
    "critical": 3
  },
  "agents": [
    {"agent_name": "support-bot", "count": 800, "alerts": 15},
    {"agent_name": "sales-agent", "count": 450, "alerts": 8}
  ]
}

Agent Tracing Starter+

Capture end-to-end traces of your agent's execution, including LLM calls, tool usage, and sub-agent handoffs.

POST /api/traces

Start a new trace for an agent execution.

Request Body

Field Type Required Description
agent_name string Yes Name of the agent being traced
session_id string No Group traces into a session
metadata object No Additional context for this trace

Response

Field Type Description
trace_id string Unique trace identifier
status string "active"
created_at string ISO 8601 timestamp

Example

curl -X POST https://useagentshield.net/api/traces \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{"agent_name": "support-bot", "session_id": "sess_abc123"}'

PUT /api/traces/{trace_id}

Complete or mark a trace as errored.

Request Body

Field Type Required Description
status string Yes "completed" or "error"

curl -X PUT https://useagentshield.net/api/traces/trc_abc123 \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{"status": "completed"}'

GET /api/traces

List traces with optional filters.

Query Parameters

Param Type Description
page int Page number (default: 1)
per_page int Results per page (default: 20)
agent_name string Filter by agent name
risk_level string Filter by risk level
session_id string Filter by session

Response

{
  "traces": [...],
  "total": 150,
  "page": 1,
  "per_page": 20
}

GET /api/traces/{trace_id}

Get a single trace with its nested spans tree.

Response

{
  "trace": { "trace_id": "trc_abc123", "agent_name": "support-bot", "status": "completed", ... },
  "spans": [
    { "span_id": "spn_1", "span_type": "llm_call", "name": "call_gpt4", "children": [...] }
  ]
}

POST /api/spans

Add a span to an active trace.

Request Body

Field Type Required Description
trace_id string Yes Parent trace ID
span_type string Yes llm_call, tool_call, retrieval, function, agent_handoff, or custom
name string Yes Descriptive name for this span
parent_span_id string No Nest under a parent span
input_text string No Input to this span
output_text string No Output from this span
tokens_input int No Input tokens used
tokens_output int No Output tokens used
model_used string No LLM model name
estimated_cost float No Estimated cost in USD
duration_ms int No Duration in milliseconds
metadata object No Additional span data

curl -X POST https://useagentshield.net/api/spans \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{
    "trace_id": "trc_abc123",
    "span_type": "llm_call",
    "name": "call_gpt4",
    "input_text": "What is our refund policy?",
    "output_text": "Our refund policy allows...",
    "tokens_input": 150,
    "tokens_output": 300,
    "model_used": "gpt-4"
  }'

PUT /api/spans/{span_id}

Update or complete an existing span.

Request Body

Field Type Description
output_text string Output from this span
status string "completed" or "error"
duration_ms int Duration in milliseconds
tokens_input int Input tokens used
tokens_output int Output tokens used
estimated_cost float Estimated cost in USD

curl -X PUT https://useagentshield.net/api/spans/spn_xyz789 \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{"status": "completed", "duration_ms": 1250}'
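Taken together, the write endpoints above form a complete trace lifecycle: open a trace, attach spans, then mark everything complete. A sketch using `requests` — the `trace_lifecycle` helper and its hard-coded agent/model names are illustrative, and it assumes the create-span response echoes a `span_id` field as shown in the trace-detail example:

```python
import requests

BASE = "https://useagentshield.net/api"
HEADERS = {"Content-Type": "application/json", "X-API-Key": "ask_your_key_here"}

def trace_lifecycle(question: str, answer: str) -> str:
    """Open a trace, attach one llm_call span, then complete the span and the trace."""
    # 1. POST /api/traces — start the trace
    trace = requests.post(f"{BASE}/traces", headers=HEADERS,
                          json={"agent_name": "support-bot"}).json()
    trace_id = trace["trace_id"]

    # 2. POST /api/spans — record the LLM call
    span = requests.post(f"{BASE}/spans", headers=HEADERS,
                         json={"trace_id": trace_id, "span_type": "llm_call",
                               "name": "call_gpt4", "input_text": question,
                               "output_text": answer, "model_used": "gpt-4"}).json()

    # 3. PUT /api/spans/{id} and PUT /api/traces/{id} — mark both complete
    requests.put(f"{BASE}/spans/{span['span_id']}", headers=HEADERS,
                 json={"status": "completed"})
    requests.put(f"{BASE}/traces/{trace_id}", headers=HEADERS,
                 json={"status": "completed"})
    return trace_id
```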

Human-in-the-Loop Approvals Starter+

Pause agent execution and require human approval before high-risk actions proceed.

POST /api/approvals

Request human approval for an agent action.

Request Body

Field Type Required Description
agent_name string Yes Name of the requesting agent
action_description string Yes Human-readable description of the action
trace_id string No Link to a trace
risk_level string No Risk level override
estimated_value float No Monetary value of the action

Response

Field Type Description
approval_id string Unique approval request ID
status string "pending"
expires_at string ISO 8601 expiration timestamp

curl -X POST https://useagentshield.net/api/approvals \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{
    "agent_name": "billing-bot",
    "action_description": "Process refund of $2,500",
    "estimated_value": 2500.0
  }'

GET /api/approvals/{approval_id}

Check the status of an approval request.

Response

Field Type Description
approval_id string Unique approval ID
status string "pending", "approved", "rejected", or "expired"
decided_by string Email of the reviewer (null if pending)
decision_reason string Reason provided by the reviewer

curl https://useagentshield.net/api/approvals/apr_abc123 \
  -H "X-API-Key: ask_your_key_here"
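Without the SDK's wait_for_approval, the same polling can be done by hand against this endpoint. A sketch — `poll_approval` is an illustrative helper that mirrors the SDK's timeout=300 and poll_interval=5 defaults:

```python
import time
import requests

BASE = "https://useagentshield.net/api"
HEADERS = {"X-API-Key": "ask_your_key_here"}

def poll_approval(approval_id: str, timeout: float = 300, interval: float = 5) -> dict:
    """Poll GET /api/approvals/{id} until the status leaves "pending" or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(f"{BASE}/approvals/{approval_id}", headers=HEADERS).json()
        if status["status"] != "pending":
            return status            # approved, rejected, or expired
        time.sleep(interval)
    return {"approval_id": approval_id, "status": "expired"}
```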

POST /api/approvals/{approval_id}/decide

Approve or reject a pending request (typically called from the dashboard).

Request Body

Field Type Required Description
decision string Yes "approved" or "rejected"
reason string No Reason for the decision

curl -X POST https://useagentshield.net/api/approvals/apr_abc123/decide \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{"decision": "approved", "reason": "Verified with customer"}'

GET /api/approvals

List all approval requests. Filter by status with ?status=pending.

curl "https://useagentshield.net/api/approvals?status=pending" \
  -H "X-API-Key: ask_your_key_here"

POST /api/approval-rules

Create an automatic approval rule.

Request Body

Field Type Description
agent_name string Agent this rule applies to
rule_type string risk_level, value_threshold, action_type, or always
rule_value string Value for the rule (e.g. "high", "1000")
auto_approve_after_minutes int Auto-approve if no decision after N minutes

curl -X POST https://useagentshield.net/api/approval-rules \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{"agent_name": "billing-bot", "rule_type": "value_threshold", "rule_value": "1000"}'

GET /api/approval-rules

List all approval rules.

curl https://useagentshield.net/api/approval-rules \
  -H "X-API-Key: ask_your_key_here"

DELETE /api/approval-rules/{id}

Delete an approval rule.

curl -X DELETE https://useagentshield.net/api/approval-rules/rule_abc123 \
  -H "X-API-Key: ask_your_key_here"

Cost Attribution Starter+

Track LLM costs per agent, model, and time period. Set budget alerts to avoid surprise bills.

GET /api/costs/summary

Get cost summary for the current billing month.

Response

Field Type Description
total_cost float Total cost in USD this month
total_tokens_input int Total input tokens
total_tokens_output int Total output tokens
total_traces int Total traces this month
avg_cost_per_trace float Average cost per trace

curl https://useagentshield.net/api/costs/summary \
  -H "X-API-Key: ask_your_key_here"

GET /api/costs/by-agent

Get cost breakdown grouped by agent.

Response

[
  { "agent_name": "support-bot", "total_cost": 45.20, "total_tokens": 1500000, "trace_count": 320 },
  { "agent_name": "billing-bot", "total_cost": 12.80, "total_tokens": 420000, "trace_count": 85 }
]

GET /api/costs/by-model

Get cost breakdown grouped by LLM model.

Response

[
  { "model": "gpt-4", "total_cost": 38.50, "total_tokens": 800000, "span_count": 250 },
  { "model": "claude-3", "total_cost": 19.50, "total_tokens": 1100000, "span_count": 155 }
]

GET /api/costs/daily

Get daily cost trend for the last 30 days.

Response

[
  { "date": "2026-03-05", "total_cost": 2.40, "trace_count": 15 },
  { "date": "2026-03-04", "total_cost": 3.10, "trace_count": 22 },
  ...
]

POST /api/costs/budgets

Create a budget alert for an agent.

Request Body

Field Type Description
agent_name string Agent to set budget for
budget_type string "daily", "weekly", or "monthly"
budget_amount float Budget limit in USD
alert_at_percent int Alert when this % of budget is used (e.g. 80)

curl -X POST https://useagentshield.net/api/costs/budgets \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{"agent_name": "support-bot", "budget_type": "monthly", "budget_amount": 100.0, "alert_at_percent": 80}'

GET /api/costs/budgets

List all budget alerts.

curl https://useagentshield.net/api/costs/budgets \
  -H "X-API-Key: ask_your_key_here"

DELETE /api/costs/budgets/{id}

Delete a budget alert.

curl -X DELETE https://useagentshield.net/api/costs/budgets/bgt_abc123 \
  -H "X-API-Key: ask_your_key_here"

Agent Risk Score Starter+ NEW

Continuous risk score (0-1000) per agent based on alert rate, error rate, cost stability, approval compliance, and more. Higher score = safer agent.

GET /api/risk-score?agent_name=X

Get risk score for a specific agent. Recalculates on each request.

Response

{
  "agent_name": "support-bot",
  "risk_score": 720,
  "previous_score": 735,
  "score_change": -15,
  "band": "good",
  "color": "blue",
  "components": {
    "alert_rate": {"score": 900, "weight": 25},
    "risk_distribution": {"score": 800, "weight": 20},
    "hallucination_rate": {"score": 500, "weight": 15},
    ...
  }
}

GET /api/risk-score/all

Get risk scores for all agents.

[
  {"agent_name": "support-bot", "risk_score": 720, "band": "good", "score_change": -15},
  {"agent_name": "billing-bot", "risk_score": 450, "band": "fair", "score_change": 30}
]

SDK Usage

score = shield.get_risk_score("support-bot")
print(f"Score: {score['risk_score']}/1000 ({score['band']})")

Cost Prediction Starter+ NEW

Predict the cost of an agent task BEFORE running, based on historical traces. Returns p25/p50/p95 cost estimates with confidence.

GET /api/costs/predict?agent_name=X&model_used=Y

Get cost prediction for next agent execution. Requires 5+ completed traces.

Response

{
  "prediction_id": "prd_abc123...",
  "agent_name": "support-bot",
  "predicted_cost": {"low": 0.42, "mid": 1.23, "high": 4.87},
  "predicted_tokens": {"input": 1500, "output": 800},
  "confidence": 0.87,
  "based_on_traces": 26,
  "budget_recommendation": 5.84,
  "model_used": "gpt-4"
}

POST /api/costs/predict/track

Track actual cost against a prediction to measure accuracy.

// Request
{"prediction_id": "prd_abc123", "actual_cost": 1.45}

// Response
{"prediction_id": "prd_abc123", "predicted_mid": 1.23, "actual_cost": 1.45, "accuracy_pct": 0.82}

SDK Usage

prediction = shield.predict_cost("support-bot", model_used="gpt-4")
print(f"Estimated: ${prediction['predicted_cost']['mid']}")

Blast Radius Starter+ NEW

Estimate the maximum potential damage an agent can cause based on its permissions, history, and financial exposure. Get actionable mitigations.

GET /api/blast-radius?agent_name=X

Get full blast radius analysis for an agent.

Response

{
  "agent_name": "billing-bot",
  "blast_radius_score": 67,
  "blast_radius_band": "high",
  "estimated_max_damage_usd": 23000.00,
  "reversibility_score": 35,
  "scope": {
    "tool_types": ["llm_call", "tool_call"],
    "has_destructive_actions": true,
    "max_single_transaction": 2500.00
  },
  "risk_factors": [
    "Agent has executed destructive actions 12 times",
    "No approval rules for transactions up to $2,500.00"
  ],
  "mitigations": [
    "Add approval rules for financial actions",
    "Set approval rule for transactions > $1,250"
  ]
}

GET /api/blast-radius/all

Get blast radius summary for all agents.

[
  {"agent_name": "billing-bot", "blast_radius_score": 67, "blast_radius_band": "high", "estimated_max_damage_usd": 23000},
  {"agent_name": "support-bot", "blast_radius_score": 12, "blast_radius_band": "low", "estimated_max_damage_usd": 0}
]

SDK Usage

blast = shield.get_blast_radius("billing-bot")
print(f"Score: {blast['blast_radius_score']}/100 ({blast['blast_radius_band']})")
print(f"Max damage: ${blast['estimated_max_damage_usd']:,.0f}")

Pre-Production Testing Starter+

Run adversarial, bias, edge-case, and compliance tests against your agents before deploying to production.

POST /api/test-suites

Create a new test suite for an agent.

Request Body

Field Type Required Description
name string Yes Name for this test suite
agent_name string Yes Agent to test
test_type string No adversarial, bias, edge_case, compliance, or custom
custom_cases array No Array of custom test case objects

Response

Field Type Description
suite_id string Unique test suite ID
test_cases array Generated test cases
status string "created"

curl -X POST https://useagentshield.net/api/test-suites \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{"name": "Refund safety tests", "agent_name": "billing-bot", "test_type": "adversarial"}'

GET /api/test-suites

List all test suites.

curl https://useagentshield.net/api/test-suites \
  -H "X-API-Key: ask_your_key_here"

GET /api/test-suites/{suite_id}

Get a test suite with its results.

curl https://useagentshield.net/api/test-suites/ts_abc123 \
  -H "X-API-Key: ask_your_key_here"

POST /api/test-suites/{suite_id}/results

Submit test results for a suite run.

curl -X POST https://useagentshield.net/api/test-suites/ts_abc123/results \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ask_your_key_here" \
  -d '{"results": [{"case_id": "tc_1", "passed": true, "output": "..."}]}'
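A suite run can be driven end to end from Python: fetch the generated cases, run each through your agent, and post the results. In this sketch, `run_suite`, `agent_fn`, and `judge` are illustrative, and the per-case `input` field is an assumption about the generated-case shape (only `case_id`, `passed`, and `output` appear in the examples above) — adapt the field names to the actual suite payload:

```python
import requests

BASE = "https://useagentshield.net/api"
HEADERS = {"Content-Type": "application/json", "X-API-Key": "ask_your_key_here"}

def run_suite(suite_id, agent_fn, judge):
    """Fetch a suite's cases, run each through the agent, judge them, submit results."""
    suite = requests.get(f"{BASE}/test-suites/{suite_id}", headers=HEADERS).json()
    results = []
    for case in suite["test_cases"]:
        output = agent_fn(case["input"])        # "input" is an assumed field name
        results.append({"case_id": case["case_id"],
                        "passed": judge(case, output),   # your pass/fail criterion
                        "output": output})
    requests.post(f"{BASE}/test-suites/{suite_id}/results",
                  headers=HEADERS, json={"results": results})
    return results
```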

Compliance Reports Pro+

Generate compliance reports for EU AI Act, NIST AI RMF, and export full audit trails.

GET /api/reports/compliance

Generate a compliance report for a specific framework.

Query Parameters

Param Type Description
framework string eu_ai_act or nist_ai_rmf
period_days int Report period in days (default: 30)

curl "https://useagentshield.net/api/reports/compliance?framework=eu_ai_act&period_days=30" \
  -H "X-API-Key: ask_your_key_here"

GET /api/reports/audit-trail

Export a full audit trail of all agent activity.

Query Parameters

Param Type Description
format string json or csv

curl "https://useagentshield.net/api/reports/audit-trail?format=csv" \
  -H "X-API-Key: ask_your_key_here"

Error Codes

Code Meaning
401 Invalid API key
403 Plan does not include this feature
429 Plan limit reached (agent limit or monthly event limit)
422 Invalid request body (missing required fields)
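In client code, these codes map naturally to explicit failures. A minimal wrapper sketch — `track_event` is an illustrative helper, and the messages mirror the table above:

```python
import requests

# Known AgentShield error codes, from the table above
ERRORS = {
    401: "Invalid API key",
    403: "Plan does not include this feature",
    429: "Plan limit reached (agent limit or monthly event limit)",
    422: "Invalid request body (missing required fields)",
}

def track_event(payload: dict, api_key: str) -> dict:
    """POST one event, raising a descriptive error on a known failure code."""
    resp = requests.post("https://useagentshield.net/api/events",
                         headers={"X-API-Key": api_key}, json=payload)
    if resp.status_code in ERRORS:
        raise RuntimeError(f"{resp.status_code}: {ERRORS[resp.status_code]}")
    resp.raise_for_status()   # any other non-2xx response
    return resp.json()
```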

What We Detect

AgentShield analyzes your agent's output in real-time for these risk categories:

CRITICAL
  • Discrimination (age, gender, race, disability)
  • Zero/negative values in payments
HIGH
  • Unauthorized promises (free forever, unlimited)
  • Medical or legal advice
  • Unauthorized refund/discount promises
  • Unusually high transaction values
MEDIUM
  • Agent errors
  • High-value transactions (>$1,000)
LOW
  • Normal agent behavior
  • No risks detected

Plan Limits

Feature             Free ($0)  Starter ($49/mo)  Pro ($149/mo)  Enterprise
API + Dashboard     Yes        Yes               Yes            Yes
Agents              1          5                 20             Unlimited
Events / month      1,000      50,000            500,000        Unlimited
Risk Analysis       Keyword    AI-Powered        AI-Powered     AI-Powered
Agent Tracing       No         10K/mo            100K/mo        Unlimited
Approvals           No         100/mo            1K/mo          Unlimited
Cost Tracking       No         Yes               Yes            Yes
Testing             No         10 runs/mo        100 runs/mo    Unlimited
Compliance Reports  No         No                Yes            Yes
Email Alerts        Yes        Yes               Yes            Yes
Webhooks / Slack    Yes        Yes               Yes            Yes

Code Examples

Python (SDK — Recommended)

from agentshield import AgentShield

shield = AgentShield(api_key="ask_your_key_here")

result = shield.track(
    agent_name="my-chatbot",
    user_input=user_message,
    agent_output=agent_response,
    action_taken="respond",
)

if result["alert_triggered"]:
    print(f"ALERT: {result['alert_reason']}")

Python (requests)

import requests

response = requests.post(
    "https://useagentshield.net/api/events",
    headers={"X-API-Key": "ask_your_key_here"},
    json={
        "agent_name": "my-chatbot",
        "user_input": user_message,
        "agent_output": agent_response,
        "action_taken": "respond",
    }
)
result = response.json()

JavaScript / Node.js

const response = await fetch("https://useagentshield.net/api/events", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-API-Key": "ask_your_key_here",
  },
  body: JSON.stringify({
    agent_name: "my-chatbot",
    user_input: userMessage,
    agent_output: agentResponse,
    action_taken: "respond",
  }),
});

const result = await response.json();
if (result.alert_triggered) {
  console.warn(`ALERT: ${result.alert_reason}`);
}