# Observability
Every prompt, tool call, scan finding, and HITL decision lands in your SIEM and your dashboards. DefenseClaw ships local Grafana/Loki/Tempo and local Splunk stacks you can stand up with one command.
The DefenseClaw gateway is observable by design. Every decision (admission, guardrail verdict, judge call, scanner finding, HITL approval) is structured, correlated by `trace_id`/`span_id`, and emitted on three independent rails.
You can use any combination of those rails; they're independent sinks, not exclusive backends. Most teams start with one of the bundled local stacks while they're building policy, then add their production SIEM as a second sink without rewriting anything.
## The two bundled stacks
### Local OTel + Grafana stack

One-command Compose stack: OTel Collector, Loki, Tempo, Prometheus, and Grafana, all wired to the gateway by `defenseclaw setup local-observability up`.
### Splunk (local + Enterprise)

`defenseclaw setup splunk` runs three independent pipelines: `--logs` (local Splunk in Docker), `--enterprise` (remote HEC), and `--o11y` (Splunk Observability Cloud).
## What lands in observability
Every event has the same envelope so you can correlate across rails:
```json
{
  "ts": "2026-05-08T12:00:01.234Z",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "agent": "claudecode",
  "session_id": "...",
  "kind": "guardrail.verdict",
  "decision": "deny",
  "rule": "secrets/aws_access_key",
  "severity": "high",
  "actor": { "operator": "vineeth", "host": "..." },
  "payload": { "matched": "AKIA****", "tool": "fs.write" }
}
```

The `kind` field is your top-level taxonomy:
| `kind` | Emitted when |
|---|---|
| `admission.scan` | Skill or MCP scanner finishes a scan |
| `admission.decision` | Watcher applies an OPA verdict |
| `guardrail.verdict` | A regex / LLM judge / inspector resolves |
| `judge.call` | The LLM judge is invoked (with model + token usage) |
| `hitl.prompt` | Operator is asked for approval |
| `hitl.decision` | Operator answered allow/deny |
| `tool.call` / `tool.result` | The agent invoked an MCP/native tool |
| `proxy.request` / `proxy.response` | Gateway-proxied LLM call |
| `gateway.lifecycle` | Start / reload / shutdown |
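Because every rail carries the same envelope, correlation is just a group-by on `trace_id`. A minimal Python sketch of that idea, operating on line-delimited envelope events (the `correlate` helper is ours for illustration, not a DefenseClaw API):

```python
import json
from collections import defaultdict

def correlate(lines):
    """Group line-delimited envelope events by trace_id so one agent
    action (tool call, verdict, HITL decision) reads as a single story."""
    traces = defaultdict(list)
    for line in lines:
        event = json.loads(line)
        traces[event["trace_id"]].append(
            (event["ts"], event["kind"], event.get("decision"))
        )
    for spans in traces.values():
        # Envelope timestamps are ISO 8601, so a string sort is chronological.
        spans.sort(key=lambda span: span[0])
    return dict(traces)

# Sample lines shaped like the envelope above (values are illustrative):
sample = [
    '{"ts": "2026-05-08T12:00:00.900Z", "trace_id": "4bf9", "kind": "tool.call"}',
    '{"ts": "2026-05-08T12:00:01.234Z", "trace_id": "4bf9", "kind": "guardrail.verdict", "decision": "deny"}',
]
for ts, kind, decision in correlate(sample)["4bf9"]:
    print(ts, kind, decision or "-")
```

The same group-by works whether the events come from the JSONL log, Loki, or Splunk, since the envelope fields are identical on every rail.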
You don't pick the schema in setup; you pick the destinations. Sinks are configured in the `audit_sinks:` block of `~/.defenseclaw/config.yaml`, and the bundled setup commands maintain that block for you.
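The exact keys under `audit_sinks:` aren't shown here, so the fragment below is a hypothetical sketch of the shape only; the sink names and key names are illustrative, not the gateway's real schema, and in practice the setup commands write this block for you.

```yaml
# Hypothetical audit_sinks block -- key names chosen for illustration.
audit_sinks:
  jsonl:
    path: ~/.defenseclaw/gateway.jsonl
  otel:
    endpoint: http://localhost:4317      # standard OTLP/gRPC port (assumed)
  splunk_hec:
    endpoint: https://splunk.example.com:8088
    token_env: DEFENSECLAW_SPLUNK_HEC_TOKEN
```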
## Pick a starting point
**Local stack only.** The fastest way to see what DefenseClaw is doing. One command brings up Grafana on :3000 with pre-built dashboards.

```shell
defenseclaw setup local-observability up
```

**Splunk Enterprise.** You already have Splunk in the org. Point HEC at the gateway and you're done; no extra infra to operate.

```shell
defenseclaw setup splunk --enterprise \
  --hec-endpoint https://splunk.example.com:8088 \
  --hec-token "$DEFENSECLAW_SPLUNK_HEC_TOKEN"
```

**Both.** Run the local Grafana stack for engineering teams and forward the same events to Splunk for the SOC. Each sink is independent, so failures don't cascade.

```shell
defenseclaw setup local-observability up
defenseclaw setup splunk --enterprise --hec-endpoint ... --hec-token ...
```

The gateway will fan out every event to both rails. Verify with `defenseclaw tui` (audit panel) or `tail -f ~/.defenseclaw/gateway.jsonl | jq`.
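For a quick verification pass without Grafana or Splunk at all, you can filter the JSONL rail down to the events a reviewer actually cares about. A small sketch under our own naming (nothing here is a DefenseClaw API; it just consumes the envelope format):

```python
import json

# Kinds a reviewer typically wants surfaced first (our choice, not a default).
ALERT_KINDS = {"guardrail.verdict", "hitl.prompt", "hitl.decision"}

def audit_filter(lines, kinds=ALERT_KINDS):
    """Yield only envelope events whose kind is in the watch set,
    e.g. when fed lines from ~/.defenseclaw/gateway.jsonl."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event["kind"] in kinds:
            yield event

# Two envelope-shaped lines; only the verdict should survive the filter.
sample = [
    '{"ts": "2026-05-08T12:00:01.000Z", "trace_id": "t1", "kind": "proxy.request"}',
    '{"ts": "2026-05-08T12:00:01.200Z", "trace_id": "t1", "kind": "guardrail.verdict", "decision": "deny"}',
]
for event in audit_filter(sample):
    print(event["kind"], event.get("decision"))
```

This is essentially what the `tail -f ... | jq` one-liner does, with the kind filter made explicit so you can commit it as a script.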
## Common questions
### OpenClaw integration

How DefenseClaw integrates with OpenClaw end-to-end: fetch interceptor, `before_tool_call` hook, correlation headers, plugin-mediated HITL approvals, and the audit loop.
### Local observability stack

One-command OpenTelemetry + Loki + Tempo + Prometheus + Grafana stack, pre-wired to the DefenseClaw gateway. Grafana on :3000, dashboards seeded, no manual config.