
Setup unified LLM key

Wire up DEFENSECLAW_LLM_KEY — the single environment variable that powers the LLM judge, the MCP / skill / plugin scanners, and any custom LLM call DefenseClaw makes through Bifrost.

DefenseClaw routes every LLM call through one of two layers: the in-process Bifrost SDK (used by the gateway and judge) or LiteLLM (used by the scanner SDKs). Both layers derive the provider-specific key (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) from a single canonical knob — DEFENSECLAW_LLM_KEY — plus the model prefix you pass on the model field.
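To make the mechanics concrete, here is a minimal sketch of how the LiteLLM layer can consume the unified key. This is an illustration only, assuming a scanner-style call; it is not DefenseClaw's internal code:

import os
import litellm

# The provider is chosen by the model prefix; the credential is the one canonical key.
# No OPENAI_API_KEY or other provider-specific env var is needed.
response = litellm.completion(
    model="openai/gpt-4o-mini",                 # "openai/" prefix routes the call to OpenAI
    api_key=os.environ["DEFENSECLAW_LLM_KEY"],  # unified key passed straight through
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)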

Set it once. Verify it with defenseclaw keys check. Override per-component only when you need a separate billing or rate-limit posture for the judge or a specific scanner.

Why one key

Operators were drowning in four keys per install (judge, skill scanner, MCP scanner, gateway upstream) — usually all pointing at the same OpenAI / Anthropic / Bedrock account. The unified key collapses that to one env var, with optional per-component overrides for the rare case where they diverge.

The three-step happy path

Set the key (interactive, hidden prompt)

defenseclaw keys set DEFENSECLAW_LLM_KEY

The CLI (built on Click) prompts for the value with hide_input=True, then writes it to ~/.defenseclaw/.env with 0o600 permissions. The value is masked in stdout (abcd…wxyz), masked in the audit log, and never echoed.
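Roughly, that behaviour amounts to the sketch below (illustrative function and file handling, not the actual CLI source):

import os
import tempfile
import click
from pathlib import Path

def set_key(env_name: str) -> None:
    """Sketch of the described behaviour: hidden prompt, masked echo, 0o600 atomic write."""
    value = click.prompt(f"Value for {env_name}", hide_input=True)
    dotenv = Path.home() / ".defenseclaw" / ".env"
    dotenv.parent.mkdir(parents=True, exist_ok=True)
    lines = []
    if dotenv.exists():
        # drop any previous entry for this env var
        lines = [line for line in dotenv.read_text().splitlines() if not line.startswith(f"{env_name}=")]
    lines.append(f"{env_name}={value}")
    fd, tmp = tempfile.mkstemp(dir=dotenv.parent)    # write a temp file first...
    with os.fdopen(fd, "w") as fh:
        fh.write("\n".join(lines) + "\n")
    os.chmod(tmp, 0o600)
    os.replace(tmp, dotenv)                          # ...then atomically rename into place
    click.echo(f"{env_name} = {value[:4]}…{value[-4:]}")  # masked; the full value is never echoed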

Verify it landed

defenseclaw keys check

Exits 0 when every required key for the current config is set, non-zero otherwise. CI-safe — no colour codes in the diff.

Run guardrail

defenseclaw setup guardrail

The wizard now sees DEFENSECLAW_LLM_KEY set and skips the "we will need an LLM key" sub-prompt for the judge / scanner sections.

Watch the flow

`keys list` shows what is needed and where it would come from. `keys set` writes to ~/.defenseclaw/.env. `keys check` is the CI gate.

All four keys subcommands

Resolution order (what keys list actually reads)

The CLI resolves credential values from the most specific source down:

1. process environment    (export DEFENSECLAW_LLM_KEY=sk-…)
2. ~/.defenseclaw/.env    (written by `defenseclaw keys set`)
3. unset                  (REQUIRED → marked MISSING; OPTIONAL → tolerated)

This order is implemented by cli/defenseclaw/credentials.py::resolve and is the same code path that defenseclaw quickstart and defenseclaw doctor use, so what you see in keys list is exactly what the running gateway and scanners see.
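A simplified stand-in for that resolve path (an illustration of the order above, not the actual credentials.py code):

import os
from pathlib import Path

DOTENV = Path.home() / ".defenseclaw" / ".env"

def resolve(env_name: str, required: bool) -> tuple[str | None, str]:
    """Illustrative resolution: process env, then the dotenv file, then unset."""
    # 1. process environment always wins
    if env_name in os.environ:
        return os.environ[env_name], "set (process env)"
    # 2. fall back to ~/.defenseclaw/.env written by `defenseclaw keys set`
    if DOTENV.exists():
        for line in DOTENV.read_text().splitlines():
            key, _, value = line.partition("=")
            if key == env_name and value:
                return value, "set (dotenv)"
    # 3. unset: REQUIRED keys are flagged MISSING, OPTIONAL ones are tolerated
    return None, "MISSING" if required else "unset (optional)"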

No macOS Keychain

DefenseClaw does not read or write the macOS Keychain. The ~/.defenseclaw/.env file is the only on-disk credential store; transient overrides go through os.environ. Use a secret manager (1Password, Vault, AWS Secrets Manager) plus export in your shell init if you need to keep secrets out of ~/.defenseclaw/.env.

Per-component overrides

For most installs, DEFENSECLAW_LLM_KEY is the only knob you ever set. When you want a different key for a specific feature — say, a higher rate-limit account for the judge, or a separate billing line for the skill-scanner LLM second opinion — you point that one component at its own env var:

~/.defenseclaw/config.yaml
guardrail:
  judge:
    enabled: true
    llm:
      provider: anthropic
      model: claude-sonnet-4-5
      api_key_env: JUDGE_ANTHROPIC_KEY     # override; falls through to DEFENSECLAW_LLM_KEY otherwise

scanners:
  skill_scanner:
    use_llm: true
    llm:
      api_key_env: SKILL_SCANNER_OPENAI_KEY

Then store those custom env names just like the canonical key:

defenseclaw keys set JUDGE_ANTHROPIC_KEY
defenseclaw keys set SKILL_SCANNER_OPENAI_KEY
defenseclaw keys check    # both REQUIRED entries should now be ✓ set

keys list automatically resolves the env name from cfg.resolve_llm(), so the table you see is always the env vars you actually configured — not the canonical defaults.
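The fallthrough itself is simple; here is a sketch of the behaviour described above (not the actual cfg.resolve_llm() implementation):

def llm_key_env(component_cfg: dict) -> str:
    """A per-component api_key_env wins; otherwise the canonical key is used."""
    return component_cfg.get("llm", {}).get("api_key_env") or "DEFENSECLAW_LLM_KEY"

# With the config above, the judge resolves its own env var while any
# component without an override still falls through to the unified key.
judge = {"llm": {"provider": "anthropic", "model": "claude-sonnet-4-5",
                 "api_key_env": "JUDGE_ANTHROPIC_KEY"}}
assert llm_key_env(judge) == "JUDGE_ANTHROPIC_KEY"
assert llm_key_env({"llm": {}}) == "DEFENSECLAW_LLM_KEY"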

Non-interactive (CI / scripts)

defenseclaw keys set DEFENSECLAW_LLM_KEY --value "$LLM_KEY"
defenseclaw keys check

--value skips the hidden prompt entirely; keys check returns a non-zero exit code that fails the build cleanly.

For multi-key bootstraps in CI (rare; usually a single key is enough):

for ENV in DEFENSECLAW_LLM_KEY CISCO_AI_DEFENSE_API_KEY SPLUNK_ACCESS_TOKEN; do
  defenseclaw keys set "$ENV" --value "${!ENV}"   # bash indirection
done
defenseclaw keys check

Provider routing — Bifrost vs LiteLLM

Both routing layers consume DEFENSECLAW_LLM_KEY plus the model prefix on the configured model name. You do not need to set provider-specific env vars:

| Component | Layer | How it picks the provider |
| --- | --- | --- |
| Gateway upstream LLM | Bifrost (Go SDK) | guardrail.llm.model prefix (openai/, anthropic/, bedrock/, vertex/, azure/, ollama/, …) |
| LLM judge | Bifrost | guardrail.judge.llm.model prefix |
| Skill scanner second opinion | LiteLLM (Python SDK) | scanners.skill_scanner.llm.model prefix |
| MCP scanner introspection | LiteLLM | scanners.mcp_scanner.llm.model prefix |

Local providers (ollama/, vllm/, lm_studio/) need no key — keys list correctly classifies them as NOT_USED even when the feature is on.
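A sketch of that classification (illustrative only, and it assumes a provider-prefixed model name):

LOCAL_PROVIDERS = {"ollama", "vllm", "lm_studio"}   # need no key, per the note above

def key_requirement(model: str) -> str:
    """Local providers never consume the key; hosted providers require it."""
    provider = model.split("/", 1)[0]
    return "NOT_USED" if provider in LOCAL_PROVIDERS else "REQUIRED"

print(key_requirement("ollama/llama3.1"))               # NOT_USED: local, no key consumed
print(key_requirement("anthropic/claude-sonnet-4-5"))   # REQUIRED: resolved from DEFENSECLAW_LLM_KEY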

Bifrost provider catalog

Inside the Go gateway, every provider call goes through the embedded Bifrost SDK — DefenseClaw never speaks directly to a provider. Bifrost handles auth, retries, streaming, and routing, and gives DefenseClaw one consistent shape for tool calls, completions, and embeddings.

mapProviderKey is the source of truth for what's wired today:

| Provider | Bifrost key | Notes |
| --- | --- | --- |
| OpenAI | openai | Default for gpt-*, o1-*, o3-*, o4-* |
| Anthropic | anthropic | Default for claude-* |
| Google Gemini | gemini | Default for gemini-* |
| AWS Bedrock | amazon-bedrock | Resolves region/profile from env |
| Azure OpenAI | azure | Needs deployment name + endpoint |
| OpenRouter | openrouter | Multi-provider passthrough |
| Groq | groq | Lower-latency open models |
| Mistral | mistral | Mistral Cloud |
| Ollama | ollama | Local model server (no key needed) |
| Vertex AI | vertex | Google Vertex |
| Cohere | cohere | Cohere Cloud |
| Perplexity | perplexity | Perplexity API |
| Cerebras | cerebras | Cerebras Cloud |
| Fireworks | fireworks | Fireworks AI |
| xAI | xai | Grok models |
| HuggingFace | huggingface | HF Inference Endpoints |
| Replicate | replicate | Replicate hosted models |
| vLLM | vllm | Self-hosted vLLM (no key needed) |

You don't pick the Bifrost key directly. You pick a model — the gateway maps gpt-4o-mini → openai, claude-3-5-sonnet-20241022 → anthropic, and so on. The unified key is then handed to whichever provider the model resolves to.
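The same mapping logic, sketched in Python purely as an illustration (the real mapProviderKey lives in the Go gateway):

def map_provider(model: str) -> str:
    """Explicit prefix first, then model-family defaults from the catalog above."""
    if "/" in model:
        return model.split("/", 1)[0]            # "anthropic/claude-..." resolves to anthropic
    if model.startswith(("gpt-", "o1-", "o3-", "o4-")):
        return "openai"
    if model.startswith("claude-"):
        return "anthropic"
    if model.startswith("gemini-"):
        return "gemini"
    raise ValueError(f"no default provider for {model!r}; use an explicit prefix")

assert map_provider("gpt-4o-mini") == "openai"
assert map_provider("claude-3-5-sonnet-20241022") == "anthropic"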

What consumes the key

The gateway upstream LLM, the LLM judge, and the skill / MCP scanner second opinions all resolve their credential from DEFENSECLAW_LLM_KEY unless a per-component api_key_env override points them elsewhere (see the routing table above).

What gets stored where

| Where | What | Notes |
| --- | --- | --- |
| ~/.defenseclaw/.env | Canonical credential store | 0o600, atomic tmp+rename writes, no comments, no metadata. |
| ~/.defenseclaw/audit.db | Action log entry per keys set | Records actor=cli:operator action=config.update target=dotenv:<ENV> with before/after = had_value. The value is never logged. |
| ~/.defenseclaw/config.yaml | The *_env references only | The actual secret never enters config.yaml. |
| os.environ | Highest-priority lookup | A shell export always wins over the dotenv copy. Useful for one-shot debugging. |
