In ClawCentral, AI providers and API keys are configured via Admin UI → Settings → AI Providers. The `openclaw.json` config examples and CLI commands (`openclaw onboard`, `openclaw models set`) described below apply to self-hosted OpenClaw deployments.
Model providers
This page covers LLM/model providers (not chat channels like WhatsApp/Telegram). For model selection rules, see /concepts/models.
Quick rules
- Model refs use `provider/model` (example: `opencode/claude-opus-4-6`).
- If you set `agents.defaults.models`, it becomes the allowlist.
- CLI helpers: `openclaw onboard`, `openclaw models list`, `openclaw models set <provider/model>`.
- Provider plugins can inject model catalogs via `registerProvider({ catalog })`; OpenClaw merges that output into `models.providers` before writing `models.json`.
- Provider manifests can declare `providerAuthEnvVars` so generic env-based auth probes do not need to load the plugin runtime. The remaining core env-var map now covers only non-plugin/core providers and a few generic-precedence cases such as Anthropic API-key-first onboarding.
- Provider plugins can also own provider runtime behavior via `resolveDynamicModel`, `prepareDynamicModel`, `normalizeResolvedModel`, `capabilities`, `prepareExtraParams`, `wrapStreamFn`, `formatApiKey`, `refreshOAuth`, `buildAuthDoctorHint`, `isCacheTtlEligible`, `buildMissingAuthMessage`, `suppressBuiltInModel`, `augmentModelCatalog`, `isBinaryThinking`, `supportsXHighThinking`, `resolveDefaultThinkingLevel`, `isModernModelRef`, `prepareRuntimeAuth`, `resolveUsageAuth`, and `fetchUsageSnapshot`.
- Note: provider runtime `capabilities` is shared runner metadata (provider family, transcript/tooling quirks, transport/cache hints). It is not the same as the public capability model, which describes what a plugin registers (text inference, speech, etc.).
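For example, a minimal `openclaw.json` fragment that pins a primary model and turns `agents.defaults.models` into an allowlist might look like this (the specific model refs are illustrative):

```
{
  agents: {
    defaults: {
      // Primary model used when nothing else overrides it
      model: { primary: "anthropic/claude-opus-4-6" },
      // Once agents.defaults.models is set, only these refs can be selected
      models: {
        "anthropic/claude-opus-4-6": {},
        "openai/gpt-5.4": {},
      },
    },
  },
}
```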
Plugin-owned provider behavior
Provider plugins can now own most provider-specific logic while OpenClaw keeps the generic inference loop.
Typical split:
- `auth[].run` / `auth[].runNonInteractive`: provider owns onboarding/login flows for `openclaw onboard`, `openclaw models auth`, and headless setup
- `wizard.setup` / `wizard.modelPicker`: provider owns auth-choice labels, legacy aliases, onboarding allowlist hints, and setup entries in onboarding/model pickers
- `catalog`: provider appears in `models.providers`
- `resolveDynamicModel`: provider accepts model ids not yet present in the local static catalog
- `prepareDynamicModel`: provider needs a metadata refresh before retrying dynamic resolution
- `normalizeResolvedModel`: provider needs transport or base URL rewrites
- `capabilities`: provider publishes transcript/tooling/provider-family quirks
- `prepareExtraParams`: provider defaults or normalizes per-model request params
- `wrapStreamFn`: provider applies request headers/body/model compat wrappers
- `formatApiKey`: provider formats stored auth profiles into the runtime `apiKey` string expected by the transport
- `refreshOAuth`: provider owns OAuth refresh when the shared `pi-ai` refreshers are not enough
- `buildAuthDoctorHint`: provider appends repair guidance when OAuth refresh fails
- `isCacheTtlEligible`: provider decides which upstream model ids support prompt-cache TTL
- `buildMissingAuthMessage`: provider replaces the generic auth-store error with a provider-specific recovery hint
- `suppressBuiltInModel`: provider hides stale upstream rows and can return a vendor-owned error for direct resolution failures
- `augmentModelCatalog`: provider appends synthetic/final catalog rows after discovery and config merging
- `isBinaryThinking`: provider owns binary on/off thinking UX
- `supportsXHighThinking`: provider opts selected models into `xhigh`
- `resolveDefaultThinkingLevel`: provider owns default `/think` policy for a model family
- `isModernModelRef`: provider owns live/smoke preferred-model matching
- `prepareRuntimeAuth`: provider turns a configured credential into a short-lived runtime token
- `resolveUsageAuth`: provider resolves usage/quota credentials for `/usage` and related status/reporting surfaces
- `fetchUsageSnapshot`: provider owns the usage endpoint fetch/parsing while core still owns the summary shell and formatting
Current bundled examples:
- `anthropic`: Claude 4.6 forward-compat fallback, auth repair hints, usage endpoint fetching, and cache-TTL/provider-family metadata
- `openrouter`: pass-through model ids, request wrappers, provider capability hints, and cache-TTL policy
- `github-copilot`: onboarding/device login, forward-compat model fallback, Claude-thinking transcript hints, runtime token exchange, and usage endpoint fetching
- `openai`: GPT-5.4 forward-compat fallback, direct OpenAI transport normalization, Codex-aware missing-auth hints, Spark suppression, synthetic OpenAI/Codex catalog rows, thinking/live-model policy, and provider-family metadata
- `google` and `google-gemini-cli`: Gemini 3.1 forward-compat fallback and modern-model matching; Gemini CLI OAuth also owns auth-profile token formatting, usage-token parsing, and quota endpoint fetching for usage surfaces
- `moonshot`: shared transport, plugin-owned thinking payload normalization
- `kilocode`: shared transport, plugin-owned request headers, reasoning payload normalization, Gemini transcript hints, and cache-TTL policy
- `zai`: GLM-5 forward-compat fallback, `tool_stream` defaults, cache-TTL policy, binary-thinking/live-model policy, and usage auth + quota fetching
- `mistral`, `opencode`, and `opencode-go`: plugin-owned capability metadata
- `byteplus`, `cloudflare-ai-gateway`, `huggingface`, `kimi-coding`, `modelstudio`, `nvidia`, `qianfan`, `synthetic`, `together`, `venice`, `vercel-ai-gateway`, and `volcengine`: plugin-owned catalogs only
- `qwen-portal`: plugin-owned catalog, OAuth login, and OAuth refresh
- `minimax` and `xiaomi`: plugin-owned catalogs plus usage auth/snapshot logic
The bundled openai plugin now owns both provider ids: `openai` and `openai-codex`.
That covers providers that still fit OpenClaw's normal transports. A provider that needs a totally custom request executor is a separate, deeper extension surface.
API key rotation
- Supports generic provider rotation for selected providers.
- Configure multiple keys via:
  - `OPENCLAW_LIVE_<PROVIDER>_KEY` (single live override, highest priority)
  - `<PROVIDER>_API_KEYS` (comma- or semicolon-separated list)
  - `<PROVIDER>_API_KEY` (primary key)
  - `<PROVIDER>_API_KEY_*` (numbered list, e.g. `<PROVIDER>_API_KEY_1`)
- For Google providers, `GOOGLE_API_KEY` is also included as a fallback.
- Key selection order preserves priority and deduplicates values.
- Requests are retried with the next key only on rate-limit responses (for example `429`, `rate_limit`, `quota`, `resource exhausted`).
- Non-rate-limit failures fail immediately; no key rotation is attempted.
- When all candidate keys fail, the final error is returned from the last attempt.
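As a concrete illustration, rotation for the OpenAI provider could be configured with environment variables like these (key values are placeholders):

```
# Single live override (highest priority)
OPENCLAW_LIVE_OPENAI_KEY=sk-live-placeholder

# List form (comma or semicolon separated)
OPENAI_API_KEYS=sk-first-placeholder,sk-second-placeholder

# Primary key plus numbered extras
OPENAI_API_KEY=sk-first-placeholder
OPENAI_API_KEY_1=sk-second-placeholder
```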
Built-in providers (pi-ai catalog)
OpenClaw ships with the pi-ai catalog. These providers require no `models.providers` config; just set auth and pick a model.
OpenAI
- Provider: `openai`
- Auth: `OPENAI_API_KEY`
- Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
- Example models: `openai/gpt-5.4`, `openai/gpt-5.4-pro`
- CLI: `openclaw onboard --auth-choice openai-api-key`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- OpenAI Responses WebSocket warm-up defaults to enabled via `params.openaiWsWarmup` (true/false)
- OpenAI priority processing can be enabled via `agents.defaults.models["openai/<model>"].params.serviceTier`
- OpenAI fast mode can be enabled per model via `agents.defaults.models["<provider>/<model>"].params.fastMode`
- `openai/gpt-5.3-codex-spark` is intentionally suppressed in OpenClaw because the live OpenAI API rejects it; Spark is treated as Codex-only
```
{
  agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
}
```
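Combining the transport and fast-mode options above, a per-model override might look like this (the values are illustrative; `params.transport` accepts `"sse"`, `"websocket"`, or `"auto"`):

```
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            transport: "websocket", // pin WebSocket instead of auto
            openaiWsWarmup: true,   // Responses WebSocket warm-up (enabled by default)
            fastMode: true,         // per-model fast mode
          },
        },
      },
    },
  },
}
```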
Anthropic
- Provider: `anthropic`
- Auth: `ANTHROPIC_API_KEY` or `claude setup-token`
- Optional rotation: `ANTHROPIC_API_KEYS`, `ANTHROPIC_API_KEY_1`, `ANTHROPIC_API_KEY_2`, plus `OPENCLAW_LIVE_ANTHROPIC_KEY` (single override)
- Example model: `anthropic/claude-opus-4-6`
- CLI: `openclaw onboard --auth-choice token` (paste setup-token) or `openclaw models auth paste-token --provider anthropic`
- Direct API-key models support the shared `/fast` toggle and `params.fastMode`; OpenClaw maps that to Anthropic `service_tier` (`auto` vs `standard_only`)
- Policy note: setup-token support is technical compatibility; Anthropic has blocked some subscription usage outside Claude Code in the past. Verify current Anthropic terms and decide based on your risk tolerance.
- Recommendation: Anthropic API key auth is the safer, recommended path over subscription setup-token auth.
```
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
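To enable the fast-mode mapping described above for a specific model (OpenClaw maps `params.fastMode` to Anthropic `service_tier`), a minimal sketch:

```
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": {
          // Maps to Anthropic service_tier (auto vs standard_only)
          params: { fastMode: true },
        },
      },
    },
  },
}
```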
OpenAI Code (Codex)
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- Example model: `openai-codex/gpt-5.4`
- CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`
- `openai-codex/gpt-5.3-codex-spark` remains available when the Codex OAuth catalog exposes it; entitlement-dependent
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
```
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
}
```
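As with direct `openai/*` models, the Codex transport can be pinned per model; for example, forcing SSE instead of the WebSocket-first default:

```
{
  agents: {
    defaults: {
      models: {
        "openai-codex/gpt-5.4": {
          // "sse", "websocket", or "auto"
          params: { transport: "sse" },
        },
      },
    },
  },
}
```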