FAQ
Some answers below describe self-hosted OpenClaw deployments. ClawCentral users manage everything via the Admin UI — no local gateway, CLI, or config file editing is needed. Sign in at https://<your-tenant>.clawcentral.io.
Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS, multi-agent, OAuth/API keys, model failover). For runtime diagnostics, see Troubleshooting. For the full config reference, see Configuration.
Table of contents
- [Quick start and first-run setup]
- I am stuck - fastest way to get unstuck
- Recommended way to install and set up OpenClaw
- How do I open the dashboard after onboarding?
- How do I authenticate the dashboard (token) on localhost vs remote?
- What runtime do I need?
- Does it run on Raspberry Pi?
- Any tips for Raspberry Pi installs?
- It is stuck on "wake up my friend" / onboarding will not hatch. What now?
- Can I migrate my setup to a new machine (Mac mini) without redoing onboarding?
- Where do I see what is new in the latest version?
- Cannot access docs.clawcentral.io (SSL error)
- Difference between stable and beta
- How do I install the beta version and what is the difference between beta and dev
- How do I try the latest bits?
- How long does install and onboarding usually take?
- Installer stuck? How do I get more feedback?
- Windows install says git not found or openclaw not recognized
- Windows exec output shows garbled Chinese text - what should I do?
- The docs did not answer my question - how do I get a better answer
- How do I install OpenClaw on Linux?
- How do I install OpenClaw on a VPS?
- Where are the cloud/VPS install guides?
- Can I ask OpenClaw to update itself?
- What does onboarding actually do?
- Do I need a Claude or OpenAI subscription to run this?
- Can I use Claude Max subscription without an API key
- How does Anthropic "setup-token" auth work?
- Where do I find an Anthropic setup-token?
- Do you support Claude subscription auth (Claude Pro or Max)?
- Why am I seeing HTTP 429: rate_limit_error from Anthropic?
- Is AWS Bedrock supported?
- How does Codex auth work?
- Do you support OpenAI subscription auth (Codex OAuth)?
- How do I set up Gemini CLI OAuth
- Is a local model OK for casual chats?
- How do I keep hosted model traffic in a specific region?
- Do I have to buy a Mac Mini to install this?
- Do I need a Mac mini for iMessage support?
- If I buy a Mac mini to run OpenClaw, can I connect it to my MacBook Pro?
- Can I use Bun?
- Telegram: what goes in allowFrom?
- Can multiple people use one WhatsApp number with different OpenClaw instances?
- Can I run a "fast chat" agent and an "Opus for coding" agent?
- Does Homebrew work on Linux?
- Difference between the hackable git install and npm install
- Can I switch between npm and git installs later?
- Should I run the Gateway on my laptop or a VPS?
- How important is it to run OpenClaw on a dedicated machine?
- What are the minimum VPS requirements and recommended OS?
- Can I run OpenClaw in a VM and what are the requirements
- What is OpenClaw?
- Skills and automation
- How do I customize skills without keeping the repo dirty?
- Can I load skills from a custom folder?
- How can I use different models for different tasks?
- The bot freezes while doing heavy work. How do I offload that?
- Cron or reminders do not fire. What should I check?
- How do I install skills on Linux?
- Can OpenClaw run tasks on a schedule or continuously in the background?
- Can I run Apple macOS-only skills from Linux?
- Do you have a Notion or HeyGen integration?
- How do I use my existing signed-in Chrome with OpenClaw?
- Sandboxing and memory
- Where things live on disk
- Config basics
- What format is the config? Where is it?
- I set gateway.bind: "lan" (or "tailnet") and now nothing listens / the UI says unauthorized
- Why do I need a token on localhost now?
- Do I have to restart after changing config?
- How do I disable funny CLI taglines?
- How do I enable web search (and web fetch)?
- config.apply wiped my config. How do I recover and avoid this?
- How do I run a central Gateway with specialized workers across devices?
- Can the OpenClaw browser run headless?
- How do I use Brave for browser control?
- Remote gateways and nodes
- How do commands propagate between Telegram, the gateway, and nodes?
- How can my agent access my computer if the Gateway is hosted remotely?
- Tailscale is connected but I get no replies. What now?
- Can two OpenClaw instances talk to each other (local + VPS)?
- Do I need separate VPSes for multiple agents
- Is there a benefit to using a node on my personal laptop instead of SSH from a VPS?
- Do nodes run a gateway service?
- Is there an API / RPC way to apply config?
- Minimal sane config for a first install
- How do I set up Tailscale on a VPS and connect from my Mac?
- How do I connect a Mac node to a remote Gateway (Tailscale Serve)?
- Should I install on a second laptop or just add a node?
- Env vars and .env loading
- Sessions and multiple chats
- How do I start a fresh conversation?
- Do sessions reset automatically if I never send /new?
- Is there a way to make a team of OpenClaw instances (one CEO and many agents)?
- Why did context get truncated mid-task? How do I prevent it?
- How do I completely reset OpenClaw but keep it installed?
- I'm getting "context too large" errors - how do I reset or compact?
- Why am I seeing "LLM request rejected: messages.content.tool_use.input field required"?
- Why am I getting heartbeat messages every 30 minutes?
- Do I need to add a "bot account" to a WhatsApp group?
- How do I get the JID of a WhatsApp group?
- Why does OpenClaw not reply in a group
- Do groups/threads share context with DMs?
- How many workspaces and agents can I create?
- Can I run multiple bots or chats at the same time (Slack), and how should I set that up?
- Models: defaults, selection, aliases, switching
- What is the "default model"?
- What model do you recommend?
- How do I switch models without wiping my config?
- Can I use self-hosted models (llama.cpp, vLLM, Ollama)?
- What do OpenClaw, Flawd, and Krill use for models?
- How do I switch models on the fly (without restarting)?
- Can I use GPT 5.2 for daily tasks and Codex 5.3 for coding
- Why do I see "Model … is not allowed" and then no reply?
- Why do I see "Unknown model: minimax/MiniMax-M2.5"?
- Can I use MiniMax as my default and OpenAI for complex tasks?
- Are opus / sonnet / gpt built-in shortcuts?
- How do I define/override model shortcuts (aliases)?
- How do I add models from other providers like OpenRouter or Z.AI?
- Model failover and "All models failed"
- Auth profiles: what they are and how to manage them
- Gateway: ports, "already running", and remote mode
- What port does the Gateway use?
- Why does openclaw gateway status say Runtime: running but RPC probe: failed?
- Why does openclaw gateway status show Config (cli) and Config (service) as different?
- What does "another gateway instance is already listening" mean?
- How do I run OpenClaw in remote mode (client connects to a Gateway elsewhere)?
- The Control UI says "unauthorized" (or keeps reconnecting). What now?
- I set gateway.bind tailnet but it cannot bind and nothing listens
- Can I run multiple Gateways on the same host?
- What does "invalid handshake" / code 1008 mean?
- Logging and debugging
- Where are logs?
- How do I start/stop/restart the Gateway service?
- I closed my terminal on Windows - how do I restart OpenClaw?
- The Gateway is up but replies never arrive. What should I check?
- "Disconnected from gateway: no reason" - what now?
- Telegram setMyCommands fails. What should I check?
- TUI shows no output. What should I check?
- How do I completely stop then start the Gateway?
- ELI5: openclaw gateway restart vs openclaw gateway
- Fastest way to get more details when something fails
- Media and attachments
- Security and access control
- Is it safe to expose OpenClaw to inbound DMs?
- Is prompt injection only a concern for public bots?
- Should my bot have its own email GitHub account or phone number
- Can I give it autonomy over my text messages and is that safe
- Can I use cheaper models for personal assistant tasks?
- I ran /start in Telegram but did not get a pairing code
- WhatsApp: will it message my contacts? How does pairing work?
- Chat commands, aborting tasks, and "it will not stop"
First 60 seconds if something is broken
- Quick status (first check): openclaw status - fast local summary: OS + update, gateway/service reachability, agents/sessions, provider config + runtime issues (when the gateway is reachable).
- Pasteable report (safe to share): openclaw status --all - read-only diagnosis with log tail (tokens redacted).
- Daemon + port state: openclaw gateway status - shows supervisor runtime vs RPC reachability, the probe target URL, and which config the service likely used.
- Deep probes: openclaw status --deep - runs gateway health checks + provider probes (requires a reachable gateway). See Health.
- Tail the latest log: openclaw logs --follow - if RPC is down, fall back to tail -f "$(ls -t /tmp/openclaw/openclaw-*.log | head -1)". File logs are separate from service logs; see Logging and Troubleshooting.
- Run the doctor (repairs): openclaw doctor - repairs/migrates config/state and runs health checks. See Doctor.
- Gateway snapshot: openclaw health --json, or openclaw health --verbose to show the target URL + config path on errors - asks the running gateway for a full snapshot (WS-only). See Health.
Quick start and first-run setup
I am stuck - fastest way to get unstuck
Use a local AI agent that can see your machine. That is far more effective than asking in Discord, because most "I'm stuck" cases are local config or environment issues that remote helpers cannot inspect.
- Claude Code: https://www.anthropic.com/claude-code/
- OpenAI Codex: https://openai.com/codex/
These tools can read the repo, run commands, inspect logs, and help fix your machine-level setup (PATH, services, permissions, auth files). Give them the full source checkout via the hackable (git) install:
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --install-method git
This installs OpenClaw from a git checkout, so the agent can read the code + docs and
reason about the exact version you are running. You can always switch back to stable later
by re-running the installer without --install-method git.
Tip: ask the agent to plan and supervise the fix (step-by-step), then execute only the necessary commands. That keeps changes small and easier to audit.
If you discover a real bug or fix, please file a GitHub issue or send a PR: https://github.com/openclaw/openclaw/issues https://github.com/openclaw/openclaw/pulls
Start with these commands (share outputs when asking for help):
openclaw status
openclaw models status
openclaw doctor
What they do:
- openclaw status: quick snapshot of gateway/agent health + basic config.
- openclaw models status: checks provider auth + model availability.
- openclaw doctor: validates and repairs common config/state issues.
Other useful CLI checks: openclaw status --all, openclaw logs --follow,
openclaw gateway status, openclaw health --verbose.
Quick debug loop: First 60 seconds if something is broken. Install docs: Install, Installer flags, Updating.
Recommended way to install and set up OpenClaw
The repo recommends running from source and using onboarding:
curl -fsSL https://openclaw.ai/install.sh | bash
openclaw onboard --install-daemon
The wizard can also build UI assets automatically. After onboarding, you typically run the Gateway on port 18789.
From source (contributors/dev):
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm build
pnpm ui:build # auto-installs UI deps on first run
openclaw onboard
If you don't have a global install yet, run it via pnpm openclaw onboard.
How do I open the dashboard after onboarding
The wizard opens your browser with a clean (non-tokenized) dashboard URL right after onboarding and also prints the link in the summary. Keep that tab open; if it didn't launch, copy/paste the printed URL on the same machine.
How do I authenticate the dashboard (token) on localhost vs remote?
Localhost (same machine):
- Open http://127.0.0.1:18789/.
- If it asks for auth, paste the token from gateway.auth.token (or OPENCLAW_GATEWAY_TOKEN) into Control UI settings.
- Retrieve it from the gateway host: openclaw config get gateway.auth.token (or generate one: openclaw doctor --generate-gateway-token).
Not on localhost:
- Tailscale Serve (recommended): keep bind loopback, run openclaw gateway --tailscale serve, and open https://<magicdns>/. If gateway.auth.allowTailscale is true, identity headers satisfy Control UI/WebSocket auth (no token; assumes a trusted gateway host); HTTP APIs still require a token/password.
- Tailnet bind: run openclaw gateway --bind tailnet --token "<token>", open http://<tailscale-ip>:18789/, and paste the token in dashboard settings.
- SSH tunnel: ssh -N -L 18789:127.0.0.1:18789 user@host, then open http://127.0.0.1:18789/ and paste the token in Control UI settings.
See Dashboard and Web surfaces for bind modes and auth details.
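For reference, the options above correspond to a handful of gateway config keys. A minimal sketch, assuming a JSON-style config and only the key names this FAQ mentions (gateway.bind, gateway.auth.token); check the Configuration reference for the authoritative schema:

```json5
// Sketch only - key names taken from this FAQ; values are illustrative.
{
  gateway: {
    bind: "loopback",   // keep loopback behind Tailscale Serve, or use "tailnet"
    port: 18789,        // default Gateway port
    auth: {
      token: "<long-random-token>"  // the value you paste into Control UI settings
    }
  }
}
```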
What runtime do I need
Node >= 22 is required. pnpm is recommended. Bun is not recommended for the Gateway.
Does it run on Raspberry Pi
Yes. The Gateway is lightweight - docs list 512MB-1GB RAM, 1 core, and about 500MB disk as enough for personal use, and note that a Raspberry Pi 4 can run it.
If you want extra headroom (logs, media, other services), 2GB is recommended, but it's not a hard minimum.
Tip: a small Pi/VPS can host the Gateway, and you can pair nodes on your laptop/phone for local screen/camera/canvas or command execution. See Nodes.
Any tips for Raspberry Pi installs
Short version: it works, but expect rough edges.
- Use a 64-bit OS and keep Node >= 22.
- Prefer the hackable (git) install so you can see logs and update fast.
- Start without channels/skills, then add them one by one.
- If you hit weird binary issues, it is usually an ARM compatibility problem.
It is stuck on "wake up my friend" / onboarding will not hatch. What now?
That screen depends on the Gateway being reachable and authenticated. The TUI also sends "Wake up, my friend!" automatically on first hatch. If you see that line with no reply and tokens stay at 0, the agent never ran.
- Restart the Gateway:
openclaw gateway restart
- Check status + auth:
openclaw status
openclaw models status
openclaw logs --follow
- If it still hangs, run:
openclaw doctor
If the Gateway is remote, ensure the tunnel/Tailscale connection is up and that the UI is pointed at the right Gateway. See Remote access.
Can I migrate my setup to a new machine (Mac mini) without redoing onboarding?
Yes. Copy the state directory and workspace, then run Doctor once. This keeps your bot "exactly the same" (memory, session history, auth, and channel state) as long as you copy both locations:
- Install OpenClaw on the new machine.
- Copy $OPENCLAW_STATE_DIR (default: ~/.openclaw) from the old machine.
- Copy your workspace (default: ~/.openclaw/workspace).
- Run openclaw doctor and restart the Gateway service.
That preserves config, auth profiles, WhatsApp creds, sessions, and memory. If you're in remote mode, remember the gateway host owns the session store and workspace.
Important: if you only commit/push your workspace to GitHub, you're backing
up memory + bootstrap files, but not session history or auth. Those live
under ~/.openclaw/ (for example ~/.openclaw/agents/<agentId>/sessions/).
Related: Migrating, Where things live on disk, Agent workspace, Doctor, Remote mode.
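The copy steps above can be sketched as a small shell script. The paths are the defaults named in this FAQ (OPENCLAW_STATE_DIR, ~/.openclaw); the archive/restore commands are illustrative, not an official backup tool:

```shell
# Sketch: archive everything the migration steps copy (state dir + workspace,
# which lives inside the state dir by default).
STATE_DIR="${OPENCLAW_STATE_DIR:-$HOME/.openclaw}"
BACKUP=/tmp/openclaw-backup.tar.gz
if [ -d "$STATE_DIR" ]; then
  tar czf "$BACKUP" -C "$(dirname "$STATE_DIR")" "$(basename "$STATE_DIR")"
  echo "wrote $BACKUP"
fi
# On the new machine:
#   tar xzf /tmp/openclaw-backup.tar.gz -C "$HOME"
#   openclaw doctor && openclaw gateway restart
```

If you moved your workspace outside the state dir, archive that location too.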
Where do I see what is new in the latest version
Check the GitHub changelog: https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md
Newest entries are at the top. If the top section is marked Unreleased, the next dated section is the latest shipped version. Entries are grouped by Highlights, Changes, and Fixes (plus docs/other sections when needed).
Cannot access docs.clawcentral.io (SSL error)
Some Comcast/Xfinity connections incorrectly block docs.clawcentral.io via Xfinity
Advanced Security. Disable it or allowlist docs.clawcentral.io, then retry. More
detail: Troubleshooting.
Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status.
If you still can't reach the site, contact support at clawcentral.io.
Difference between stable and beta
Stable and beta are npm dist-tags, not separate code lines:
- latest = stable
- beta = early build for testing
We ship builds to beta, test them, and once a build is solid we promote
that same version to latest. That's why beta and stable can point at the
same version.
See what changed: https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md
How do I install the beta version and what is the difference between beta and dev
Beta is the npm dist-tag beta (may match latest).
Dev is the moving head of main (git); when published, it uses the npm dist-tag dev.
One-liners (macOS/Linux):
curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash -s -- --beta
curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash -s -- --install-method git
Windows installer (PowerShell): https://openclaw.ai/install.ps1
More detail: Development channels and Installer flags.
How long does install and onboarding usually take
Rough guide:
- Install: 2-5 minutes
- Onboarding: 5-15 minutes depending on how many channels/models you configure
If it hangs, use Installer stuck and the fast debug loop in I am stuck.
How do I try the latest bits
Two options:
- Dev channel (git checkout):
openclaw update --channel dev
This switches to the main branch and updates from source.
- Hackable install (from the installer site):
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --install-method git
That gives you a local repo you can edit, then update via git.
If you prefer a clean clone manually, use:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm build
Docs: Update, Development channels, Install.
Installer stuck? How do I get more feedback?
Re-run the installer with verbose output:
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --verbose
Beta install with verbose:
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --beta --verbose
For a hackable (git) install:
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --install-method git --verbose
Windows (PowerShell) equivalent:
# install.ps1 has no dedicated -Verbose flag yet.
Set-PSDebug -Trace 1
& ([scriptblock]::Create((iwr -useb https://openclaw.ai/install.ps1))) -NoOnboard
Set-PSDebug -Trace 0
More options: Installer flags.
Windows install says git not found or openclaw not recognized
Two common Windows issues:
1) npm error spawn git / git not found
- Install Git for Windows and make sure git is on your PATH.
- Close and reopen PowerShell, then re-run the installer.
2) openclaw is not recognized after install
- Your npm global bin folder is not on PATH.
- Check the path: npm config get prefix
- Add that directory to your user PATH (no \bin suffix needed on Windows; on most systems it is %AppData%\npm).
- Close and reopen PowerShell after updating PATH.
If you want the smoothest Windows setup, use WSL2 instead of native Windows. Docs: Windows.
Windows exec output shows garbled Chinese text - what should I do?
This is usually a console code page mismatch on native Windows shells.
Symptoms:
- system.run/exec output renders Chinese as mojibake.
- The same command looks fine in another terminal profile.
Quick workaround in PowerShell:
chcp 65001
[Console]::InputEncoding = [System.Text.UTF8Encoding]::new($false)
[Console]::OutputEncoding = [System.Text.UTF8Encoding]::new($false)
$OutputEncoding = [System.Text.UTF8Encoding]::new($false)
Then restart the Gateway and retry your command:
openclaw gateway restart
If you still reproduce this on the latest OpenClaw, report it on GitHub: https://github.com/openclaw/openclaw/issues
The docs did not answer my question - how do I get a better answer
Use the hackable (git) install so you have the full source and docs locally, then ask your bot (or Claude/Codex) from that folder so it can read the repo and answer precisely.
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --install-method git
More detail: Install and Installer flags.
How do I install OpenClaw on Linux
Short answer: follow the Linux guide, then run onboarding.
- Linux quick path + service install: Linux.
- Full walkthrough: Getting Started.
- Installer + updates: Install & updates.
How do I install OpenClaw on a VPS
Any Linux VPS works. Install on the server, then use SSH/Tailscale to reach the Gateway.
Guides: exe.dev, Hetzner, Fly.io. Remote access: Gateway remote.
Where are the cloud/VPS install guides?
We keep a hosting hub with the common providers. Pick one and follow the guide:
- VPS hosting (all providers in one place)
- Fly.io
- Hetzner
- exe.dev
How it works in the cloud: the Gateway runs on the server, and you access it from your laptop/phone via the Control UI (or Tailscale/SSH). Your state + workspace live on the server, so treat the host as the source of truth and back it up.
You can pair nodes (Mac/iOS/Android/headless) to that cloud Gateway to access local screen/camera/canvas or run commands on your laptop while keeping the Gateway in the cloud.
Hub: Platforms. Remote access: Gateway remote. Nodes: Nodes, Nodes CLI.
Can I ask OpenClaw to update itself
Short answer: possible, not recommended. The update flow can restart the Gateway (which drops the active session), may need a clean git checkout, and can prompt for confirmation. Safer: run updates from a shell as the operator.
Use the CLI:
openclaw update
openclaw update status
openclaw update --channel stable|beta|dev
openclaw update --tag <dist-tag|version>
openclaw update --no-restart
If you must automate from an agent:
openclaw update --yes --no-restart
openclaw gateway restart
What does onboarding actually do
openclaw onboard is the recommended setup path. In local mode it walks you through:
- Model/auth setup (provider OAuth/setup-token flows and API keys supported, plus local model options such as LM Studio)
- Workspace location + bootstrap files
- Gateway settings (bind/port/auth/tailscale)
- Providers (WhatsApp, Telegram, Discord, Mattermost (plugin), Signal, iMessage)
- Daemon install (LaunchAgent on macOS; systemd user unit on Linux/WSL2)
- Health checks and skills selection
It also warns if your configured model is unknown or missing auth.
Do I need a Claude or OpenAI subscription to run this
No. You can run OpenClaw with API keys (Anthropic/OpenAI/others) or with local-only models so your data stays on your device. Subscriptions (Claude Pro/Max or OpenAI Codex) are optional ways to authenticate those providers.
If you choose Anthropic subscription auth, decide for yourself whether to use it: Anthropic has blocked some subscription usage outside Claude Code in the past. OpenAI Codex OAuth is explicitly supported for external tools like OpenClaw.
Docs: Anthropic, OpenAI, Local models, Models.
Can I use Claude Max subscription without an API key
Yes. You can authenticate with a setup-token instead of an API key. This is the subscription path.
Claude Pro/Max subscriptions do not include an API key, so this is the technical path for subscription accounts. But this is your decision: Anthropic has blocked some subscription usage outside Claude Code in the past. If you want the clearest and safest supported path for production, use an Anthropic API key.
How does Anthropic "setup-token" auth work?
claude setup-token generates a token string via the Claude Code CLI (it is not available in the web console). You can run it on any machine. Choose Anthropic token (paste setup-token) in onboarding or paste it with openclaw models auth paste-token --provider anthropic. The token is stored as an auth profile for the anthropic provider and used like an API key (no auto-refresh). More detail: OAuth.
Where do I find an Anthropic setup-token?
It is not in the Anthropic Console. The setup-token is generated by the Claude Code CLI on any machine:
claude setup-token
Copy the token it prints, then choose Anthropic token (paste setup-token) in onboarding. If you want to run it on the gateway host, use openclaw models auth setup-token --provider anthropic. If you ran claude setup-token elsewhere, paste it on the gateway host with openclaw models auth paste-token --provider anthropic. See Anthropic.
Do you support Claude subscription auth (Claude Pro or Max)
Yes - via setup-token. OpenClaw no longer reuses Claude Code CLI OAuth tokens; use a setup-token or an Anthropic API key. Generate the token anywhere and paste it on the gateway host. See Anthropic and OAuth.
Important: this is technical compatibility, not a policy guarantee. Anthropic has blocked some subscription usage outside Claude Code in the past. You need to decide whether to use it and verify Anthropic's current terms. For production or multi-user workloads, Anthropic API key auth is the safer, recommended choice.
Why am I seeing HTTP 429: rate_limit_error from Anthropic?
That means your Anthropic quota/rate limit is exhausted for the current window. If you use a Claude subscription (setup-token), wait for the window to reset or upgrade your plan. If you use an Anthropic API key, check the Anthropic Console for usage/billing and raise limits as needed.
If the message is specifically "Extra usage is required for long context requests", the request is trying to use Anthropic's 1M-context beta (context1m: true). That only works when your credential is eligible for long-context billing (API-key billing, or a subscription with Extra Usage enabled).
Tip: set a fallback model so OpenClaw can keep replying while a provider is rate-limited. See Models, OAuth, and /gateway/troubleshooting#anthropic-429-extra-usage-required-for-long-context.
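The fallback tip above might look roughly like this in config. This is a sketch: the "model"/"fallbacks" key names and the primary model id are illustrative assumptions, and only openai-codex/gpt-5.4 appears elsewhere in this FAQ; the real schema lives in the Models docs.

```json5
// Sketch only - key names and the primary model id are assumptions.
{
  agents: {
    main: {
      model: "anthropic/claude-opus",        // primary, subject to the 429s above
      fallbacks: ["openai-codex/gpt-5.4"]    // answers come from here while the primary is rate-limited
    }
  }
}
```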
Is AWS Bedrock supported
Yes - via pi-ai's Amazon Bedrock (Converse) provider with manual config. You must supply AWS credentials/region on the gateway host and add a Bedrock provider entry in your models config. See Amazon Bedrock and Model providers. If you prefer a managed key flow, an OpenAI-compatible proxy in front of Bedrock is still a valid option.
How does Codex auth work
OpenClaw supports OpenAI Code (Codex) via OAuth (ChatGPT sign-in). Onboarding can run the OAuth flow and will set the default model to openai-codex/gpt-5.4 when appropriate. See Model providers.
Do you support OpenAI subscription auth (Codex OAuth)?
Yes. OpenClaw fully supports OpenAI Code (Codex) subscription OAuth. OpenAI explicitly allows subscription OAuth usage in external tools/workflows like OpenClaw. Onboarding can run the OAuth flow for you.
See OAuth and Model providers.
How do I set up Gemini CLI OAuth
Gemini CLI uses a plugin auth flow, not a client id or secret in openclaw.json.
Steps:
- Enable the plugin: openclaw plugins enable google
- Log in: openclaw models auth login --provider google-gemini-cli --set-default
This stores OAuth tokens in auth profiles on the gateway host. Details: Model providers.
Is a local model OK for casual chats
Usually no. OpenClaw needs a large context window and strong safety behavior; models small enough for consumer cards truncate and leak context. If you must, run the largest MiniMax M2.5 build you can locally (LM Studio) and see /gateway/local-models. Smaller/quantized models increase prompt-injection risk - see Security.
How do I keep hosted model traffic in a specific region
Pick region-pinned endpoints. OpenRouter exposes US-hosted options for MiniMax, Kimi, and GLM; choose the US-hosted variant to keep data in-region. You can still list Anthropic/OpenAI alongside these by using models.mode: "merge" so fallbacks stay available while respecting the regioned provider you select.
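A sketch of what that could look like, assuming a models section shaped like the one key this FAQ names (models.mode: "merge"). The provider/model ids below are illustrative placeholders; check OpenRouter for the real US-hosted slugs and the Models reference for the actual schema.

```json5
// Sketch only - ids are placeholders, not real OpenRouter slugs.
{
  models: {
    mode: "merge",                            // merge these with the built-in catalog
    providers: {
      openrouter: {
        models: ["minimax/minimax-m2.5:us"]   // pick the US-hosted variant explicitly
      }
    }
  }
}
```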
Do I have to buy a Mac Mini to install this
No. OpenClaw runs on macOS or Linux (Windows via WSL2). A Mac mini is optional - some people buy one as an always-on host, but a small VPS, home server, or Raspberry Pi-class box works too.
You only need a Mac for macOS-only tools. For iMessage, use BlueBubbles (recommended) - the BlueBubbles server runs on any Mac, and the Gateway can run on Linux or elsewhere. If you want other macOS-only tools, run the Gateway on a Mac or pair a macOS node.
Docs: BlueBubbles, Nodes, Mac remote mode.
Do I need a Mac mini for iMessage support
You need some macOS device signed into Messages. It does not have to be a Mac mini - any Mac works. Use BlueBubbles (recommended) for iMessage - the BlueBubbles server runs on macOS, while the Gateway can run on Linux or elsewhere.
Common setups:
- Run the Gateway on Linux/VPS, and run the BlueBubbles server on any Mac signed into Messages.
- Run everything on the Mac if you want the simplest single‑machine setup.
Docs: BlueBubbles, Nodes, Mac remote mode.
If I buy a Mac mini to run OpenClaw, can I connect it to my MacBook Pro?
Yes. The Mac mini can run the Gateway, and your MacBook Pro can connect as a
node (companion device). Nodes don't run the Gateway - they provide extra
capabilities like screen/camera/canvas and system.run on that device.
Common pattern:
- Gateway on the Mac mini (always-on).
- MacBook Pro runs the macOS app or a node host and pairs to the Gateway.
- Use openclaw nodes status / openclaw nodes list to see it.
Can I use Bun
Bun is not recommended. We see runtime bugs, especially with WhatsApp and Telegram. Use Node for stable gateways.
If you still want to experiment with Bun, do it on a non-production gateway without WhatsApp/Telegram.
Telegram: what goes in allowFrom?
channels.telegram.allowFrom is the human sender's Telegram user ID (numeric). It is not the bot username.
Onboarding accepts @username input and resolves it to a numeric ID, but OpenClaw authorization uses numeric IDs only.
Safer (no third-party bot):
- DM your bot, then run openclaw logs --follow and read from.id.
Official Bot API:
- DM your bot, then call https://api.telegram.org/bot<bot_token>/getUpdates and read message.from.id.
Third-party (less private):
- DM @userinfobot or @getidsbot.
See /channels/telegram.
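Once you have the numeric ID, the allow-list entry looks roughly like this. The key path is the one this FAQ names (channels.telegram.allowFrom); the surrounding shape is an assumption, so confirm against the Telegram channel docs.

```json5
// Sketch only - numeric Telegram user IDs, never @usernames.
{
  channels: {
    telegram: {
      allowFrom: [123456789]  // the from.id value you read from the logs
    }
  }
}
```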
Can multiple people use one WhatsApp number with different OpenClaw instances
Yes, via multi-agent routing. Bind each sender's WhatsApp DM (peer kind: "direct", sender E.164 like +15551234567) to a different agentId, so each person gets their own workspace and session store. Replies still come from the same WhatsApp account, and DM access control (channels.whatsapp.dmPolicy / channels.whatsapp.allowFrom) is global per WhatsApp account. See Multi-Agent Routing and WhatsApp.
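A sketch of per-sender routing, reusing the terms from the answer above (peer kind "direct", an E.164 sender, an agentId). The field names here are assumptions; the actual binding shape is documented in Multi-Agent Routing.

```json5
// Sketch only - field names are assumptions based on the terms above.
{
  routing: {
    bindings: [
      { peer: { kind: "direct", id: "+15551234567" }, agentId: "alice" },
      { peer: { kind: "direct", id: "+15557654321" }, agentId: "bob" }
    ]
  }
}
```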
Can I run a "fast chat" agent and an "Opus for coding" agent?
Yes. Use multi-agent routing: give each agent its own default model, then bind inbound routes (provider account or specific peers) to each agent. Example config lives in Multi-Agent Routing. See also Models and Configuration.
Does Homebrew work on Linux
Yes. Homebrew supports Linux (Linuxbrew). Quick setup:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.profile
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
brew install <formula>
If you run OpenClaw via systemd, ensure the service PATH includes /home/linuxbrew/.linuxbrew/bin (or your brew prefix) so brew-installed tools resolve in non-login shells.
Recent builds also prepend common user bin dirs on Linux systemd services (for example ~/.local/bin, ~/.npm-global/bin, ~/.local/share/pnpm, ~/.bun/bin) and honor PNPM_HOME, NPM_CONFIG_PREFIX, BUN_INSTALL, VOLTA_HOME, ASDF_DATA_DIR, NVM_DIR, and FNM_DIR when set.
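If the service still cannot find brew-installed tools, a systemd user-unit drop-in is one way to extend PATH. The unit name openclaw.service below is an assumption; match whatever name your install registered, and reload with systemctl --user daemon-reload afterwards.

```ini
; Sketch: ~/.config/systemd/user/openclaw.service.d/brew-path.conf
; %h expands to the user's home directory in systemd unit files.
[Service]
Environment=PATH=/home/linuxbrew/.linuxbrew/bin:%h/.local/bin:/usr/local/bin:/usr/bin:/bin
```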
Difference between the hackable git install and npm install
- Hackable (git) install: full source checkout, editable, best for contributors. You run builds locally and can patch code/docs.
- npm install: global CLI install, no repo, best for "just run it." Updates come from npm dist-tags.
Docs: Getting started, Updating.
Can I switch between npm and git installs later
Yes. Install the other flavor, then run Doctor so the gateway service points at the new entrypoint.
Switching does not delete your data; it only changes which OpenClaw code install the service runs. Your state (~/.openclaw) and workspace (~/.openclaw/workspace) stay untouched.
From npm → git:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm build
openclaw doctor
openclaw gateway restart
From git → npm:
npm install -g openclaw@latest
openclaw doctor
openclaw gateway restart
Doctor detects a gateway service entrypoint mismatch and offers to rewrite the service config to match the current install (use --repair in automation).
Backup tips: see Backup strategy.
Should I run the Gateway on my laptop or a VPS
Short answer: if you want 24/7 reliability, use a VPS. If you want the lowest friction and you're okay with sleep/restarts, run it locally.
Laptop (local Gateway)
- Pros: no server cost, direct access to local files, live browser window.
- Cons: sleep/network drops = disconnects, OS updates/reboots interrupt, must stay awake.
VPS / cloud
- Pros: always-on, stable network, no laptop sleep issues, easier to keep running.
- Cons: often run headless (use screenshots), remote file access only, you must SSH for updates.
OpenClaw-specific note: WhatsApp/Telegram/Slack/Mattermost (plugin)/Discord all work fine from a VPS. The only real trade-off is headless browser vs a visible window. See Browser.
Recommended default: VPS if you have had gateway disconnects before. Local is great when you're actively using the machine and want local file access or UI automation with a visible browser.
How important is it to run OpenClaw on a dedicated machine
Not required, but recommended for reliability and isolation.
- Dedicated host (VPS/Mac mini/Pi): always-on, fewer sleep/reboot interruptions, cleaner permissions, easier to keep running.
- Shared laptop/desktop: totally fine for testing and active use, but expect pauses when the machine sleeps or updates.
If you want the best of both worlds, keep the Gateway on a dedicated host and pair your laptop as a node for local screen/camera/exec tools. See Nodes. For security guidance, read Security.
What are the minimum VPS requirements and recommended OS
OpenClaw is lightweight. For a basic Gateway + one chat channel:
- Absolute minimum: 1 vCPU, 1GB RAM, ~500MB disk.
- Recommended: 1-2 vCPU, 2GB RAM or more for headroom (logs, media, multiple channels). Node tools and browser automation can be resource-hungry.
OS: use Ubuntu LTS (or any modern Debian/Ubuntu). The Linux install path is best tested there.
Docs: Linux, VPS hosting.
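A quick way to check a box against these numbers before installing (standard coreutils/procps commands, nothing OpenClaw-specific):

```shell
nproc                                    # vCPU count (want >= 1)
free -h | awk '/^Mem:/ {print $2}'       # total RAM (want >= 1G)
df -h "$HOME" | awk 'NR==2 {print $4}'   # free disk on the home filesystem (want >= 500M)
```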
Can I run OpenClaw in a VM and what are the requirements
Yes. Treat a VM the same as a VPS: it needs to be always on, reachable, and have enough RAM for the Gateway and any channels you enable.
Baseline guidance:
- Absolute minimum: 1 vCPU, 1GB RAM.
- Recommended: 2GB RAM or more if you run multiple channels, browser automation, or media tools.
- OS: Ubuntu LTS or another modern Debian/Ubuntu.
If you are on Windows, WSL2 is the easiest VM-style setup and has the best tooling compatibility. See Windows, VPS hosting. If you are running macOS in a VM, see macOS VM.
What is OpenClaw?
What is OpenClaw in one paragraph
OpenClaw is a personal AI assistant you run on your own devices. It replies on the messaging surfaces you already use (WhatsApp, Telegram, Slack, Mattermost (plugin), Discord, Google Chat, Signal, iMessage, WebChat) and can also do voice + a live Canvas on supported platforms. The Gateway is the always-on control plane; the assistant is the product.
Value proposition
OpenClaw is not "just a Claude wrapper." It's a local-first control plane that lets you run a capable assistant on your own hardware, reachable from the chat apps you already use, with stateful sessions, memory, and tools - without handing control of your workflows to a hosted SaaS.
Highlights:
- Your devices, your data: run the Gateway wherever you want (Mac, Linux, VPS) and keep the workspace + session history local.
- Real channels, not a web sandbox: WhatsApp/Telegram/Slack/Discord/Signal/iMessage/etc, plus mobile voice and Canvas on supported platforms.
- Model-agnostic: use Anthropic, OpenAI, MiniMax, OpenRouter, etc., with per-agent routing and failover.
- Local-only option: run local models so all data can stay on your device if you want.
- Multi-agent routing: separate agents per channel, account, or task, each with its own workspace and defaults.
- Open source and hackable: inspect, extend, and self-host without vendor lock-in.
Docs: Gateway, Channels, Multi-agent, Memory.
I just set it up what should I do first
Good first projects:
- Build a website (WordPress, Shopify, or a simple static site).
- Prototype a mobile app (outline, screens, API plan).
- Organize files and folders (cleanup, naming, tagging).
- Connect Gmail and automate summaries or follow-ups.
It can handle large tasks, but it works best when you split them into phases and use sub-agents for parallel work.