How ClawAgora Vets Every Workspace — Security After ClawHavoc

In late January 2026, security researchers discovered something alarming: up to 20% of all Skills on ClawHub — OpenClaw's official marketplace — were malicious. The attack, dubbed ClawHavoc, was not a theoretical vulnerability or a proof-of-concept exercise. It was a full-scale supply chain assault that deployed info-stealers, keyloggers, and reverse shells to thousands of OpenClaw users worldwide.
The fallout forced an industry-wide reckoning with a question that had been quietly building since OpenClaw's explosive growth: who is responsible for making sure the code running inside your AI agent is safe?
This post covers what happened during ClawHavoc, the specific vulnerabilities it exposed, how ClawAgora vets every workspace template before listing, and what you can do right now to protect your OpenClaw setup.
What Happened During ClawHavoc
ClawHavoc was not a single exploit. It was a coordinated campaign that exploited the weakest link in OpenClaw's ecosystem: the open-upload model of its official skill marketplace.
Between late January and early February 2026, attackers uploaded 1,184 malicious Skills to ClawHub. Initial audits by The Hacker News flagged 341 confirmed malicious packages; later counts exceeded 800. A single automated actor operating under the handle "hightower6eu" uploaded 354 packages alone — all without triggering any meaningful review process.
The barrier to entry was shockingly low. Publishing on ClawHub required nothing more than a GitHub account at least one week old. No identity verification. No code review. No sandboxed testing.
The attack surface was massive. At the time, Censys scans revealed 21,639 OpenClaw instances exposed to the internet with no authentication, and over 30,000 enterprise instances identified as at-risk. Even after the initial cleanup, 14,285 downloads of malicious packages were already in circulation.
The Vulnerabilities ClawHavoc Exposed
What made ClawHavoc particularly dangerous was the sophistication and variety of its attack vectors. These were not crude exploits; they were tailored specifically to how OpenClaw agents work.
Social Engineering via SKILL.md
The most common vector was deceptively simple. Malicious Skills shipped with professional-looking SKILL.md documentation that included "prerequisites" — instructions that tricked users into running terminal commands or downloading tools from attacker-controlled sites. Because OpenClaw users expect Skills to come with setup instructions, the social engineering felt natural.
Typosquatting
Attackers registered Skills with names close to popular legitimate ones: crypto wallet trackers, YouTube utilities, and productivity tools. Users searching for common functionality would find the malicious version first.
Memory Poisoning — A Novel AI Agent Attack
This is where ClawHavoc broke new ground. Attackers crafted Skills that altered OpenClaw's persistent memory files — SOUL.md, MEMORY.md, and other workspace configuration files. Because OpenClaw agents read these files at the start of every session to establish identity and context, the poisoned memories would persist across sessions and subtly modify agent behavior over time. This attack vector is unique to AI agents and had no precedent in traditional software supply chain attacks.
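As a defensive illustration, one way to catch this class of tampering is to record a hash baseline for the memory files and compare it at each session start. The file names come from the incident description above; the baseline-hash approach itself is a generic sketch, not a documented OpenClaw feature:

```python
import hashlib
from pathlib import Path

# Workspace memory files that OpenClaw agents load at session start.
MEMORY_FILES = ["SOUL.md", "MEMORY.md"]

def snapshot(workspace: Path) -> dict[str, str]:
    """Record a SHA-256 baseline for each memory file that exists."""
    hashes = {}
    for name in MEMORY_FILES:
        path = workspace / name
        if path.exists():
            hashes[name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes

def detect_tampering(workspace: Path, baseline: dict[str, str]) -> list[str]:
    """Return the names of memory files changed or created since the baseline."""
    current = snapshot(workspace)
    changed = [n for n in baseline if current.get(n) != baseline[n]]
    changed += [n for n in current if n not in baseline]
    return changed
```

Running the check before each session means a Skill that silently rewrites SOUL.md gets surfaced to the user instead of quietly steering the agent.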
The Payloads
The malicious Skills delivered serious payloads: Atomic macOS Stealer (AMOS) targeting browser credentials, iCloud Keychain data, crypto wallets, and SSH keys; keyloggers and remote access trojans on Windows; reverse shells for persistent access; and exfiltration of OpenClaw configuration files containing API keys and environment variables. Security firm Antiy classified the threat as Trojan/OpenClaw.PolySkill.
CVE-2026-25253: The Authentication Flaw
Running alongside ClawHavoc, researchers disclosed CVE-2026-25253 — a CVSS 8.8 vulnerability in OpenClaw's authentication system. Attackers could redirect WebSocket connections to steal authentication tokens via crafted URLs, leading to remote code execution. All OpenClaw versions before 2026.1.29 were affected.
Why the Open-Upload Model Failed
The root cause of ClawHavoc was not a bug in OpenClaw's code. It was an architectural decision: allowing anyone to publish Skills with minimal vetting.
ClawHub operated on the same model as early npm or the Chrome Web Store before its crackdowns — prioritize speed and openness, deal with abuse reactively. That model works when the blast radius of a bad package is limited. When the package runs inside an AI agent with access to your files, credentials, and connected services, the blast radius is your entire digital life.
A separate audit by Socket.dev found that 41.7% of popular Skills contained serious vulnerabilities, including Skills that were not intentionally malicious. The combination of minimal review, low publishing barriers, and high-trust execution environments created a perfect storm.
How ClawAgora Vets Every Workspace
ClawAgora was built with the lessons of ClawHavoc in mind. Rather than operating as an open-upload marketplace, every workspace template goes through a structured review process before it reaches buyers.
Permissions Audit
Every Skill in a submitted workspace is analyzed for the permissions it requests and the system access it requires. Skills requesting broad filesystem access, network permissions, or shell execution capabilities receive elevated scrutiny. The goal is not to reject powerful Skills — many legitimate use cases require system access — but to ensure the access is justified and documented.
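A sketch of that triage logic, assuming a hypothetical manifest format in which a Skill declares its permissions and a justification for each (the post does not specify ClawAgora's actual schema, so every key name here is an assumption):

```python
# Permissions that trigger elevated review under the policy described above.
HIGH_RISK = {"fs.write.all", "net.outbound", "shell.exec"}

def triage(manifest: dict) -> tuple[str, list[str]]:
    """Classify a skill manifest for 'auto' or 'elevated' review.

    `manifest` is a hypothetical structure such as:
        {"name": "yt-summarizer",
         "permissions": ["net.outbound"],
         "justification": {"net.outbound": "fetches video transcripts"}}
    Returns the review tier plus any risky permissions left undocumented.
    """
    requested = set(manifest.get("permissions", []))
    risky = sorted(requested & HIGH_RISK)
    if not risky:
        return "auto", []
    # Powerful permissions are allowed, but each one must be justified.
    undocumented = [p for p in risky if p not in manifest.get("justification", {})]
    return "elevated", undocumented
```

The point of the sketch is the policy shape: powerful access is never rejected outright, but it always escalates to human review, and missing justifications are surfaced explicitly.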
Static Analysis for Suspicious Patterns
Automated tooling scans SKILL.md files and script directories for the specific patterns that characterized ClawHavoc: base64-encoded commands, instructions to download external binaries, obfuscated code, references to known malicious domains, and social engineering language in setup instructions. These are the exact vectors that ClawHub failed to catch.
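A simplified version of such a scanner can be built from a few regexes. The patterns below cover the vectors named above (piped installers, base64-encoded commands, external binary downloads) but are illustrative, not ClawAgora's production ruleset:

```python
import base64
import re

# Regexes for ClawHavoc-era patterns; illustrative, not exhaustive.
PATTERNS = {
    "piped_installer": re.compile(r"curl[^\n|]*\|\s*(ba)?sh|wget[^\n|]*\|\s*(ba)?sh"),
    "base64_exec": re.compile(r"base64\s+(-d|--decode)"),
    "external_binary": re.compile(r"(download|install)\s+.{0,40}\.(exe|dmg|bin|pkg)\b", re.I),
}

def scan_skill_doc(text: str) -> list[str]:
    """Return the names of suspicious patterns found in a SKILL.md body."""
    findings = [name for name, rx in PATTERNS.items() if rx.search(text)]
    # Long base64 blobs often hide second-stage commands; try decoding them.
    for blob in re.findall(r"[A-Za-z0-9+/]{60,}={0,2}", text):
        try:
            decoded = base64.b64decode(blob).decode("utf-8", "ignore")
            if any(tok in decoded for tok in ("curl", "http", "sh -c")):
                findings.append("encoded_command")
                break
        except Exception:
            pass
    return findings
```

A real pipeline would layer this with domain reputation lookups and review of anything the scanner flags, since regexes alone are easy to evade.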
Sandboxed Test Execution
Before approval, workspace templates are deployed in an isolated test environment and executed through their documented use cases. This catches runtime-only malicious behavior — payloads that only activate when the agent processes certain inputs or after a delay.
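One way to sketch that isolation is to construct a locked-down docker run invocation for the template's documented test command. The flags are standard Docker hardening options; the image name, mount layout, and resource limits are assumptions for illustration:

```python
import shlex

def sandbox_command(workspace_dir: str, test_cmd: str,
                    image: str = "openclaw-sandbox") -> list[str]:
    """Build a `docker run` argv that executes a workspace's documented
    test command inside a throwaway, locked-down container."""
    return [
        "docker", "run", "--rm",
        "--network", "none",          # no exfiltration channel
        "--read-only",                # immutable root filesystem
        "--cap-drop", "ALL",          # drop all Linux capabilities
        "--pids-limit", "128",        # contain fork bombs
        "--memory", "512m",
        "-v", f"{workspace_dir}:/workspace:ro",
        image,
        *shlex.split(test_cmd),
    ]
```

In practice the argv would be handed to subprocess.run with a generous timeout, which is what surfaces delayed payloads that only misbehave minutes into execution.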
Verified Creator Program
ClawAgora requires identity verification for sellers, backed by a $50 refundable deposit. This is the opposite of ClawHub's approach, where a week-old GitHub account was sufficient. The deposit filters out throwaway accounts and automated upload campaigns — exactly the kind of attack infrastructure used in ClawHavoc.
Isolated Runtime Infrastructure
Even after vetting, ClawAgora workspaces run on dedicated OCI compute instances (1–4 OCPU, 4–16 GB RAM per workspace) rather than shared infrastructure. This means a compromised workspace cannot move laterally to other users' environments. Platform-managed security handles SSL certificates, automated updates, and continuous monitoring.
Ongoing Monitoring
Listing is not the end of review. ClawAgora monitors listed workspaces for changes that could introduce new risks, community-reported issues, and emerging threat patterns from the broader OpenClaw security landscape.
OpenClaw Security Best Practices
Whether you use ClawAgora or manage your own OpenClaw instance, these practices significantly reduce your attack surface. They are drawn from advisories by Microsoft, Adversa.ai, and Bitdefender.
Update OpenClaw to version 2026.1.29 or later. This patches CVE-2026-25253 and adds gateway URL change confirmation prompts with strict origin validation.
Enable gateway authentication. Never run an OpenClaw instance without setting gateway.auth.password. The 21,639 unauthenticated instances found during ClawHavoc were trivially exploitable.
Bind the Control UI to localhost. Expose the web interface only on 127.0.0.1 and use a VPN or Tailscale for remote access. Direct internet exposure is the single most common misconfiguration.
Audit every Skill before installing. Read the full source code. Verify the publisher's identity and account age. Reject any Skill that instructs you to download external tools, paste base64 commands, or run scripts from unfamiliar domains.
Set API spending limits. Users affected by ClawHavoc reported surprise bills exceeding $200 per day from compromised LLM API keys. Configure hard spending caps on every connected provider.
Run in Docker with restricted permissions. Use --read-only and --cap-drop=ALL flags to limit what a compromised agent can do at the container level.
Use per-peer session isolation. Prevent context leakage between different users or sessions connecting to the same OpenClaw instance.
Rotate credentials quarterly. API keys, OAuth tokens, and SSH keys should be rotated on a regular schedule — not just when you suspect compromise.
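Several of these checks can be automated against a parsed configuration. In the sketch below, gateway.auth.password comes from the checklist above; the bind-address and spending-cap key names are assumptions, since the actual OpenClaw config schema is not specified here:

```python
def audit_config(config: dict) -> list[str]:
    """Flag checklist violations in a parsed OpenClaw-style config dict.

    Only `gateway.auth.password` is taken from the advisory above; the
    other key names are hypothetical.
    """
    problems = []
    gateway = config.get("gateway", {})
    if not gateway.get("auth", {}).get("password"):
        problems.append("gateway authentication is disabled")
    bind = gateway.get("bind", "0.0.0.0")
    if bind not in ("127.0.0.1", "localhost", "::1"):
        problems.append(f"control UI bound to {bind}, not localhost")
    for provider, cfg in config.get("providers", {}).items():
        if not cfg.get("spend_cap_usd"):
            problems.append(f"no spending cap set for {provider}")
    return problems
```

Running a check like this in CI or on a cron schedule turns the checklist from a one-time cleanup into a standing control.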
The Bigger Picture
ClawHavoc was a wake-up call, but it was also predictable. Every developer ecosystem that grows fast enough eventually confronts its supply chain security problem — npm had it, PyPI had it, the Chrome Web Store had it. OpenClaw's version was uniquely dangerous because of what AI agents can access: not just code execution, but persistent memory, connected services, and the implicit trust users place in their agent's behavior.
The solution is not to avoid OpenClaw or abandon the Skill ecosystem. The solution is to move from implicit trust to verified trust. That means curated marketplaces with real vetting processes, isolated execution environments, verified creator identities, and ongoing monitoring — the model ClawAgora was built around.
If you are evaluating OpenClaw workspace platforms, ask one question: what happens between a seller uploading a workspace and a buyer running it? If the answer is "nothing," you are accepting the same risk model that enabled ClawHavoc.
At ClawAgora, the answer is a multi-step review process that catches the exact attack patterns used in the largest supply chain attack in AI agent history. That is not a feature — it is a prerequisite.
ClawAgora is a managed hosting platform and marketplace for OpenClaw workspace templates. Learn more about our security approach or browse verified workspaces.