What I Removed Before Selling My Workspace (And What I Kept)

image: /blog/images/what-i-removed-before-selling-my-workspace.png
The two most common objections I hear from potential sellers on ClawAgora are: "I don't want to give away my secret sauce," and "Sounds like a lot of work to clean it all up." Both are worth taking seriously.
So I did the exercise with my own workspace — a personal research assistant I have been running on my Ubuntu server since mid-February, with 27 skills, a hierarchical memory system, and 11 active project directories. (The full packaging walkthrough covers the end-to-end process. This post goes deeper on the sanitization decisions.)
What I removed
Project directories (all of them)
My workspace had 11 active project directories: an AI career toolkit, a cross-agent shared memory system, an AI research digest, a phone case marketplace, and seven others. None of them shipped.
This was the easiest cut. Projects are implementations — they only make sense in the context of my specific goals. A buyer does not want my half-built marketplace. They want the skills and systems that let me build marketplaces effectively.
Daily notes and personal memory
My workspace has a memory/ directory with a daily notes system — raw session logs in YYYY-MM-DD.md files, plus higher-level files like strategy.md, profile.md, and job-search.md. The daily notes contained entire conversations between me and my AI assistant. The strategy file had my business insights and competitive analysis. The profile file was a detailed record of my background, goals, and preferences.
All of this was removed. It is deeply personal — not in the credentials sense, but in the relational sense. These files are what make Jerry my assistant rather than a general-purpose tool.
Content-specific skills
Of the 27 skills in my workspace, 9 did not make the cut. douyin-compliance was tied to my Chinese content workflow. feishu-image-upload was specific to my Feishu integration. blog-launch and digest-onboard were internal operational skills. render-image depended on server-specific tooling. These skills only work in the context of my particular setup.
Personal analytics and content
The data/ directory (Douyin analytics), content/ directory (resumes, content drafts), and research/ directory (27 deep-research run outputs) were all removed. These contain my actual work product — useful to me, meaningless or confidential in someone else's hands.
Persona references
Here is the part that surprised me. My SOUL.md opened with: "You're Jerry. An alien cat who was rescued from a star war by Dr. Zheng." My MEMORY.md referenced my specific projects, strategic thinking, and personal context. The operational framework in AGENTS.md was general-purpose, but the persona layer on top was intensely personal.
The fix: I replaced "Dr. Zheng" with "your user" in the persona files. I kept the persona structure (it shows buyers how to create their own) but stripped the personal content.
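This kind of sweep is easy to script. A minimal sketch, assuming the persona files live at the workspace root and the personal strings are known up front (the `REPLACEMENTS` table and `sanitize_file` helper are hypothetical, not part of my workspace):

```python
from pathlib import Path

# Hypothetical: personal strings to neutralize before packaging.
REPLACEMENTS = {
    "Dr. Zheng": "your user",
}

def sanitize_file(path: Path) -> int:
    """Replace personal references in one file; return how many were replaced."""
    text = path.read_text(encoding="utf-8")
    count = 0
    for personal, neutral in REPLACEMENTS.items():
        count += text.count(personal)
        text = text.replace(personal, neutral)
    path.write_text(text, encoding="utf-8")
    return count

for name in ("SOUL.md", "MEMORY.md"):
    f = Path(name)
    if f.exists():
        print(f"{name}: {sanitize_file(f)} replacement(s)")
```

The return count matters: a nonzero number on a file you thought was clean is exactly the kind of surprise this exercise is for.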
And here is what I did NOT have to remove
Zero API keys. My .secrets/ directory was already gitignored. My .env was not checked in. There were no credentials scattered across config files. Not a single API key to strip.
This was the counterintuitive finding of the whole process. I expected the hard part to be credentials. It was not even a factor. Most sensitive data in a personal AI workspace is relational, not technical — persona files, memory entries, project references. If your workspace is set up with basic hygiene (gitignored secrets, environment variables for keys), the credentials problem is already solved. The real work is making the workspace usable for someone who is not you — and that is a product design challenge, not a security one.
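Even with good hygiene, a quick scan before packaging is cheap insurance. A sketch of one, assuming a few common credential shapes (the patterns and the `scan_workspace` helper are illustrative; tune them to your own stack):

```python
import re
from pathlib import Path

# Hypothetical patterns covering common credential shapes.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key IDs
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
]

def scan_workspace(root: Path) -> list[tuple[Path, int]]:
    """Return (file, line number) pairs where a secret-like pattern matched."""
    hits = []
    for path in root.rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            lines = path.read_text(encoding="utf-8").splitlines()
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        for i, line in enumerate(lines, 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((path, i))
    return hits
```

An empty result is not proof of cleanliness, but a nonempty one is a hard stop before listing.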
What I kept
Here is where most of the value actually lives.
18 refined skills
Each skill is a self-contained directory with a SKILL.md file declaring its triggers, integration patterns, and usage, plus reference files and scripts. These took weeks to develop and refine through real use:
deep-research — A multi-phase parallel research pipeline: orchestrator → planner → parallel researchers (using DeepSeek V3) → two-pass synthesizer. Runs at about $1 per research session. The architecture handles researcher timeouts gracefully (a researcher that times out still writes partial findings, and the synthesizer works with what it gets).
lab-digest — Academic paper digests from arXiv and PubMed with Chinese-language summaries, running at about $0.27 per million tokens via DeepSeek V3.
brainstorming — Structured ideation with a hard gate: no implementation code is allowed before the design is explicitly approved. This single constraint prevents the most common AI coding failure mode.
systematic-debugging — Root-cause tracing with 12 reference files, teaching defense-in-depth and condition-based waiting patterns.
verification-before-completion — The "Iron Law: no completion claims without fresh verification evidence." Prevents the agent from claiming something works without actually checking.
Plus 13 more skills covering planning, code review, git workflows, parallel task dispatch, and skill authoring.
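Because every skill follows the same self-contained layout, a pre-packaging check can confirm each directory carries its SKILL.md and no leftover environment files. A sketch, assuming skills live under one root directory (the `check_skills` helper is hypothetical, not shipped with the workspace):

```python
from pathlib import Path

def check_skills(skills_root: Path) -> list[str]:
    """Flag skill directories missing SKILL.md or carrying leftover .env files."""
    problems = []
    for skill_dir in sorted(p for p in skills_root.iterdir() if p.is_dir()):
        if not (skill_dir / "SKILL.md").exists():
            problems.append(f"{skill_dir.name}: missing SKILL.md")
        for leftover in skill_dir.rglob(".env"):
            problems.append(f"{skill_dir.name}: leftover {leftover}")
    return problems
```

Running this over the 18 kept skills before zipping the template turns "I think it is clean" into a checkable claim.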
These do not contain secrets — they contain patterns. The structure of how to break down a research task across multiple agents, the specific instructions that produce consistent output, the guardrails that prevent common failure modes. This is the core value.
The operational framework
AGENTS.md defines how the assistant operates: planning requirements before implementation, reflection protocols after completing tasks, re-anchoring rules to prevent context drift, and checkpoint systems for long-running work. This represents hard-won lessons about making AI agents reliable over extended sessions.
The memory system structure
I stripped the content of my memory system but kept the architecture: the 3-tier hierarchy (injection → on-demand → cold archive), the write routing table (8 categories mapping to 8 target files), the admission criteria (a memory entry must pass 3 gates: needed in >80% of future sessions, persistent, and not findable by search). A buyer gets the empty structure and can populate it with their own context.
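The three admission gates are concrete enough to write down as a predicate. A sketch (the field names mirror the gates above; the `MemoryCandidate` type and `admit` function are my illustration, not files in the template):

```python
from dataclasses import dataclass

@dataclass
class MemoryCandidate:
    """A proposed memory entry, scored against the three admission gates."""
    text: str
    expected_session_hit_rate: float  # fraction of future sessions that need it
    is_persistent: bool               # still true months from now?
    findable_by_search: bool          # recoverable on demand from logs/files?

def admit(candidate: MemoryCandidate) -> bool:
    """An entry earns always-injected status only if it passes all three gates."""
    return (
        candidate.expected_session_hit_rate > 0.80
        and candidate.is_persistent
        and not candidate.findable_by_search
    )
```

The third gate does most of the work: anything a search can recover later stays out of the injection tier, which is what keeps the tier small.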
The persona framework
SOUL.md — sanitized — shows how to define an agent's personality, communication style, and behavioral guardrails. The buyer can customize the persona, but the structure shows them what to think about: tone, expertise areas, safety constraints, interaction patterns.
The "secret sauce" question
Here is what I realized during this process: my secret sauce was never in the workspace itself. It was in my ability to build the workspace.
The deep-research pipeline I kept? I can design new multi-agent architectures. The memory system? I develop variations on knowledge management patterns regularly. The debugging methodology? It comes from experience that does not transfer through a config file.
Selling a workspace template is like a chef selling a recipe. The recipe is valuable — it saves the buyer real time and gets them to a good result. But the chef's real edge is their ability to create new recipes. You do not lose that by sharing one recipe.
My workspace had 27 skills. I kept 18 and held back 9 — not because the 9 were more valuable, but because they were too specific to my setup to be useful to anyone else. The 18 I shipped are arguably more valuable precisely because they are general-purpose: they encode patterns that work across different contexts.
The line is clearer than you think
The mental model is simple:
Remove anything that identifies you or your specific context: persona content, personal memory entries, project directories, analytics data, platform-specific integrations.
Keep anything that demonstrates how you think: skill architectures, operational frameworks, memory system designs, workflow patterns, validation approaches.
Your buyers are not paying for your project code or personal context. They are paying for the hours of experimentation you have already done — the research pipeline architecture you refined through dozens of runs, the debugging methodology you distilled from real failures, the memory system design you iterated on until it actually worked.
If you have been sitting on a workspace you are proud of, try the exercise. Strip the personal context, keep the patterns, and see what you have got. The personal layer is thinner than you think — and the pattern layer underneath is more valuable than you expect.
Become a seller on ClawAgora — the seller center has a curation prompt that handles the sanitization and packaging for you. You make the decisions about what to keep and what to cut. Your agent handles the rest.
For a broader view of what makes a workspace template worth paying for, see The Anatomy of a High-Quality Workspace Template. And for the full story of ClawAgora's first listing, read Seller Zero: Why I Listed on a Marketplace With No Track Record.