
I Packaged My Personal Workspace Into a Product in One Afternoon

Rockman Zheng


I have been running an OpenClaw workspace on my Ubuntu server since mid-February. My AI assistant — Jerry, a persona-driven research agent — has accumulated 27 custom skills, a hierarchical memory system, and deep-research pipelines tuned through daily use. Last week I decided to see if I could turn it into something another person could buy and use on ClawAgora.

I gave myself one afternoon. Here is how it went.

Starting point: the mess

My workspace was never designed for anyone else. It had grown the way personal tools always do — pragmatically, without structure. About 500 files total:

/home/ubuntu/.openclaw/workspace/
├── IDENTITY.md, SOUL.md, AGENTS.md, MEMORY.md  (core persona & config)
├── HEARTBEAT.md, TOOLS.md                       (cron tasks & platform notes)
├── memory/              # Daily notes, project index, strategy, profile
│   ├── YYYY-MM-DD.md    # Raw session logs
│   ├── archive/         # Notes older than 7 days
│   ├── projects.md      # 8 active projects
│   ├── strategy.md      # Strategic insights
│   └── profile.md       # My full profile
├── skills/              # 27 custom skills
├── projects/            # 11 active project directories
│   ├── Rosemary/        # AI career toolkit
│   ├── babel/           # Cross-agent shared memory
│   ├── opendailynews/   # AI research digest
│   └── ... (8 more)
├── research/            # 27 deep-research run outputs
├── central-brain/       # Brain config/logs
├── content/             # Douyin content, resumes
├── data/                # Analytics (douyin-analytics.json)
├── drafts/, docs/, archive/, scripts/, temp/

Some of this is genuinely useful to other people. Some of it is only useful to me. Sorting that out manually across 500 files would be tedious. Fortunately, I did not have to.

The curation prompt

ClawAgora's seller center has a copy-and-paste curation prompt designed for exactly this situation. You paste it into your agent, and the agent runs a 4-phase export workflow: Scan → Sanitize → Package → Report. Your role is to make the product decisions. The agent handles the execution.

I pasted the prompt into Jerry and let it rip.

Phase 1: Scan

Jerry read every file in the workspace and came back with a structured inventory:

Found ~500 files across the workspace. Here is the breakdown:

Core config (8 files): IDENTITY.md, SOUL.md, AGENTS.md, MEMORY.md, HEARTBEAT.md, TOOLS.md, plus workspace state files.

Skills (27 directories): deep-research, lab-digest, brainstorming, systematic-debugging, verification-before-completion, writing-plans, douyin-compliance, feishu-image-upload, blog-launch, render-image, ... (17 more)

Projects (11 directories): Rosemary, babel, heyrosemary, opendailynews, openphonecase, ...

Memory system: Daily notes, project index, strategy, profile, lessons, rules, job-search, skill-proposals.

Other: research/ (27 runs), data/ (analytics), content/ (drafts, resumes), central-brain/ (config/logs).

Which functionalities should be included in the export? List any you want to exclude.
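Conceptually, this scan phase is just a recursive walk that buckets files by their top-level directory. A minimal sketch of that idea, not the actual prompt's code (the `inventory` helper and the toy paths below are mine):

```python
import tempfile
from collections import Counter
from pathlib import Path

def inventory(root: Path) -> Counter:
    """Count files by their top-level directory under the workspace root."""
    counts = Counter()
    for f in root.rglob("*"):
        if f.is_file():
            rel = f.relative_to(root)
            bucket = rel.parts[0] if len(rel.parts) > 1 else "(top-level)"
            counts[bucket] += 1
    return counts

# Toy workspace to demonstrate the bucketing
root = Path(tempfile.mkdtemp())
for p in ["SOUL.md", "skills/deep-research/SKILL.md", "memory/2026-02-14.md"]:
    (root / p).parent.mkdir(parents=True, exist_ok=True)
    (root / p).write_text("stub")
counts = inventory(root)
print(dict(counts))
```

The useful part is not the walk itself but the summary it produces: seeing "skills: 27, projects: 11, memory: 40" forces the include/exclude conversation.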

This is where my job started — not scanning files, but making product decisions.

I told Jerry: keep the 18 general-purpose skills, drop the 9 that are tied to my specific integrations (among them douyin-compliance, feishu-image-upload, blog-launch, digest-onboard, and render-image, plus the compiled dist/ directory). Keep the core persona framework (SOUL.md, AGENTS.md, MEMORY.md, HEARTBEAT.md, TOOLS.md, IDENTITY.md) but strip all personal content from them. Drop all 11 project directories, all daily notes, all research outputs, and all personal analytics.

The insight that clicked during this conversation: positioning is a feature. Jerry's inventory made it obvious that a grab-bag of 27 unrelated skills is not a product. I needed to curate. A research assistant with a clear purpose and a focused set of capabilities — that is a product. I told Jerry we were calling it "Jerry - Personal Research Assistant."

Phase 2: Sanitize

Jerry took my inclusion list and started sanitizing. The curation prompt tells the agent to strip credentials, replace personal references with descriptive placeholders, and preserve all functional content.

Here is what Jerry found — or rather, what it did not find:

Stripped:

  • 0 API keys/secrets (none found — .secrets/ already gitignored, .env not checked in)
  • 12 personal name references replaced ("Dr. Zheng" → "your user")
  • 8 project-specific references removed from MEMORY.md
  • All daily notes excluded (raw session logs, too personal)

Preserved:

  • All prompt text and behavior rules in 18 skill SKILL.md files
  • Persona structure in SOUL.md (tone, guardrails, communication style)
  • Memory system architecture in MEMORY.md (3-tier hierarchy, write routing, admission criteria)
  • Operational framework in AGENTS.md (planning, reflection, checkpoints)

Zero API keys. That surprised me. I expected the hard part to be credentials. The real sanitization challenge was relational — Jerry's persona file opened with "You're Jerry. An alien cat who was rescued from a star war by Dr. Zheng." My MEMORY.md referenced my job search, my strategic thinking, my business insights. The daily notes contained entire conversations between us.

The agent handled the replacement mechanically, but the decision about what counts as personal required my judgment. Most sensitive data in a personal AI workspace is relational, not technical. That was the counterintuitive finding. (I wrote a deeper dive on the sanitization decisions — what I removed, what I kept, and where the line actually is.)
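The mechanical half of this phase is just pattern replacement with a running count, which is how a report can later say "12 personal name references replaced". A rough sketch of that mechanism (the replacement map is illustrative, not the prompt's actual logic):

```python
import re

def sanitize_text(text: str, replacements: dict[str, str]) -> tuple[str, int]:
    """Apply each pattern -> placeholder pair; return new text plus total count."""
    total = 0
    for pattern, placeholder in replacements.items():
        text, n = re.subn(pattern, placeholder, text)
        total += n
    return text, total

persona = "An alien cat who was rescued from a star war by Dr. Zheng."
clean, n = sanitize_text(persona, {r"Dr\. Zheng": "your user"})
print(clean)  # An alien cat who was rescued from a star war by your user.
print(n)      # 1
```

Building the replacement map is the judgment call; applying it is the easy part.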

Phase 3: Package

Jerry created the ZIP with all sanitized files, preserving the original directory structure. As part of this phase, it also generated two files I would have had to write manually:

CLAWAGORA_LISTING.md — marketplace metadata with a description, category (AI & ML), capabilities list, security disclosure (what tools/APIs the workspace connects to), and requirements.

SETUP.md — buyer-facing setup instructions: prerequisites, installation steps, configuration guide explaining each placeholder and where to get the value, and a verification step.

Here is what the export looked like:

jerry-personal-research-assistant/
├── IDENTITY.md               # Name + pointer to SOUL.md
├── SOUL.md                   # Persona framework (sanitized)
├── AGENTS.md                 # Operational framework (planning, reflection, checkpoints)
├── MEMORY.md                 # Memory system template (structure, not content)
├── HEARTBEAT.md              # Cron-like task framework
├── TOOLS.md                  # Platform integration notes
├── SETUP.md                  # Generated: getting started guide
├── CLAWAGORA_LISTING.md      # Generated: marketplace metadata
├── skills/                   # 18 skill directories
│   ├── deep-research/        # Multi-phase parallel research (~$1/run)
│   │   ├── SKILL.md
│   │   └── references/
│   ├── lab-digest/           # Academic paper digests (~$0.27/M tokens)
│   ├── brainstorming/        # Structured ideation (hard gate: no code before approval)
│   ├── systematic-debugging/ # Root-cause tracing (12 reference files)
│   ├── writing-plans/        # Implementation planning (bite-sized tasks)
│   ├── verification-before-completion/  # "Evidence before claims, always"
│   └── ... (12 more skills)

152KB ZIP. 88 files. Everything that ships has a reason to be there. (For what makes this kind of structure work as a product, see The Anatomy of a High-Quality Workspace Template.)
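The packaging step itself is simple: walk the tree, skip the excluded top-level directories, and write everything else into a ZIP with relative paths preserved. A sketch under the assumption that exclusions are a set of top-level directory names (the set below is illustrative):

```python
import tempfile
import zipfile
from pathlib import Path

EXCLUDE_TOP_LEVEL = {"memory", "projects", "research", "data", "content", "temp"}

def package(root: Path, out_zip: Path, exclude: set[str]) -> int:
    """Zip all files under root except excluded top-level dirs; return file count."""
    count = 0
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(root.rglob("*")):
            if not f.is_file():
                continue
            rel = f.relative_to(root)
            if rel.parts[0] in exclude:
                continue
            zf.write(f, arcname=rel.as_posix())  # preserve directory structure
            count += 1
    return count

# Demo: two files shipped, one excluded
root = Path(tempfile.mkdtemp())
for p in ["SOUL.md", "skills/deep-research/SKILL.md", "memory/2026-02-14.md"]:
    (root / p).parent.mkdir(parents=True, exist_ok=True)
    (root / p).write_text("stub")
out = Path(tempfile.mkdtemp()) / "export.zip"
shipped = package(root, out, EXCLUDE_TOP_LEVEL)
print(shipped)  # 2
```

Writing the output ZIP outside the workspace root matters; otherwise the archive can end up trying to include itself.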

Phase 4: Report

Jerry printed an export report summarizing everything:

=== ClawAgora Export Report ===

Export: ~/.openclaw/workspace/jerry-personal-research-assistant-export.zip
Size: 152KB

FILES INCLUDED (88): [list of all files]

GENERATED: CLAWAGORA_LISTING.md, SETUP.md

STRIPPED:

  • 0 API keys/secrets replaced
  • 12 personal name references replaced
  • 8 project-specific references removed

REVIEW RECOMMENDED:

  • skills/deep-research/references/example-output.md:14 — contains a reference to "Rosemary project," may want to generalize
  • MEMORY.md:23 — admission criteria example uses a personal context entry

Next steps:

  1. Review the items flagged above
  2. Upload the ZIP to ClawAgora

The review flags caught two things I would have missed in a manual pass — a project reference buried in a skill's example output, and a personal example in the memory template. I fixed both, had Jerry regenerate the ZIP, and moved on.
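The flagging pass behaves like a watch-list grep over the export: it emits path:line pointers for anything that still mentions a term the agent changed elsewhere or was unsure about. A simplified sketch (the watch-list terms are illustrative):

```python
import tempfile
from pathlib import Path

def review_flags(root: Path, terms: list[str]) -> list[str]:
    """Return 'path:line' pointers for watched terms found in exported .md files."""
    flags = []
    for f in sorted(root.rglob("*.md")):
        for i, line in enumerate(f.read_text().splitlines(), start=1):
            for term in terms:
                if term in line:
                    flags.append(f"{f.relative_to(root)}:{i} mentions '{term}'")
    return flags

root = Path(tempfile.mkdtemp())
(root / "MEMORY.md").write_text("Admission criteria\nExample: Rosemary launch notes\n")
flags = review_flags(root, ["Rosemary"])
print(flags)  # ["MEMORY.md:2 mentions 'Rosemary'"]
```

A deterministic sweep like this is exactly how a project name buried in an example file gets caught when a human skim would miss it.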

Upload to ClawAgora

Uploaded the ZIP, filled in the listing details from the generated CLAWAGORA_LISTING.md — title, description, category, price. The heavy lifting was already done.

What surprised me

The agent did most of the work. My role was strategic: decide what the product is, decide what to include, review what the agent flagged. The file-by-file scanning, sanitization, documentation generation, and packaging were all handled by the curation prompt workflow. If I had done this manually, it would have been a full day. With the agent, my active involvement was maybe an hour of decision-making and review.

Skills are the moat, not project code. I had 11 active project directories. None of them shipped. The skills — refined prompting patterns, multi-agent orchestration logic, validation workflows — were the real value. Projects are just implementations. Skills are capabilities. The scan phase made this obvious when Jerry listed everything side by side.

Sanitization is about people, not passwords. Zero API keys to strip. The hard part was separating a living AI assistant's "self" from a transferable template. Persona files, memory entries, strategic context — that is where the personal data actually lives.

The generated docs were good enough. I expected to have to rewrite the SETUP.md and CLAWAGORA_LISTING.md from scratch. The agent's versions needed minor edits — tightening the description, adjusting the verification step — but they were 90% there. That is a significant time saver.

The report caught real issues. The two flagged items were genuine problems that would have shipped if I had eyeballed the export. Having the agent systematically flag anything it changed or was unsure about is better than my manual review would have been.

If you have a workspace you are proud of

The thing I keep coming back to is this: if you have been using OpenClaw for real work, you have already done the hard part. The skills you have refined, the workflows you have tuned, the memory patterns you have developed — that is the product.

The curation prompt in ClawAgora's seller center handles the packaging. You paste it into your agent, tell it what to include, review the report, and upload. Your job is the product decisions — what to keep, what to cut, how to position it. The agent handles the tedious part.

The real challenge is not technical. It is deciding what the product is. "Everything I use" is not a product. A curated, positioned set of capabilities that solves a specific problem — that is a product. The curation prompt's Phase 1 inventory is actually a great forcing function for this: seeing your entire workspace laid out as a structured list makes the curation decisions much clearer.

Ready to try it? Become a seller on ClawAgora — the seller center has the export prompt ready to go. Paste it into your agent, make your curation decisions, review the report, and create your first listing. If you get stuck or want a second pair of eyes, reach out at help@clawagora.com — I am happy to take a look.

Related reading: If you're worried about your workspace working for other people, see Building a Workspace That Works for Someone Else. For the architectural patterns that make workspaces production-grade, see 5 Workspace Patterns That Separate Beginners From Power Users.