Floom Starter Pack

Reference and setup guide. Everything you need to install, update, and understand what the pack adds to your agent, plus the curation decisions behind it. Stable id anchors support deep links for people and agents (for example docs#quickstart).

Quickstart

The simplest path is to paste one prompt into your agent (Claude Code, Cursor, Codex, OpenCode, Kimi). Your agent reads the manifest and installs the right files for its runtime. No account. Nothing to configure.

Agent prompt
Set up Floom Starter Pack: https://floom.dev/starter

Or run the CLI locally:

Terminal
npx @floomhq/starter install

The CLI detects which agent config paths exist on your machine and writes only to the appropriate locations.

What gets installed

The pack writes three things per supported agent:

File Purpose
AGENTS.md / CLAUDE.md Activation rules. When to invoke each skill, using the AGENTS.md pattern that Vercel found can reach a 100% invocation rate (versus skills being skipped by default).
.floom/skills.json Skill manifest. The full list of installed skills, their sources, install counts, and profile tags. Used by the find-skills meta-skill for discovery without loading every skill file into context.
.floom/starter.lock Lock file. Pack version and install date. Used by npx @floomhq/starter update to pull only changed skills.

No executables. Nothing is added to your PATH, no daemons start, and no network connections stay open after install. The pack is static config and skill text.

Total size on disk: approximately 120 KB for the full 65-skill pack. Individual skill files average about 1.2 KB.
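As a concrete sketch, one entry in .floom/skills.json might have the following shape. Only the fields described above (name, source, install count, profile tags) come from this document; the exact field names and layout are assumptions, not the real schema.

```typescript
// Hypothetical shape for one entry in .floom/skills.json.
// Field names beyond those described in the docs are assumptions.
interface SkillEntry {
  name: string;       // skill slug, e.g. "find-skills"
  source: string;     // upstream publisher
  installs: number;   // install count reported by skills.sh
  profiles: string[]; // profile tags used for grouping
}

const example: SkillEntry = {
  name: "find-skills",
  source: "vercel-labs",
  installs: 75000,
  profiles: ["core"],
};
```

A compact manifest like this keeps discovery cheap: an agent can scan entries without loading any SKILL.md into context.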

Supported agents

The pack supports five agents out of the box. Each reads skill context from a different path. When you run the installer, it detects which directories exist and writes only to those. If none are found, it prompts you to choose.

Agent Activation file Skills path
Claude Code ~/.claude/CLAUDE.md ~/.claude/skills/
Codex CLI ~/.codex/AGENTS.md ~/.codex/skills/
Cursor .cursorrules (project) .cursor/skills/
OpenCode AGENTS.md (project) .opencode/skills/
Kimi AGENTS.md (project) .kimi/skills/

Not included: Gemini CLI is not supported at this time.
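The detection step can be sketched in a few lines of TypeScript. This is an illustration assuming the installer simply probes for the config directories listed above; the real CLI may check more than directory existence.

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Candidate config directories, one per supported agent (sketch only).
const agentDirs: Record<string, string> = {
  "claude-code": join(homedir(), ".claude"),
  "codex-cli": join(homedir(), ".codex"),
  cursor: join(homedir(), ".cursor"),
  opencode: join(homedir(), ".config", "opencode"),
  kimi: join(homedir(), ".kimi"),
};

// Return the agents whose config directory exists on this machine.
function detectAgents(): string[] {
  return Object.entries(agentDirs)
    .filter(([, dir]) => existsSync(dir))
    .map(([name]) => name);
}
```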

Updating

The skill manifest is updated daily as new skills are published to skills.sh. To pull the latest version:

Terminal
npx @floomhq/starter update

This reads your existing .floom/starter.lock, fetches the latest manifest, and writes only files that changed. Existing activation rules in AGENTS.md or CLAUDE.md are preserved.
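The "write only what changed" diff can be sketched as follows. The lock file's internal format is not documented here, so the version and per-skill hash fields below are assumptions used purely for illustration.

```typescript
// Assumed shapes: a lock mapping skill slug -> content hash, and a
// fetched manifest with the same mapping. Neither is the documented format.
interface LockFile { version: string; skills: Record<string, string>; }
interface RemoteManifest { version: string; skills: Record<string, string>; }

// Return slugs whose upstream hash differs from the locked hash,
// including skills that are new since the last install.
function changedSkills(lock: LockFile, manifest: RemoteManifest): string[] {
  return Object.entries(manifest.skills)
    .filter(([slug, hash]) => lock.skills[slug] !== hash)
    .map(([slug]) => slug);
}
```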

Agent prompt
Update Floom Starter Pack to latest: https://floom.dev/starter/update

Or pin the install to a specific subset of skills/profiles when refreshing:

Terminal
npx @floomhq/starter install --profiles core,dev --harness claude,codex

Idempotent: the collision check skips skills you have customised, so re-running the command will not overwrite local edits. Add --dry-run to preview the plan without writing.

Uninstalling

Remove everything the pack added:

Terminal
npx @floomhq/starter uninstall --all

Deletes the .floom/ directory and removes the Floom activation block from AGENTS.md / CLAUDE.md. Will not remove unrelated files.

Remove specific skills, an entire profile, or one agent only:

Terminal
npx @floomhq/starter uninstall --skills pr-review,brand-voice
Terminal
npx @floomhq/starter uninstall --profiles dev
Terminal
npx @floomhq/starter uninstall --harness claude

Use --global with any install/uninstall command to write/remove the pack at the user level (~/.claude/, ~/.codex/) instead of the current project.

Privacy

The Floom Starter Pack is a local install. After the initial download, it operates on your machine.

  • No telemetry. No analytics. No usage data is collected or transmitted.
  • No account, sign-in, or API key is required for the pack itself.
  • The installer downloads skill files from GitHub. After install, there are no ongoing network calls by default.
  • Skill files are static text. They contain no code that executes automatically.
  • The find-skills meta-skill (Vercel Labs, part of the pack) may call skills.sh when your agent invokes it. That is on demand, not automatic.

License

The Floom Starter Pack installer and tooling are MIT licensed. You can use, modify, and redistribute them freely per the repository LICENSE.

Individual skills in the pack carry their own licenses. Breakdown across the 65 curated skills:

License Skills / sources Count
MIT superpowers, mattpocock, vercel-labs, coreyhaines31, scrapegraphai, wshobson, currents-dev, remotion-dev, and similar 40
Apache 2.0 benchflow-ai/skillsbench, pbakaus/impeccable, supabase/agent-skills, most anthropics/skills 19
Source-available docx, pdf, pptx, xlsx from anthropics/skills (use freely; not fully OSS) 3
Proprietary workplan, wireframe-to-react, video-polish (Floom team; rights granted for pack distribution) 3

Each skill file includes a license header. If you redistribute individual skills, preserve the header.

Architecture

The technical shape of the pack: how the installer resolves targets, what files get written, how agents discover installed skills, and what V0 guarantees.

Mental model

The package contains a manifest plus bundled skill folders. The installer resolves selected profiles, detects local agents, writes skills into their native roots, writes a local index, and adds instructions that teach agents to search locally.

V0 scope. Local compatibility infrastructure: curated skills without mandatory cloud accounts or MCP for the baseline discovery loop.

Install flow

When the user runs install, explicit targets win. If no target is passed, the CLI detects local agent config directories and installs only to those agents.

Terminal
npx @floomhq/starter install --profiles core,dev,writing --harness claude,codex --yes
  1. If --harness is provided, use the requested agents.
  2. Otherwise, detect local agent config directories: ~/.claude, ~/.codex (or CODEX_HOME), ~/.cursor, ~/.config/opencode, ~/.kimi.
  3. If any are found, install only to detected agents.
  4. If none are found, ask for explicit --harness.

Use --skills <list> to install a specific subset, --all to install every skill in every profile, and --global to write to user-level paths (~/.claude/ etc.) instead of the current project.
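The precedence rules above reduce to a small resolver. A minimal sketch, assuming the CLI models targets as plain strings:

```typescript
// Explicit --harness wins; otherwise detected agents; otherwise fail loudly.
function resolveTargets(
  harnessFlag: string[] | undefined,
  detected: string[],
): string[] {
  if (harnessFlag && harnessFlag.length > 0) return harnessFlag;
  if (detected.length > 0) return detected;
  throw new Error(
    "No agent config directories detected; pass --harness explicitly.",
  );
}
```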

Files written

Each target receives skill folders in its native root, a harness instruction file, and a shared local index at ~/.floom/packs/starter-index.json.

Target Skills folder Instruction file
Claude Code   ~/.claude/skills            ~/.claude/CLAUDE.md
Codex CLI     ~/.codex/skills             ~/.codex/AGENTS.md
Cursor        ~/.cursor/skills-cursor     ~/.cursor/rules/floom-packs.mdc
OpenCode      ~/.config/opencode/skills   ~/.config/opencode/AGENTS.md
Kimi          ~/.kimi/skills              ~/.kimi/agents/floom-system.md

Conflict protection

Every installed skill gets a provenance file. Managed pack skills can be replaced by later installs; untracked user-created folders are protected by default.

  1. Plan a skill write. If the destination does not exist, copy the skill folder.
  2. If it does exist, check for a .floom-pack.json from @floomhq/starter.
  3. If the provenance file exists, replace the managed copy.
  4. If it does not, refuse to overwrite unless --force is passed.
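The overwrite decision above can be sketched as a single predicate. `.floom-pack.json` is the provenance marker named in these steps; the function signature and --force plumbing are assumptions.

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Decide whether the installer may write a skill folder at `dest`.
function canWriteSkill(dest: string, force: boolean): boolean {
  if (!existsSync(dest)) return true;                          // fresh copy: always allowed
  if (existsSync(join(dest, ".floom-pack.json"))) return true; // managed by the pack: replace
  return force;                                                // untracked user folder: only with --force
}
```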

Skill discovery

No MCP is required in V0. The discovery loop is local: injected instructions point the agent at the starter index and the find-skills meta-skill.

The agent reads the injected instructions, searches the local starter-index.json via find-skills, picks the matching SKILL.md files, and uses them for task execution.
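A minimal sketch of that local search, assuming the index stores a slug and a description per skill (the actual starter-index.json schema may differ):

```typescript
// One entry per installed skill in the local index (assumed shape).
interface IndexEntry { slug: string; description: string; }

// Naive substring search over descriptions: the simplest form of the
// local discovery loop. No network, no MCP, just the on-disk index.
function findSkills(index: IndexEntry[], query: string): string[] {
  const q = query.toLowerCase();
  return index
    .filter((entry) => entry.description.toLowerCase().includes(q))
    .map((entry) => entry.slug);
}
```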

MCP can augment search for larger libraries later. It is not a launch dependency for Floom Packs V0.

Data model

The manifest links profiles to skills, and skills to upstream source records, keeping provenance legible and auditable.

Entity Fields
PackManifest id, name, version, defaultProfiles, targets
Profile id, name, description, skills[]
Skill slug, name, source, upstream
Source label, repo, commit, license, status
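Rendered as TypeScript, the entities above look like this. The entity and field names come from the table; all types and the example values are assumptions.

```typescript
interface Source {
  label: string;
  repo: string;
  commit: string;
  license: string;
  status: string;
}

interface Skill {
  slug: string;
  name: string;
  source: string;   // label of a Source record
  upstream: Source; // provenance back-link, keeping the chain auditable
}

interface Profile {
  id: string;
  name: string;
  description: string;
  skills: string[]; // skill slugs
}

interface PackManifest {
  id: string;
  name: string;
  version: string;
  defaultProfiles: string[]; // profile ids
  targets: string[];         // supported harnesses
}

// Illustrative values only; not the shipped manifest.
const manifest: PackManifest = {
  id: "starter",
  name: "Floom Starter Pack",
  version: "0.0.0",
  defaultProfiles: ["core"],
  targets: ["claude", "codex", "cursor", "opencode", "kimi"],
};
```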

Verified behaviors

Behaviors confirmed across all five launch targets. These are release acceptance criteria:

  • Manifest references existing skill folders.
  • Every bundled skill has frontmatter and description.
  • Dry-run writes nothing.
  • Temp-root install writes skills, index, provenance, and instructions.
  • Target autodetection works.
  • Missing detected targets produces a clear error.
  • --harness claude,codex,cursor,opencode,kimi writes all five launch targets.
  • Untracked existing skills are not overwritten.

Methodology

Why we built it this way: curation, activation, and what Floom adds versus upstream catalogs.

How we curated 65 skills from 91,000

skills.sh indexes on the order of 91,035 skills. This pack ships a curated subset of 65. Selection rules:

  1. Install count floor (32,900+): every skills.sh skill in the pack has been installed by at least 32,900 agents as a proxy for real-world validation. The median in the pack is about 75,000 installs.
  2. License compliance: MIT, Apache 2.0, or Floom-owned. gstack skills were excluded when no redistributable license was confirmed.
  3. Description quality: skills need a clear "when it fires" trigger. Skills without an obvious trigger waste context and are not invoked reliably.
  4. No API-key-required skills: the pack should work with zero setup; skills that need external credentials are excluded or replaced with free alternatives.

The four source tiers, in priority order:

Source Skills in pack Why this tier
skills.sh                59 skills (91%)               Battle-tested install counts, diverse publishers
Superpowers (obra, MIT)  8 (sub-source of skills.sh)   High install counts in workflow and planning
SkillsBench (Apache 2.0) 7 skills                      Academic validation on real benchmarks
Floom proprietary        3 skills                      Last resort: workplan, wireframe-to-react, video-polish, where no proven equivalent exists

From 22 Floom-proprietary to 3. Earlier packs included more Floom-authored skills. Recent packs narrow proprietary additions as open equivalents appear.

How activation works (the 100% pattern from Vercel)

Vercel published research showing that agents skip about 56% of installed skills by default. Even with an explicit "use these skills" prompt, invocation may only reach about 70%. The AGENTS.md activation pattern can reach 100% in their evals; see Vercel's agent evals post (the same pattern Anthropic documents for Claude Code).

The pattern embeds per-skill trigger conditions in the agent context file (CLAUDE.md or AGENTS.md) instead of only saying "you have N skills installed."

  1. Installer writes activation: each skill gets one or two trigger sentences, for example: when the user mentions test failures, invoke systematic-debugging.
  2. Compact manifest: .floom/skills.json holds name, source, profile tags. Agents do not load every full skill file until invoked.
  3. find-skills (Vercel Labs): when no installed trigger matches, the agent can search without loading every skill into context.
  4. Load on demand: the full SKILL.md enters context when a trigger fires (Simon Willison: "skills cost a few dozen tokens until invoked").
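Steps 1 and 4 together can be sketched as a trigger table plus on-demand load. The trigger phrase below is the example from step 1; everything else is a toy illustration, not the real mechanism (in the real pattern the triggers are sentences in CLAUDE.md / AGENTS.md, matched by the model itself, not by code).

```typescript
// Toy trigger table: phrase -> skill slug. In the real pattern these are
// sentences in the agent's context file, interpreted by the model.
const triggers: Record<string, string> = {
  "test failure": "systematic-debugging",
};

// Return the first skill whose trigger phrase appears in the message,
// or undefined so the agent can fall back to find-skills.
function matchSkill(userMessage: string): string | undefined {
  const msg = userMessage.toLowerCase();
  for (const [phrase, skill] of Object.entries(triggers)) {
    if (msg.includes(phrase)) return skill;
  }
  return undefined;
}
```

Only on a match would the agent pull the full SKILL.md into context; until then each skill costs just its trigger sentence.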

Why curation beats quantity

The SkillsBench paper (arxiv.org/abs/2602.12670) tested three installation strategies. Reported effects:

+18.6pp Lift with 2-3 curated skills per task.
-2.9pp Drop with kitchen-sink install (all available skills).
-1.3pp Drop with self-generated skills (agent writes its own).

Irrelevant skills widen the option space the model must search. Kitchen-sink installs can force evaluation of hundreds of options per task, increasing latency and errors. Profile tags narrow what lands in activation so the agent sees relevant triggers first; find-skills can reach the rest when needed.

What Floom adds to skills.sh skills

Floom does not author most skills in the pack. Floom curates from the open ecosystem (mostly skills.sh) and adds five things on top:

  1. Activation rules per skill in AGENTS.md / CLAUDE.md so they fire. Vercel reported this lifts activation from about 53% to 100% in their setup.
  2. Cross-agent format translation: same skill, native install paths for five agents.
  3. License vetting: exclude API-key and ambiguous-license skills.
  4. Daily refresh and version tracking from upstream sources.
  5. Curated portfolio: a small slice from a very large public index.

You get the original author's skill plus Floom's activation, packaging, and update layers. Each skill links to its source repository; Floom does not sit between you and the author's code.

Why Floom doesn't pay for inference

Skills run inside your agent, on your API key, using your token budget. Floom is distribution: manifests, installers, activation, and curation, not model inference. That is intentional: skills in your agent can use your filesystem, git repo, and project context; skills on a third-party runtime generally cannot.