
Claude Code Setup Log #4: Cross-Session Messaging, AI Design Vocabulary, and Skill Hygiene

More notes on what’s been working for me in my Claude Code setup.


1. Claude Peers MCP for cross-session messaging

I run a lot of Claude Code sessions simultaneously — different repos, different tasks. By default they’re completely isolated. Sometimes I want my different Claude Code pals to talk to each other and share context, and there’s no built-in way to do that.

Louis Arge’s Claude Peers MCP fixes this with a local SQLite broker. Install it as a user-scoped MCP server, and every Claude Code session gets three tools: list_peers to discover other active sessions, send_message to send a message that arrives instantly, and set_summary to describe what you’re working on (visible to other peers).

claude mcp add --transport stdio claude-peers \
  -- bun run ~/claude-peers-mcp/src/index.ts

My frontend agent can ask my backend agent “what does the JSON response look like for the new endpoint?” and the answer lands directly in the frontend’s context window. No tab-switching, no copy-pasting, no shared documents. The agents coordinate directly.
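The broker idea is simple enough to sketch. Here is a minimal, illustrative version of a SQLite message broker with the three tools described above — the schema and function bodies are my own guesses, not the actual claude-peers-mcp internals:

```python
import sqlite3
import time

# Minimal sketch of a local SQLite broker, in the spirit of claude-peers-mcp.
# The schema and implementations here are assumptions for illustration only.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE peers (id TEXT PRIMARY KEY, summary TEXT, last_seen REAL);
    CREATE TABLE messages (sender TEXT, recipient TEXT, body TEXT, ts REAL);
""")

def set_summary(peer_id: str, summary: str) -> None:
    # Each session registers what it's working on so others can discover it.
    db.execute("INSERT OR REPLACE INTO peers VALUES (?, ?, ?)",
               (peer_id, summary, time.time()))

def list_peers() -> list[tuple[str, str]]:
    # Discover other active sessions and their summaries.
    return db.execute("SELECT id, summary FROM peers ORDER BY id").fetchall()

def send_message(sender: str, recipient: str, body: str) -> None:
    # Drop a message into the recipient's queue.
    db.execute("INSERT INTO messages VALUES (?, ?, ?, ?)",
               (sender, recipient, body, time.time()))

def read_messages(recipient: str) -> list[tuple[str, str]]:
    # The recipient drains its queue on its next turn.
    return db.execute("SELECT sender, body FROM messages WHERE recipient = ?",
                      (recipient,)).fetchall()

set_summary("backend", "building the new orders endpoint")
set_summary("frontend", "wiring up the orders page")
send_message("frontend", "backend",
             "What does the JSON response look like for the new endpoint?")
```

The real server exposes these as MCP tools over stdio rather than Python functions, but the flow is the same: register, discover, message, drain.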

Pairs nicely with cmux — peers handles the messaging layer, cmux gives you the terminal panes with notification rings so you know when an agent finishes or needs input.

Example use: “List your peers and ask the one working on the API project what the auth middleware expects.”


2. Impeccable for frontend design that doesn’t look AI-generated

I’m not a designer by trade, but I do have design opinions. I don’t always have the words to describe when something looks off to me. I can tell a sidebar is crowded or a layout feels wrong, but I can’t always articulate why — which means I can’t prompt Claude to fix it effectively.

Paul Bakaus (former Google engineer, creator of Khroma) built Impeccable to solve exactly this. It’s a collection of 20 skills that give me and Claude a shared design vocabulary. 13K+ stars on GitHub.

The commands I use most:

/teach-impeccable: one-time setup that gathers your project’s design context and saves it
/critique: UX design review with scored feedback across hierarchy, clarity, and emotional resonance
/audit: technical quality checks for accessibility, performance, and responsiveness
/polish: final pass before shipping
/typeset: fix font choices, hierarchy, and sizing
/arrange: fix layout, spacing, and visual rhythm

I ran /teach-impeccable once on my AM Dashboard repo — it analyzed the existing design system, saved the context, and now every subsequent command is project-aware. When a signals sidebar got crowded, I ran /critique and got scored feedback: visual hierarchy, information density, spacing rhythm. That gave me the language to prompt Claude to fix exactly what was wrong. Three commits later, the sidebar was usable.

The skill also includes anti-patterns that explicitly tell the AI what NOT to do — no bounce/elastic easing, no pure black (always tint), no cards nested inside cards, no defaulting to Inter when the project has a type system. These alone fixed patterns I’d been shipping for months without questioning them.
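Some of these anti-patterns are mechanical enough to lint for yourself. As a toy illustration (my own, not part of Impeccable), here’s a checker for the pure-black rule:

```python
import re

# Toy lint for one anti-pattern: pure black should always be tinted.
# This checker is my own illustration, not part of the Impeccable skill.
PURE_BLACK = re.compile(r"#000(?:000)?\b|rgb\(\s*0\s*,\s*0\s*,\s*0\s*\)", re.I)

def flag_pure_black(css: str) -> list[int]:
    """Return 1-based line numbers that use untinted black."""
    return [n for n, line in enumerate(css.splitlines(), start=1)
            if PURE_BLACK.search(line)]

css = """\
body   { color: #000; }               /* flagged: pure black */
h1     { color: #0a0a0f; }            /* fine: tinted near-black */
.hero  { background: rgb(0, 0, 0); }  /* flagged */
"""
```

Running `flag_pure_black(css)` on the sample above flags lines 1 and 3 and leaves the tinted near-black alone.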

What it’s helped with: Before Impeccable, I’d build a feature, look at it, think “this looks… fine,” and ship it. Now I run /critique and can articulate what’s actually off. The difference between “it looks generic” and “the visual hierarchy is flat because every element has the same visual weight” is the difference between not knowing what to fix and knowing exactly what to fix.

Link: impeccable.style / GitHub


3. Skill library audit — 329 down to 70

Skill creep is inevitable and ugly. After a few months of building, importing, and auto-generating skills, I had 329 in my library. When I asked for help with a deployment, Claude matched a generic “typescript” skill I’d created months ago instead of my actual deployment skill. The wrong skills were winning the semantic matching contest.

I spent 30 minutes with Opus and a few sub-agents to evaluate and quarantine 259 of them. The prompt was roughly: “spin up a team of agents to carefully analyze each of my Skills and surface to me which I should remove.”

Claude dispatched sub-agents that categorized every skill, flagged the ones with no unique knowledge, and presented a ranked list of the biggest offenders.

Everything quarantined went to ~/.claude/skills-quarantine/ organized by batch, so nothing is permanently lost. Skill matching is responsive again — when I ask about deployments, the right skill loads on the first try.

My rule of thumb now: A skill earns its place only if it contains knowledge Claude doesn’t already have — specific error messages, non-obvious workarounds, project-specific context, or multi-step workflows that would take multiple iterations to discover. If it’s just “you are an expert at X,” delete it.
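That rule of thumb can even be roughed out mechanically. A sketch, assuming the usual one-SKILL.md-per-directory layout — the boilerplate pattern and size threshold are my own guesses, not what my sub-agents actually ran:

```python
import pathlib
import re
import tempfile

# Heuristics are assumptions: flag skills that are just persona boilerplate
# or too short to contain any hard-won, project-specific knowledge.
BOILERPLATE = re.compile(r"you are an? (expert|specialist|senior)", re.I)
MIN_BODY_CHARS = 200

def audit_skills(skills_dir: pathlib.Path) -> list[str]:
    """Return names of skills with no apparent unique knowledge."""
    flagged = []
    for skill_md in sorted(skills_dir.glob("*/SKILL.md")):
        body = skill_md.read_text()
        if BOILERPLATE.search(body) or len(body) < MIN_BODY_CHARS:
            flagged.append(skill_md.parent.name)
    return flagged

# Demo against a throwaway skill library.
root = pathlib.Path(tempfile.mkdtemp())
(root / "typescript").mkdir()
(root / "typescript" / "SKILL.md").write_text("You are an expert at TypeScript.")
(root / "deploy").mkdir()
(root / "deploy" / "SKILL.md").write_text(
    "Deploy: run `make release`, wait for CI, then bump the chart version.\n" * 5)
```

Here `audit_skills(root)` flags only the generic "typescript" skill; the deployment skill survives because it carries specific steps. A real pass (like the one Opus ran for me) judges content quality, not just length, but the filter criteria are the same.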

If you’re accumulating skills and noticing that the wrong ones keep loading, even the simple prompt above is a good starting point.

