Sharing what’s been working for me lately in my Claude Code setup in case it’s helpful for others. This is less of a polished guide and more of a changelog — things I’ve added, how I actually use them, and what’s stuck.
1. Superpowers (obra’s workflow discipline skill)
Jesse Vincent’s superpowers skill (70K+ stars on GitHub) solves the biggest problem with AI coding agents: they skip steps. Instead of jumping straight to code, superpowers enforces a full workflow — brainstorming → planning → implementation → testing → review → finish. It includes 13 sub-skills that chain together. The ones I use most: brainstorming (Socratic clarification before any feature work), writing-plans (bite-sized implementation steps), executing-plans (systematic task-by-task execution with verification), and slack-to-code (turns Slack screenshots of bug reports into validated plans).
Example use: Someone Slacks me “the license count shows 38 but should be 100.” I paste the screenshot and superpowers kicks in: it extracts the issue, explores the codebase, identifies the root cause, proposes a fix with trade-offs, and writes an implementation plan — all before touching any code. A different example: after code review on a PR, I take the review comments, write them into a structured plan, then tell Claude “Use superpowers:executing-plans to implement this plan task-by-task” and it works through each comment systematically with verification at each step.
What it’s helped with: My full feature workflow is now brainstorming → writing-plans → using-git-worktrees → executing-plans → finishing-a-development-branch. Every step is a superpowers sub-skill. It prevents the “vibe coding” failure mode where you build the wrong thing fast. The brainstorming sub-skill alone has saved me from going down the wrong path dozens of times by forcing me to answer clarifying questions before committing to an approach.
Link: github.com/obra/superpowers
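To make the plan step concrete: the plans in this workflow are markdown checklists of small tasks, each with its own verification step. Here's a hypothetical sketch (the exact format superpowers emits may differ, and the task contents are invented for illustration):

```markdown
# Fix: license count shows 38 instead of 100

## Task 1: Reproduce
- [ ] Load the licenses dashboard with a seeded test account
- Verify: the count renders 38

## Task 2: Fix the aggregation
- [ ] Update the query to count all license rows, not only active ones
- Verify: unit test for the aggregation passes

## Task 3: Regression test
- [ ] Add a test covering mixed active/inactive licenses
- Verify: full test suite passes
```

The point of the small-task structure is that executing-plans can verify each step before moving to the next, instead of writing everything and debugging at the end.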
2. Self-improving meta-loop (claudeception + 181 skills + log-claude-change)
This is less a single tool and more a feedback loop I’ve built into how I use Claude Code. After any session where I learn something reusable, I run /claudeception (credit to @blader for the original repo) — a skill that analyzes the session, extracts patterns, and creates or updates skills automatically. Combined with /log-claude-change (which routes changelog entries to the right Obsidian file) and /skill-creator, this means my Claude Code setup gets smarter with every session.
Example use: “I know I’m going to need to make this filter adjustment in the future for different sales plays. Can you think hard about our session, and use the /claudeception and /skill-creator and /writing-skills skills to create the optimal skill for future sessions of claude code?”
What it’s helped with: I started with ~20 skills and now have 181. Most were extracted from real sessions — TanStack table patterns, Slack message composition, Unleash agent config, Docker debugging. The compound effect is real: skills I created months ago save me hours every week because Claude Code already knows my patterns. It’s the closest thing I’ve found to a genuinely self-improving development environment.
Link: github.com/blader/Claudeception
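For context on what an extracted skill actually is: Claude Code skills are folders containing a SKILL.md whose YAML frontmatter (`name`, `description`) tells Claude when to load it. Below is a hypothetical sketch of the kind of skill claudeception might extract from a session; the name and body content are invented for illustration:

```markdown
---
name: tanstack-table-patterns
description: Column, sorting, and pagination conventions for TanStack Table in this codebase. Use when building or modifying data tables.
---

# TanStack Table patterns

## When to use
Building or editing any data table in the dashboard.

## Conventions
- Define columns with `createColumnHelper` in a separate `columns.tsx`
- Keep pagination and sort state in the URL, not component state
- Server-side filtering goes through the shared query hook, never ad hoc fetches
```

Because the `description` field is what Claude matches against, the extraction step matters most there: a skill with a vague description never gets loaded.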
3. Select Star + Snowflake MCP servers together
I have both Select Star (data catalog) and Snowflake (data warehouse) configured as MCP servers in Claude Code. Select Star is the map — it knows what tables exist, what columns mean, and how data flows. Snowflake is the engine — it executes queries and returns results. Neither is sufficient alone.
Example use: “The count of overdue tasks is incorrect in the dashboard. Spin up a sub-agent that uses the SelectStar MCP server to find the right table that has this data and then use the Snowflake MCP server to figure out what characteristics are consistent in these tasks that would surface them as correctly categorized as overdue tasks.”
What it’s helped with: Turns multi-day data analysis (Slack the data team, wait, guess column names, debug errors) into a minutes-long conversation. Claude discovers the right table via Select Star, verifies the schema via Snowflake MCP, then writes and runs the query. I’ve set this up for colleagues who don’t write SQL — they just describe what they want to know.
Links: Select Star MCP | Snowflake MCP
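For anyone wiring this up: Claude Code can read MCP servers from a project-level `.mcp.json` (or you can register them with `claude mcp add`). The overall shape below follows that documented format, but the package names, commands, and env vars are placeholders; check Select Star's and Snowflake's own MCP documentation for the real ones:

```json
{
  "mcpServers": {
    "selectstar": {
      "command": "npx",
      "args": ["-y", "selectstar-mcp-server"],
      "env": { "SELECTSTAR_API_TOKEN": "<token>" }
    },
    "snowflake": {
      "command": "uvx",
      "args": ["snowflake-mcp-server"],
      "env": {
        "SNOWFLAKE_ACCOUNT": "<account>",
        "SNOWFLAKE_USER": "<user>"
      }
    }
  }
}
```

With both registered, a single prompt can route discovery questions to the catalog server and execution to the warehouse server, which is exactly the map-plus-engine split described above.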
4. PR Demo GIFs with agent-browser
I built a skill that teaches Claude Code to autonomously create animated GIF walkthroughs of UI features for pull requests. Claude reads the git diff, plans a 5-8 frame storyboard, launches a browser, navigates through the feature, takes screenshots, and stitches them into an optimized GIF — all from one prompt.
Example use: “/pr-demo-gif-agent-browser to fully showcase the new feature and I can attach to a Slack message I send to the Account Executives saying something like ‘I added a feature where you can look at your book of business and different cuts of your book of business such as the closed lost and the open opportunities…’”
What it’s helped with: PR reviewers can see a feature working in 10 seconds without pulling the branch. GitHub renders GIFs inline but not video, so this constraint is baked into the design. I also use it to generate GIF walkthroughs to attach to Slack messages when shipping features to non-technical stakeholders.
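The capture side is browser automation, but the stitching step is simple enough to sketch. Below is a minimal, hypothetical version of that step in Python using Pillow; the function name, frame sizes, and timings are illustrative assumptions, not the skill's actual code:

```python
from PIL import Image  # Pillow

def stitch_gif(frame_paths, out_path, width=800, ms_per_frame=1500):
    """Resize screenshots to a common width and write an optimized looping GIF."""
    frames = []
    for path in frame_paths:
        img = Image.open(path).convert("RGB")
        height = round(img.height * width / img.width)  # preserve aspect ratio
        frames.append(img.resize((width, height)))
    frames[0].save(
        out_path,
        save_all=True,             # write every frame, not just the first
        append_images=frames[1:],
        duration=ms_per_frame,     # how long each frame stays on screen (ms)
        loop=0,                    # 0 = loop forever
        optimize=True,             # shrink the palette where possible
    )
```

Keeping frames to a fixed width and a slow per-frame duration is what makes the result readable inline on GitHub, where the GIF renders at PR-description size.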
5. Agent Teams (/teams)
Claude Code’s Agent Teams feature lets you orchestrate multiple Claude instances that communicate directly with each other. One session acts as team lead, spawning teammates that each get their own context window and can send peer-to-peer messages through a shared task list.
Example use: “It is taking a really long time for each of the tables on the account list page to load. Can you spin up a team of agents to figure out the root causes of the slow load times and list them out by what’s causing it the most to the least. And this is for every tab that this seems to be happening on.”
What it’s helped with: The key distinction from sub-agents is communication: sub-agents only report back to a parent, while Agent Teams let agents message each other directly. This matters when agent A discovers something agent B needs after A has already finished its own task. Most of my work still uses sub-agents — teams are reserved for when agents genuinely need to coordinate.
Link: Agent Teams docs
6. /insights (Claude Code’s built-in session retrospective)
Most Claude Code users don’t know this exists. Run /insights and it generates an interactive HTML dashboard analyzing all your past sessions — not just token counts, but qualitative patterns. It uses Claude Haiku to extract “facets” from each session transcript (what was the goal, did it succeed, where was friction, was the user satisfied), then synthesizes patterns across hundreds of sessions into a narrative report.
Example use: Just type /insights in any session. It saves a report to ~/.claude/usage-data/report.html. Open it in a browser.
What it surfaced for me: After ~490 sessions, it told me my top friction point was 99 instances of “wrong approach” — Claude starting in the wrong direction before I corrected it — and generated specific CLAUDE.md additions to preempt those mistakes in future sessions. It also identified my project areas (135 sessions on trialist dashboard features, 60 on code review, 55 on debugging), confirmed that plan-first TDD and parallel agent orchestration were my highest-satisfaction workflows, and described my usage pattern as “orchestrating Claude Code like a development team lead.” It reads less like a usage dashboard and more like a performance review from a dev lead who watched every session you ran.
That’s the current state of things. Each of these has been running for at least a few weeks and has earned its place. The ones that compound the most are #1 and #2 — superpowers gives structure to every session, and claudeception makes sure the good patterns stick around for next time.
If any of this is useful or you’ve found similar things that work, I’d like to hear about it.