Sharing what’s been working for me lately in my Claude Code setup in case it’s helpful for others. This is less of a polished guide and more of a changelog — things I’ve added, how I actually use them, and what’s stuck.
1. Automated demo GIFs from PRs
I built a skill that teaches Claude Code to autonomously create animated GIF walkthroughs of UI features for pull requests. Claude reads the git diff, plans a 5-8 frame storyboard, launches a browser, navigates through the feature, takes screenshots, and stitches them into an optimized GIF — all from one prompt. I don’t have to record Loom videos or take a bunch of manual screenshots to attach to Slack messages explaining new features anymore.
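The stitching step at the end is ordinary image tooling. A minimal sketch of the frames-to-GIF part using Pillow — the frame colors, timing, and filenames here are placeholders of mine, not what the skill actually uses:

```python
from PIL import Image

def stitch_gif(frames, out_path, ms_per_frame=1500):
    """Combine ordered screenshot frames into one optimized, looping GIF."""
    frames[0].save(
        out_path,
        save_all=True,
        append_images=frames[1:],
        duration=ms_per_frame,  # ms each frame stays on screen
        loop=0,                 # 0 = loop forever
        optimize=True,          # shrink the file for inline GitHub rendering
    )

# Stand-in for real browser screenshots: six solid-color frames.
frames = [Image.new("RGB", (320, 200), color) for color in
          ["white", "lightgray", "gray", "silver", "beige", "ivory"]]
stitch_gif(frames, "pr-demo.gif")
```

The `optimize=True` and `loop=0` flags matter for the GitHub use case: the file needs to stay small enough to render inline, and it should replay without the reviewer touching anything.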
Example use: “/pr-demo-gif-agent-browser to showcase the new feature and I can attach it to the Slack message I send to the Account Executives saying something like ‘I added a feature where you can look at your book of business and different cuts of your book of business such as the closed lost and the open opportunities…’”
What it’s helped with: PR reviewers can see a feature working in 10 seconds without pulling the branch. GitHub renders GIFs inline but not video, so this constraint is baked into the design. The same workflow generates walkthroughs I attach to Slack messages when shipping features to non-technical stakeholders.
Made possible by: Agent Browser CLI
2. Agent Teams (/teams)
Claude Code’s Agent Teams feature lets you orchestrate multiple Claude instances that communicate directly with each other. One session acts as team lead, spawning teammates that each get their own context window and can send peer-to-peer messages through a shared task list. Speeds up problem-solving by letting multiple agents work in parallel and compare notes — leads to faster debugging, broader exploration, and higher-quality output than a single agent.
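To make the peer-to-peer point concrete, here is a toy illustration of the shared-task-list idea — this is not Claude Code's actual internals, just a sketch of why direct messaging between teammates differs from report-to-parent:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Teammate:
    name: str
    inbox: deque = field(default_factory=deque)

    def send(self, other: "Teammate", note: str) -> None:
        # Peer-to-peer: a finding goes straight to the teammate who
        # needs it, not up through a parent session and back down.
        other.inbox.append((self.name, note))

# Hypothetical shared task list: one hypothesis per teammate.
tasks = ["N+1 queries on the accounts tab",
         "missing index on opportunities",
         "oversized JS bundle"]
team = [Teammate(f"agent-{i}") for i in range(len(tasks))]

# agent-0 finds something relevant to agent-1's hypothesis and
# messages it directly, even though agent-0's own task is done.
team[0].send(team[1], "accounts tab issues hundreds of queries per load")
```

In the sub-agent model, that last message would die with agent-0's final report; the parent would have to notice it and re-route it.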
Example use: “It’s taking forever for the account list page to load after we merged our last PR. Spin up a team of agents to figure out the root causes of the slow load times, each agent exploring a different hypothesis, ranked from most to least impactful, across every tab.”
What it’s helped with: The key distinction from sub-agents: sub-agents only report back to a parent, while Agent Teams let agents talk to each other. That matters when agent A discovers something agent B needs but A has already finished. Most of my work still uses sub-agents — teams are reserved for when agents genuinely need to coordinate across hypotheses or problem spaces.

Link: Agent Teams docs
3. Self-improving meta-loop (claudeception)
This is less a single tool and more a feedback loop I’ve built into how I use Claude Code. After any session where I learn something reusable, a hook prompts @blader’s /claudeception skill to evaluate the session — it analyzes what happened, extracts reusable patterns, and creates or updates skills automatically. Combined with Anthropic’s /skill-creator, my setup gets smarter every session.
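The hook itself is a few lines of config. A rough sketch of what mine amounts to in `.claude/settings.json` — the event choice and the reminder text are my own setup, not part of the claudeception repo, so check the hooks docs for the exact schema:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "echo 'If this session produced a reusable pattern, run /claudeception to extract it into a skill.'"
          }
        ]
      }
    ]
  }
}
```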
Example use: “I know I’m going to set up Google OAuth for every application I deploy like we did in this session. [if not automatically triggered] Use /claudeception and /skill-creator to create the optimal skill for future sessions when I need to do the same thing?”
What it’s helped with: I started with ~20 skills and now have 181. Most were extracted from real sessions — TanStack table patterns, Slack message composition, Unleash agent config, Docker debugging. The compound effect is real: skills I created months ago save me hours every week because Claude Code already knows my patterns. It’s the closest thing I’ve found to a genuinely self-improving development environment.
Link: github.com/blader/Claudeception
4. Select Star + Snowflake MCP servers together
I have both Select Star (data catalog) and Snowflake (data warehouse) configured as MCP servers in Claude Code. Select Star is the map — it knows what tables exist, what columns mean, and how data flows. Snowflake is the engine — it executes queries and returns results. Using them together means way fewer hallucinations on column names and table structure, and I can hand this off to a sub-agent rather than doing it myself.
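Both servers are registered the standard MCP way. A hedged sketch of what a project-level `.mcp.json` might look like — the commands, package names, and env var names below are placeholders I made up, not the real distribution names:

```json
{
  "mcpServers": {
    "select-star": {
      "command": "npx",
      "args": ["-y", "@selectstar/mcp-server"],
      "env": { "SELECTSTAR_API_TOKEN": "${SELECTSTAR_API_TOKEN}" }
    },
    "snowflake": {
      "command": "uvx",
      "args": ["snowflake-mcp-server"],
      "env": {
        "SNOWFLAKE_ACCOUNT": "${SNOWFLAKE_ACCOUNT}",
        "SNOWFLAKE_USER": "${SNOWFLAKE_USER}"
      }
    }
  }
}
```

The point of having both in one config is that a single sub-agent can hop from catalog lookup to live query without me re-prompting in between.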
Example use: “The overdue task count is wrong in the dashboard. Spin up a sub-agent that uses Select Star to find the right table, then Snowflake to figure out which characteristics should cause a task to be correctly categorized as overdue.”
What it’s helped with: Turns multi-day data analysis (Slack the data team, wait, guess column names, debug errors) into a minutes-long conversation. Claude discovers the right table via Select Star, verifies the schema via Snowflake MCP, then writes and runs the query. I’ve set this up for colleagues who don’t write SQL — they just describe what they want to know.
Links: Select Star MCP | Snowflake MCP
5. Superpowers (Obra’s workflow methodology)
Jesse Vincent’s superpowers skill solves the biggest problem with AI coding agents: they skip steps. Instead of jumping straight to code, superpowers enforces a full workflow — brainstorming → planning → implementation → testing → review → finish. It includes 13 sub-skills that chain together. The ones I use most frequently: /brainstorming (Socratic clarification before any feature work), /writing-plans (bite-sized implementation steps), /executing-plans (systematic task-by-task execution with verification), and /using-git-worktrees (isolated branches per feature).
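Under the hood, /using-git-worktrees is standard git. A runnable sketch of the lifecycle, using throwaway paths and branch names of my own choosing:

```shell
# Scratch repo purely for illustration; in real life you run this in your project.
rm -rf /tmp/myapp /tmp/myapp-sales-play
git init -q -b main /tmp/myapp && cd /tmp/myapp
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "init"

# Check out a new feature branch in its own directory, leaving the main
# checkout untouched so another session can keep working there.
git worktree add ../myapp-sales-play -b feature/sales-play

# ...do the feature work inside ../myapp-sales-play, commit, open the PR...

# Tear it down once the branch is merged.
git worktree remove ../myapp-sales-play
git branch -d feature/sales-play
```

The isolation is the point: each feature gets its own working directory, so a long-running agent session on one branch never steps on uncommitted state from another.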
Example use: Someone Slacks me “I have an idea for a new sales play.” I paste the screenshot along with my scattered thoughts into a markdown file. Superpowers helps me brainstorm and hone the idea, explores the codebase, designs a path forward with trade-offs, writes an implementation plan, and then executes and tests it in a new worktree.
What it’s helped with: My full feature workflow is now brainstorming → writing-plans → using-git-worktrees → executing-plans → finishing-a-development-branch. Every step is a superpowers sub-skill. It prevents the “vibe coding” failure mode where you build the wrong thing fast. The brainstorming sub-skill alone has saved me from going down the wrong path dozens of times by forcing clarifying questions before committing to an approach.
Link: github.com/obra/superpowers
That’s the current state of things. Each of these has been running for at least a few weeks and has earned its place. The ones that compound the most are #3 and #5 — claudeception makes sure the good patterns from every session stick around, and superpowers gives structure to every session so those patterns get applied correctly.
If any of this is useful or you’ve found similar things that work, I’d like to hear about it.