
Claude Code Is Misnamed

Claude Code is misnamed.

It’s not a coding tool. It’s a general agent that happens to be good at coding.

I’ve been thinking about this for a while, but a few things this week made it click.


Simon Willison wrote about Anthropic’s new “Skills” feature and made a broader observation: “Claude Code is, with hindsight, poorly named. It’s not purely a coding tool: it’s a tool for general computer automation. Anything you can achieve by typing commands into a computer is something that can now be automated by Claude Code.”

When I started using Claude Code, I thought of it as a way to write code faster. And it does that. But what I was actually doing was building automations—systems that run without me, that solve business problems, that teams depend on daily.

A sales pipeline that queries our CRM, routes accounts by territory, filters contacts by seniority, and imports directly to our sales tools. A web app that generates personalized collateral in a few minutes instead of the 30 to 60 it used to take. A competitive intelligence agent that monitors our competitors and delivers findings to Slack every morning.

None of these are “coding projects” in the traditional sense. They’re business automations. The code is incidental. The automation is the point.


Lenny Rachitsky’s newsletter had 50+ examples of non-technical people using Claude Code. His reframe: “Forget that it’s called Claude Code and instead think of it as Claude Local or Claude Agent.”

Someone who downloads all their meeting recordings and asks Claude Code to identify times they’ve subtly avoided conflict. Someone who uses it as a personal organization assistant—finding duplicate files, reviewing directory structures, suggesting improvements. Someone who built a DIY subagent to help them design and build a slide tower for their kid.

A mom who voice-records ideas during morning stroller walks, feeds the rambling notes to Claude Code, and gets back coherent research themes, draft articles in her exact voice, and LinkedIn posts ready to publish.

One person created a slash command that analyzes their journal entries against their git commits to find gaps between what they said they’d do and what they actually did. “Like having a COO that learns from my patterns.”

None of this is coding. It’s directing an AI to do things on your computer.


The shift: from “AI as a tool you prompt” to “AI as an agent you direct.”

A tool is ChatGPT in a browser window. You prompt it, it responds, you do something with the response. The tool is reactive. It waits for input.

An agent is different. You teach it how your company works. You give it access to your systems. You point it at problems. It runs.

Claude Code is already an agent. Most people just don’t realize it because of the name.


Anthropic released “Skills” recently, which makes the agent framing more explicit.

A skill is a set of instructions that teaches Claude how to do something your way. How you format reports. How you structure follow-up emails. How you want competitive research presented. You encode that expertise once, and Claude applies it whenever the relevant task comes up.
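To make that concrete: as I understand it, a skill is essentially a folder containing a SKILL.md file, with a bit of metadata up top (a name, plus a description that tells Claude when to reach for it) followed by plain-English instructions. Here's a hypothetical one, invented for illustration, just to show the shape:

```markdown
---
name: competitive-brief
description: How to format competitive research findings for our team. Use when asked to summarize competitor news or prepare a competitive brief.
---

<!-- Hypothetical skill, written for illustration only -->

# Competitive brief

- Lead with the single most important change since the last brief.
- Group the rest by competitor, newest first.
- For each item: what changed, why it matters to us, and a suggested response.
- Keep it under one page and link to sources at the end.
```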

The technical details are less important than what it represents: a shift from one-off prompting to persistent, reusable expertise that an agent can apply on your behalf.

But even without skills, the capability is already there. Claude Code has been a general-purpose automation platform this whole time. Skills just make it easier to see.


There’s a caveat.

Aaron Levie wrote recently about Jevons Paradox applied to knowledge work. The core idea: efficiency doesn't reduce usage; it unlocks usage that wasn't possible before. When steam engines burned coal more efficiently, demand for coal went up. Mainframes became minicomputers became PCs, with each generation shipping 100X more units.

AI agents are doing this to knowledge work. They’re not reducing the work—they’re unlocking work that wasn’t economical before.

But Levie also noted: “AI agents require management, oversight, and substantial context to get the full gains.”

That’s the catch. The democratization is real—anyone can install Claude Code and start automating. But actually capturing the value requires a skill that’s still unevenly distributed.

I’ve watched people try these tools and give up because they couldn’t get past the initial friction. The capability is theoretically available to everyone, but realizing it requires learning to direct AI agents effectively.

Stack Overflow’s 2025 survey shows 82% of developers using AI tools weekly, but positive sentiment has dropped from 70%+ to 60%. A study from METR found developers actually took 19% longer on tasks when using AI—even though 52% believed they were faster. There’s a gap between capability and effective use.

Maybe the tools get easier. Maybe the gap closes. But right now, it’s real.


Here’s where I land.

Claude Code being misnamed isn’t just a branding issue. It shapes how people think about what’s possible.

If you think it’s a coding tool, you think it’s for developers. If you realize it’s a general agent, you start asking different questions. What could I automate? What expertise could I encode? What would I build if the cost of building dropped by 90%?

Those are business questions, not technical questions. And they’re available to anyone willing to learn how to direct these tools.

I don’t know where the ceiling is. But I know that “learning to direct AI agents” is becoming a real skill category—one that didn’t exist a few years ago and is now quietly becoming valuable.

Probably worth developing. Probably sooner rather than later.

