How I manage my dev workflow with three Agent skills
One of the most exciting things about LLMs is their ability to do useful things that would be hard to script in the traditional sense. Shell scripts are great for deterministic tasks — rename these files, run this build, deploy to that server. But the moment you need judgment or the ability to recover when something unexpected happens, traditional scripting falls apart.
LLMs add a layer of fault tolerance that's genuinely hard to achieve any other way. A script fails when the output doesn't match the expected pattern. An LLM adapts.
This is what drew me to Claude Code skills. Skills are custom slash commands written in Markdown that teach an agent your specific workflow. I've been exploring AI in my own work for a while now, but skills have taken things to a new level. I bounce between Linear, GitHub, and the terminal all day. I wanted a way to stay in one place and let the agent handle the choreography. So I built a set of skills that form a complete loop:
- /new-issue — I describe a bug or feature in plain English. Claude files a well-structured Linear issue with acceptance criteria, labels, and the correct milestone.
- /next-task — it queries Linear for the highest-priority task, switches to the right git branch, implements the changes, opens a PR, and updates the issue status. If something blocks the task, it tells me instead of plowing ahead. That judgment is the part you can't script. (A sketch of this skill's Markdown follows the list.)
- /approve — one command takes a PR from "approved" to "shipped." It verifies checks are green and there's no unresolved feedback before squash-merging.
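Under the hood, each of these is just a Markdown file. Here's a rough sketch of the kind of instructions that sit behind /next-task. The frontmatter fields, the specific Linear steps, and the wording are illustrative rather than my literal file, and the exact layout depends on your agent (in Claude Code, a skill typically lives in its own folder as a SKILL.md):

```markdown
---
name: next-task
description: Pick up my highest-priority Linear task, implement it, and open a PR.
---

# Next task

1. Query Linear for my highest-priority issue that isn't started yet.
2. Move the issue to "In Progress".
3. Create a feature branch for the issue off the default branch.
4. Implement the change. Run the tests and linter before committing.
5. Open a PR that links the issue and summarizes what changed and how it was tested.
6. Move the issue to "In Review" and post the PR link on it.

If anything blocks the task (failing tests you can't explain, missing context,
ambiguous acceptance criteria), stop and report back instead of guessing.
```

Nothing in there is code. It's the procedure written the way you'd explain it to a new teammate, which is exactly why the agent can exercise judgment at each step.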
The best part is how easy skills are to create. I didn't write these by hand: I described my process to Claude in a few sentences, and it wrote out the full procedures and created the skill files. If my workflow changes, I update the Markdown or ask Claude to revise it.
These skills aren't just automation. They're a living specification for how I want work to get done. Branch naming. Commit messages. PR structure. Merge criteria. All encoded in Markdown files that the agent actually follows. These decisions used to live in my head or in a wiki that drifted out of date the moment I wrote it. Now they're interpreted with intelligence rather than executed literally. They're guidelines, not rigid scripts. If something is slightly off the agent figures it out instead of crashing.
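To give a flavor of what "guidelines, not rigid scripts" looks like in practice, here is a paraphrased sketch of a conventions section from one of these files (the branch pattern and issue ID are made up for illustration):

```markdown
## Conventions

- Branches: `<username>/<issue-id>-<short-slug>`, e.g. `alex/ENG-142-fix-retry-loop`
- Commits: imperative mood, short subject line, reference the issue ID
- PRs: a summary, a "How I tested this" section, and a link to the Linear issue
- Merging: squash-merge only, and only when CI is green and every review thread is resolved
```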
This points to something bigger. We're moving past AI as merely a code generator. Skills turn LLMs into workflow participants that understand your project's conventions and operate across tools on your behalf. It's not replacing the thinking. I still decide what to build and how to structure things. But the mechanical overhead of moving work through a pipeline? Let the agent handle that.
The ecosystem is growing fast. I use Claude Code, but the concept isn't limited to one tool. Vercel recently launched skills.sh, a directory of community skills that work across agents — Claude Code, Cursor, Copilot, Codex, and others. You can install them with npx skills add. There are already skills for React best practices and web interface guidelines. Or write your own.
What surprised me most about using skills for workflow is how quickly the mechanics disappear. You stop thinking about the process and just focus on the work. Try it with one small process. You might not go back.
Update: I wrote a follow-up post walking through how to build a /next-task skill from scratch, including a starter prompt and an example of it running.