Shipping a Feature in 45 Minutes: My Claude Code Workflow End to End
The fastest path from idea to shipped feature with Claude Code is not "type a prompt and let it rip." I know because I tried that approach for months, and I documented those hard-won lessons in my 400+ sessions retrospective. It works sometimes. When it doesn't, you lose an hour rebuilding something that should have taken twenty minutes.
The actual fastest path is a structured workflow that takes about 45 minutes for a medium-complexity feature. Eight steps, each one earning its place because skipping it burned me in the past. Not theoretically. Actually burned. Wasted sessions, broken builds, features that had to be redone from scratch.
This post walks through every step using a real example: adding a scheduled blog publishing feature to this portfolio site. Posts with a future date should stay hidden until that date arrives. Simple feature. The kind of thing that sounds like fifteen minutes of work until you realize there are three different ways to get it wrong.
Step 0: Memory Recall
Before writing a single line of code, I check whether I've been here before.
Claude Code starts every session from scratch. No memory of previous sessions. No awareness of past mistakes. Claude-mem fixes this by persisting observations across sessions. The key habit is searching it before you start working.
For the blog scheduling feature, the searches are obvious: "LESSON blog", "LESSON Next.js", "LESSON MDX". Three searches, five seconds each. What comes back is gold.
Claude-mem surfaced the MDX compilation rules I'd learned the hard way weeks ago. No curly braces in prose text -- MDX interprets them as JSX expressions and the build breaks. Always run next build after touching MDX-adjacent code. These rules are also in my CLAUDE.md, but the memory search surfaces the context around them: the specific session where the build broke, the exact error, and how long it took to diagnose. That context makes the guardrail feel real, not just a line in a config file.
It also surfaced a lesson about the blog data layer: the getAllPosts() function has no filtering by date. It returns everything. This is the kind of implementation detail you forget between sessions, and it's exactly the kind of detail that matters when you're about to add date-based filtering.
Memory recall takes about thirty seconds. The ROI is absurd. Every lesson surfaced is a mistake you don't repeat. Every architectural detail remembered is a debugging session avoided. Skip this step and you're coding blind, relying on Claude to rediscover things you already know.
Step 1: Brainstorm
With past context loaded, I invoke /brainstorm and describe what I want: "I want blog posts to auto-publish on their scheduled date. Posts with future dates should be invisible to readers."
Claude doesn't start coding. It starts asking questions. This is the entire point of the brainstorm step. Superpowers puts Claude into a Socratic questioning mode where it resists jumping to solutions and instead interrogates the problem.
The questions come fast:
How are posts currently dated? There's a date field in frontmatter. Is that the publish date or the writing date? For me it's the publish date, but Claude is right to ask -- the answer isn't obvious from the field name alone.
Is there any existing future-date filtering? No. getAllPosts() returns every post with published: true regardless of date. This confirms what claude-mem surfaced, and now Claude has it in active context.
How is the site deployed? Vercel, from a GitHub repo. Static generation at build time. This matters enormously because it determines whether date filtering happens at build time or runtime.
What triggers a rebuild? Only git pushes. There's no scheduled rebuild. So even if we add date filtering to the build, a post scheduled for next Tuesday won't appear on Tuesday unless something triggers a rebuild on that day.
That last question is the one that prevents building the wrong thing. Without it, I'd have added the date filter to getAllPosts(), pushed, and then been confused when scheduled posts didn't appear on their dates. The filter would have been correct. The deployment pipeline would have been missing. The feature would have been broken in a way that only manifests on the scheduled date -- the worst kind of bug to debug.
The brainstorm takes two to three minutes. It surfaces the deployment gap, confirms the data layer assumptions, and produces a clear shared understanding of what needs to be built. Two minutes well spent.
Step 2: Plan
With the brainstorm complete, I invoke /write-plan. Claude shifts from questioning mode to planning mode. It explores the codebase -- reads blog.ts to understand the data layer, checks vercel.json for deployment config, looks at the frontmatter format across existing posts, examines the GitHub Actions directory.
The plan it produces is specific and reviewable:
Task 1: Add a date <= today filter to getAllPosts(). This is the core logic. Posts with future dates won't appear in the blog index or any list views. One surgical change to one function.
Task 2: Gate getPostBySlug() for future dates. Even if a future post doesn't appear in the index, someone could still access it via direct URL. This task adds a 404 response for future-dated posts accessed directly. Without this, the scheduling feature has a loophole that anyone with the URL can bypass.
Task 3: Create a GitHub Actions cron workflow at .github/workflows/daily-publish.yml. This triggers a Vercel rebuild every day at midnight EST. When the rebuild runs, any posts whose date has arrived will pass the filter and appear on the site. This is the deployment piece that the brainstorm identified as missing.
Three tasks. Each one is small enough to verify independently. Each one addresses a specific requirement. Together they constitute the complete feature.
I review the plan. Is anything missing? The direct URL gating in Task 2 -- I wouldn't have thought of that myself. It's a security-by-obscurity hole that most people would miss. The plan caught it.
Is anything wrong? The cron schedule targets midnight EST. GitHub Actions evaluates cron expressions in UTC only -- there's no way to attach a timezone to a schedule -- so "midnight EST" has to be written as 5:00 UTC. I mention this to Claude and it updates the plan to express the schedule as 0 5 * * * in the workflow.
Plan review takes about two minutes. The plan is now a contract. Here's what we're building, here's the order, here are the edge cases we're handling. No ambiguity.
Step 3: Execute
I tell Claude to execute the plan. With the plan approved, implementation is focused and fast. Claude knows exactly what to build. No guessing, no assumption-making, no "I'll figure out the requirements as I go."
Task 1: Claude opens blog.ts and adds a date comparison to getAllPosts(). The change is four lines. It imports a date utility, gets today's date, and adds a filter predicate that checks post.date <= today. The existing sort and published-state filtering remain unchanged. Surgical.
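Based on that description, the Task 1 change might look like the sketch below. The Post shape and the todayISO helper are assumptions, and the function takes the post list as a parameter to stay self-contained; the real module reads frontmatter from disk and uses its existing parseISO utility.

```typescript
interface Post {
  slug: string;
  date: string; // ISO "YYYY-MM-DD" from frontmatter
  published: boolean;
}

// Today's date as an ISO day string, so the comparison matches the
// frontmatter format. (Stand-in for the module's own date handling.)
function todayISO(): string {
  return new Date().toISOString().slice(0, 10);
}

function getAllPosts(posts: Post[]): Post[] {
  const today = todayISO();
  return posts
    .filter((p) => p.published) // existing published-state filter
    // New predicate: <= so a post dated today is visible today.
    .filter((p) => p.date <= today)
    // Existing newest-first sort, unchanged.
    .sort((a, b) => b.date.localeCompare(a.date));
}
```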
Task 2: Claude modifies getPostBySlug() to check the post's date before returning it. If the date is in the future, it returns null, which the page component already handles as a 404. Two lines added. The null-handling path was already there because getPostBySlug() already returns null for nonexistent slugs. Claude reused the existing pattern instead of inventing a new one.
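The Task 2 gate follows the same shape. This sketch makes the same assumptions as above -- the real getPostBySlug() loads one post by slug from disk rather than searching an in-memory list:

```typescript
interface Post {
  slug: string;
  date: string; // ISO "YYYY-MM-DD"
  published: boolean;
}

function getPostBySlug(posts: Post[], slug: string): Post | null {
  const post = posts.find((p) => p.slug === slug && p.published) ?? null;
  if (!post) return null; // nonexistent slug: the existing 404 path
  const today = new Date().toISOString().slice(0, 10);
  // Future-dated post: reuse the same null -> 404 path instead of
  // inventing a new error case.
  if (post.date > today) return null;
  return post;
}
```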
Task 3: Claude creates .github/workflows/daily-publish.yml. The workflow runs on a cron schedule, once daily at 5:00 UTC (midnight EST). It triggers a Vercel deployment hook via a curl request to the Vercel deploy webhook URL (stored as a GitHub secret). The workflow file is twelve lines of YAML. Simple, readable, and doing exactly one thing.
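A workflow with that shape might look like the sketch below. The secret name is an assumption; the key detail is that GitHub Actions evaluates cron in UTC, so midnight EST is written as 5:00 UTC.

```yaml
# .github/workflows/daily-publish.yml (sketch -- secret name is an assumption)
name: daily-publish
on:
  schedule:
    - cron: "0 5 * * *" # 5:00 UTC = midnight EST; Actions cron is UTC-only
jobs:
  trigger-rebuild:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Vercel deploy hook
        run: curl -s -X POST "${{ secrets.VERCEL_DEPLOY_HOOK_URL }}"
```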
With subagents, Tasks 1 and 2 ran in parallel since they're independent edits to different functions. Task 3 ran alongside them since it's a new file with no dependencies. Total execution time: about three minutes.
Claude explains each change as it makes it. Not verbose explanations, just enough context to review intelligently: "Added date filter using the existing parseISO utility to keep the comparison consistent with how dates are already handled elsewhere in the module."
That kind of explanation matters. It's not just telling me what it did -- I can see that in the diff. It's telling me why it made specific implementation choices. The parseISO detail tells me Claude chose consistency over introducing a new date library. Good call.
Step 4: Verify
Code is written. Now I need to know it works.
First check: npx next build. This catches MDX compilation errors, TypeScript issues, and any broken imports. The blog scheduling feature doesn't touch MDX files, but the build runs the full pipeline including blog post compilation. If my date filter accidentally broke the data shape, the build would fail.
Build passes. Good.
Second check: existing blog posts. I scan the build output to confirm that all currently published posts are present. Since every existing post has a date in the past, none should be filtered out. They're all there.
Third check: the filter logic. I look at the date comparison code. Is it using <= or <? It's <=, meaning a post dated today will be visible today. That's the correct behavior -- you want the post to appear on its publish date, not the day after. Edge case handled.
Fourth check: the GitHub Actions workflow. I read the YAML. Is the cron syntax correct? Is the timezone set? Is the deploy hook URL referenced as a secret, not hardcoded? Yes, yes, and yes.
Claude can run these checks, and it did run the build. But I look at the output myself. This is the "trust but verify" step. Verifying every time means the one session where something actually broke is never the one session where I skipped the check.
Verification takes about five minutes, most of that being the build itself.
Step 5: Commit and Push
The feature works. Time to ship it.
I use the commit workflow from my custom commands and Claude handles the git mechanics. It stages the specific files that changed -- blog.ts and .github/workflows/daily-publish.yml -- not git add . which would catch whatever else is in the working directory. Specific file staging is a habit that prevents accidentally committing unrelated changes, env files, or editor artifacts.
Claude writes the commit message. Not "update blog.ts and add workflow" -- that describes the what, which I can already see in the diff. Instead: "feat(blog): add scheduled publishing with daily rebuild trigger." That describes the why. A month from now, when I'm reading the git log trying to understand when a behavior changed, the commit message tells me the intent, not just the file list.
Push to remote. Optionally create a PR if this is going through review. For a personal portfolio site, I push to main directly. For a team project, the PR creation is one more command in the same flow.
The entire git workflow -- stage, commit, push -- takes about thirty seconds. One invocation, not three separate commands where you have to think about what to stage and how to phrase the message.
Step 6: Review
After the code is committed, I run a review pass. For a change this small -- two file edits and one new file -- a full five-agent parallel review is probably overkill. But I do it anyway, because the habit matters more than the individual instance.
The review checks for things I might miss because I'm too close to the code. Does the date comparison handle timezone edge cases? Is the GitHub Actions workflow following the repository's existing CI/CD patterns? Is the Vercel deploy hook URL properly secured as a secret? Are there any missing error handling paths?
For this feature, the review flags one thing: the getAllPosts() filter compares dates as strings, not as Date objects. ISO date strings sort lexicographically the same as chronologically, so it works, but it's fragile. If anyone ever uses a non-ISO date format in frontmatter, the comparison breaks silently. The review suggests adding an explicit parseISO() call to make the comparison robust.
Good catch. I make the one-line fix, amend the commit, and force-push. That's exactly the kind of subtle correctness issue that code review exists to find. I wouldn't have caught it in my own review because I know all my dates are ISO format. But the code shouldn't depend on that assumption.
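The fix the review suggests might look like this sketch. Plain Date parsing stands in for the module's parseISO utility to keep the snippet dependency-free; the point is that an unparseable frontmatter date now fails loudly instead of comparing wrong.

```typescript
// Compare parsed dates, not raw strings, so a non-ISO frontmatter
// date raises an error instead of silently sorting incorrectly.
function isPublishedByDate(postDate: string, now: Date = new Date()): boolean {
  const parsed = new Date(postDate); // stand-in for parseISO()
  if (Number.isNaN(parsed.getTime())) {
    throw new Error(`Unparseable post date: ${postDate}`);
  }
  return parsed.getTime() <= now.getTime();
}
```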
Total review time: about two minutes.
Step 7: Reflect
The feature is shipped. The last step is reflection.
I invoke the reflect workflow. It reviews the session: what was built, how long it took, what went well, what didn't, and whether there are lessons worth persisting to memory.
For this session, the reflection identifies one lesson worth saving: "LESSON: blog.ts getAllPosts has no built-in date filtering -- check data layer assumptions before adding scheduling or time-based features." This goes into claude-mem as a persistent observation. Next time I or Claude work on anything date-related in the blog system, this lesson will surface in the memory recall step. The loop closes.
The reflection also notes what went well: the brainstorm caught the deployment gap before any code was written, and the plan's inclusion of direct-URL gating prevented a security hole. These aren't lessons to save -- they're confirmations that the process worked as intended.
Reflection takes about one minute. Most of that is Claude analyzing the session. My input is minimal: confirm the lesson is worth saving, maybe add a note if something happened that the automated reflection missed.
The Full Loop
Here's the workflow laid out as a cycle:
Memory Recall → Brainstorm → Plan → Execute → Verify → Commit → Review → Reflect → Memory Store
Each step feeds the next. Memory recall informs the brainstorm. The brainstorm shapes the plan. The plan structures the execution. Execution produces code to verify. Verified code gets committed. Committed code gets reviewed. The review reveals lessons. Lessons get stored in memory. Memory gets recalled at the start of the next session.
It's a cycle, not a line. The reflect step at the end feeds back into the memory recall step at the beginning. Every session makes future sessions better. The workflow improves itself over time because the lessons compound.
This is what I mean when I say Claude Code gets better the more you use it. The model doesn't improve between sessions. Your accumulated memory does. And the structured workflow ensures that memory actually gets captured and applied, not just experienced and forgotten.
Why Each Step Exists
None of these steps are arbitrary. Each one prevents a specific failure mode that I've experienced firsthand.
Memory recall prevents repeating past mistakes. Without it, every session starts from zero and the same bugs get rediscovered, the same wrong approaches get tried, the same architectural details get forgotten.
Brainstorm prevents building the wrong thing. Without it, Claude makes assumptions about what you want and starts building immediately. Sometimes those assumptions are right. When they're wrong, you don't find out until the feature is complete and doesn't match what you needed.
Plan prevents scattered implementation. Without it, Claude edits files in whatever order occurs to it, sometimes doubling back to modify things it already touched, sometimes missing edge cases that a structured plan would have identified upfront.
Execute with review prevents bad code from accumulating. Without review checkpoints during execution, implementation drift compounds. By task 4, the code might be solving a different problem than what task 1 set up.
Verify prevents broken builds and regressions. Without it, you push code that compiles on optimism rather than evidence. The build catches things that look correct in a diff but fail in practice.
Commit with specific files prevents accidental includes. Without targeted staging, git add . catches whatever happens to be in your working directory -- temporary files, env variables, experiments you didn't mean to ship.
Review prevents subtle bugs. Without it, correctness depends entirely on the author's ability to spot their own mistakes. The date-string comparison issue in this feature is exactly the kind of thing that passes author review and breaks six months later.
Reflect prevents repeating the session's mistakes in future sessions. Without it, lessons learned exist only in your human memory, which is unreliable, and are completely absent from Claude's memory, which is nonexistent.
Each step takes one to five minutes. The entire workflow takes about 45 minutes for a medium-complexity feature. That might sound like a lot for "filter blog posts by date and add a cron job." But the alternative -- jumping straight to implementation, discovering the deployment gap halfway through, rebuilding, missing the direct-URL loophole, fixing that later, forgetting the lessons -- would take longer and produce worse results.
The Speed Paradox
The structured workflow feels slower than "just start coding." And on any single step, it is slower. Typing /brainstorm and answering questions for two minutes is slower than immediately writing code. Writing a plan and reviewing it is slower than letting Claude start editing files.
But total time, from idea to shipped feature that works correctly, is consistently shorter with the structured workflow. The time you invest in each step is time you don't spend on rework, debugging, and fixing things that should have been caught earlier.
I've tracked this informally over about 50 features. The structured workflow averages around 40 minutes per feature. The unstructured approach averages around 55 minutes, with much higher variance. Some unstructured features ship in 15 minutes. Some take two hours because the initial approach was wrong and the rebuild takes longer than starting from scratch would have.
The structured approach has almost no variance. 35 to 50 minutes, every time, for medium-complexity features. Predictability is its own kind of speed. When you know a feature will take 45 minutes, you can plan your day around it. When a feature might take 15 minutes or two hours, you can't plan anything.
Making It Yours
My exact workflow won't be your exact workflow. The specific tools -- Superpowers, GSD, claude-mem, the review toolkit -- are the ones I've settled on after months of iteration. You might prefer different plugins, different step orderings, different levels of rigor at each step.
What matters isn't the specific tools. It's the shape of the loop. Some form of memory. Some form of requirements clarification before coding. Some form of planning before execution. Some form of verification after execution. Some form of reflection after shipping. This is the same principle behind context engineering -- the quality of the output depends on the quality of the context you provide, not just the prompt.
Those five elements, in that order, with whatever tools implement them, will produce better results than unstructured sessions. I'm confident enough in that claim to put it in print because the underlying principle is old: think before you build, verify after you build, learn from what you built. Claude Code doesn't change that principle. It just makes each step faster.
The investment is in the process. The process pays for itself on every feature, starting with the first one. 45 minutes to ship something that works correctly, with lessons stored for next time. That's the workflow. It's not glamorous. It's reliable. And reliable is what ships software.
Related Posts
Claude Code Isn't a Code Editor. It's a New Way to Use a Computer.
After a month of writing about Claude Code, here's the thing I keep coming back to: this isn't a developer tool. It's a new interface for computing.
What 400+ Sessions Taught Me About Working with Claude Code
After hundreds of Claude Code sessions across personal projects and production codebases, here are the lessons that took the longest to learn.
Subagents and Parallel Execution: Making Claude Code 5x Faster
Claude Code can spawn autonomous worker agents that run in parallel. Here's how subagents work, when to use them, and why they make complex tasks dramatically faster.