Two years ago, AI was a novelty in my development workflow. Now it is embedded in nearly every stage. This is not a prediction about the future or an argument about whether AI is good for programming. It is a practical walkthrough of what my workflow actually looks like today.

Planning and Design

Before writing any code, I describe the feature or system I am building in a conversation with Claude. Not as a prompt-and-receive exercise, but as an actual back-and-forth design discussion. I explain the requirements, the constraints, the existing architecture, and the trade-offs I am considering.

The model pushes back on weak assumptions, suggests alternatives I had not considered, and identifies potential issues early. It is not a replacement for design reviews with human colleagues, but it is available at 2am when I am working through a problem solo.

I keep these planning conversations saved. They serve as informal design documents that I can reference later to understand why certain decisions were made.

Scaffolding and Boilerplate

Once the design is settled, I use AI to generate the initial structure. This includes project scaffolding, database schemas, API endpoint stubs, type definitions, and configuration files. The key is that I have already made the design decisions - the AI is just translating them into code faster than I could type.

For a typical new microservice, this saves about an hour of setup time. The generated code is not production-ready, but it gives me a working skeleton that compiles and runs. I then iterate on this skeleton, replacing placeholder logic with real implementations.
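The kind of skeleton this produces can be sketched as follows. This is a hand-written illustration in Python, not actual generated output; the service, schema, and field names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical schema for an "orders" microservice - the design decisions
# (fields, types, nullability) were made before generation.
@dataclass
class Order:
    order_id: str
    customer_id: str
    total_cents: int
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    notes: Optional[str] = None

def create_order(customer_id: str, total_cents: int) -> Order:
    """Endpoint stub: compiles and runs, but the placeholder logic
    still needs to be replaced with a real implementation."""
    if total_cents < 0:
        raise ValueError("total_cents must be non-negative")
    return Order(order_id="TODO-generate-id", customer_id=customer_id,
                 total_cents=total_cents)
```

The value is not in any single stub but in having every schema, stub, and config file exist at once, so iteration starts immediately.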

Implementation

For core business logic, I still write most of the code myself. This is the part where domain knowledge, system understanding, and careful thought matter most. AI tools help at the margins - generating helper functions, writing data transformation code, implementing well-known algorithms.

The tool I use most during implementation is Copilot in VS Code. It handles the mechanical parts of coding: completing repetitive patterns, generating function bodies from clear signatures, and writing the glue code between components. I accept maybe 30-40% of its suggestions after reading them.
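"Generating function bodies from clear signatures" works because the signature and docstring carry the full specification. A hand-written example of the pattern (not actual Copilot output):

```python
def chunk(items: list, size: int) -> list[list]:
    """Split items into consecutive chunks of at most `size` elements."""
    # A signature and docstring this explicit is usually enough context
    # for a completion tool to produce the body below.
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]
```

The read-before-accepting step matters: a completion for a vaguer signature is far more likely to guess wrong about edge cases like `size <= 0`.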

For more complex implementation questions, I switch to Claude in a separate window. Copilot is good for inline completions, but for questions like "how should I structure error handling across these three services?", a conversational interface works better.
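A discussion like that typically ends in a pattern rather than a snippet. As a minimal sketch of one possible answer (the error classes and codes here are hypothetical, not a prescribed design): a shared exception hierarchy with stable error codes, so all services serialize failures identically.

```python
import json

class ServiceError(Exception):
    """Base error carrying a stable code so every service
    serializes failures the same way."""
    code = "internal_error"
    status = 500

    def to_payload(self) -> dict:
        return {"error": {"code": self.code, "message": str(self)}}

class NotFoundError(ServiceError):
    code = "not_found"
    status = 404

class ValidationError(ServiceError):
    code = "validation_failed"
    status = 400

def handle(exc: Exception) -> tuple[int, str]:
    """Single translation point from exceptions to HTTP responses."""
    if isinstance(exc, ServiceError):
        return exc.status, json.dumps(exc.to_payload())
    # Unknown exceptions never leak internals to the client.
    return 500, json.dumps(
        {"error": {"code": "internal_error", "message": "unexpected failure"}})
```

The structural decision - one translation point per service, shared codes - is exactly the kind of thing a back-and-forth conversation surfaces faster than inline completion.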

Testing

This is where AI has had the biggest measurable impact on my code quality. I generate test scaffolding with AI and then refine it. My prompt is usually: "Here is the function. Write unit tests covering the happy path, edge cases, and error conditions." The generated tests catch about 80% of what I would write manually, and they often include edge cases I would have skipped.
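The shape of the result looks something like this. The function and tests below are an illustrative sketch, not actual model output:

```python
import unittest

def normalize_email(raw: str) -> str:
    """Lowercase and strip an email address; reject obviously malformed input."""
    cleaned = raw.strip().lower()
    if "@" not in cleaned or cleaned.startswith("@") or cleaned.endswith("@"):
        raise ValueError(f"invalid email: {raw!r}")
    return cleaned

class TestNormalizeEmail(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(normalize_email("  Ada@Example.COM "), "ada@example.com")

    def test_edge_case_at_boundaries(self):
        # Edge case I might have skipped manually: "@" at either end.
        with self.assertRaises(ValueError):
            normalize_email("@example.com")
        with self.assertRaises(ValueError):
            normalize_email("ada@")

    def test_error_missing_at(self):
        with self.assertRaises(ValueError):
            normalize_email("not-an-email")
```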

I also use AI to generate test data. Building realistic but diverse test fixtures used to be tedious. Now I describe the shape of data I need and get a comprehensive set of test cases in seconds.
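A fixture generator of the kind I would describe and ask for might look like this - the field names and boundary values are hypothetical, and the important properties are determinism (seeded) and deliberate diversity:

```python
import random

def make_user_fixtures(seed: int = 42) -> list[dict]:
    """Deterministic, deliberately diverse user fixtures."""
    rng = random.Random(seed)  # seeded so test runs are reproducible
    names = ["Ada", "Grace", "José", "毛利", ""]  # unicode and empty-string cases
    return [
        {
            "id": i,
            "name": name,
            "age": rng.choice([0, 17, 18, 65, 120]),  # boundary ages
            # Nullable field: the empty-name user has no email.
            "email": None if name == "" else f"user{i}@example.com",
        }
        for i, name in enumerate(names)
    ]
```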

The tests still need human review. AI-generated tests sometimes test implementation details instead of behavior, and they occasionally miss the most important invariants of the system. But as a starting point, they are significantly better than a blank file.

Code Review

Before opening a PR, I run my diff through Claude and ask for a review. I paste the diff along with context about what the change is supposed to do. It catches issues I missed - unused variables, inconsistent error handling, missing null checks, potential performance problems.

This is not a replacement for human code review. A human reviewer understands the broader system context and business implications in ways the model cannot. But the AI pre-review catches the mechanical issues, which means human reviewers can focus on design and logic instead of pointing out that I forgot to handle an empty array.
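The empty-array case is representative of what a pre-review catches. A hypothetical before-and-after, with the missing guard marked:

```python
def average_latency_ms(samples: list[float]) -> float:
    """Mean request latency across samples.

    The original draft divided unconditionally; a pre-review of the
    diff flagged that an empty sample list would raise ZeroDivisionError.
    """
    if not samples:  # the guard the pre-review flagged as missing
        return 0.0
    return sum(samples) / len(samples)
```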

Documentation

I generate first drafts of API documentation, README files, and inline code comments with AI. The drafts are usually 70-80% correct and need editing for accuracy and tone. But starting from a draft is much faster than starting from nothing, especially for the kind of documentation that most developers procrastinate on.

Debugging

When something breaks, my first move is now often to describe the symptoms to Claude: the error message, what I expected, what actually happened, and the relevant code. For straightforward bugs, this is faster than manually tracing through the code. For complex bugs, it gives me hypotheses to test instead of staring at a stack trace trying to figure out where to start.

What This Workflow Costs

I pay for Claude Pro ($20/month), GitHub Copilot ($10/month), and occasional API usage (~$30/month). Total: about $60/month. The time savings are well worth it - I estimate at least 8-10 hours per week that I can redirect from mechanical coding to design, thinking, and building features.

What I Still Do Manually

Architecture decisions. Security reviews. Performance optimization. Anything that requires understanding the full system context or making judgment calls with incomplete information. AI is a powerful accelerator for known patterns, but it does not replace the engineering judgment that comes from years of operating production systems.

The goal is not to have AI write all your code. It is to have AI handle the parts that do not require your full attention so you can focus that attention where it matters most.