Seven months into using AI tools daily, my workflow has settled into something I'm genuinely happy with. It's not what I expected. When I started, I thought AI coding tools were about writing code faster. It turns out the bigger value is in everything around the code.

Phase 1: Planning

Before I write a single line of code, I talk to ChatGPT about what I'm building. Not in a formal design-doc way. More like thinking out loud with a colleague who has encyclopedic knowledge.

I describe the feature or system I'm building. I explain the constraints. Then I ask: "What approaches would you consider? What are the trade-offs of each?" The responses aren't always novel, but they're consistently thorough. It surfaces considerations I might have missed and forces me to articulate my requirements clearly.

I also use it to evaluate my own ideas. "I'm thinking of using WebSockets for this. What are the downsides?" Having to defend your approach against a well-informed critic makes you think more carefully about your choices.

Time spent: 10-15 minutes per feature. Time saved: probably hours of backtracking when you realize mid-implementation that your approach has a fatal flaw.

Phase 2: Writing Code

This is where Copilot earns its keep. I write the structural code myself: the architecture, the important business logic, the parts that require domain knowledge. Copilot handles the rest.

My rhythm goes like this: write a function signature and a comment describing what it should do. Copilot suggests the implementation. I review it, accept the good parts, modify the rest. For repetitive patterns (API route handlers, database queries, component props), Copilot is scary good. It picks up on the pattern from earlier in the file and replicates it.
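A sketch of that rhythm (the function and its name are invented for illustration, not from a real project): I write the signature and the comment, and Copilot fills in a body along these lines.

```typescript
// What I write: the signature plus a one-line comment describing intent.
// Group an array of records by a key field, e.g. orders by customerId.
function groupBy<T>(items: T[], key: keyof T): Map<unknown, T[]> {
  // What Copilot typically suggests: the straightforward implementation.
  const groups = new Map<unknown, T[]>();
  for (const item of items) {
    const k = item[key];
    const bucket = groups.get(k);
    if (bucket) {
      bucket.push(item);
    } else {
      groups.set(k, [item]);
    }
  }
  return groups;
}
```

For boilerplate like this, the review step is quick: I check the edge cases (empty input, missing keys) and move on.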

For non-trivial logic, I switch to ChatGPT. I describe the specific problem, paste in relevant code for context, and discuss the approach before writing it. This is particularly useful for algorithms, data transformations, and anything where the "right" approach isn't immediately obvious.
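To make that concrete, here's the kind of problem I'd take to ChatGPT rather than leave to autocomplete: flattening a nested comment thread into a depth-annotated list, where the recursive shape is worth talking through first. (The `Comment` shape is hypothetical.)

```typescript
// A hypothetical nested-comment shape, the kind of data
// transformation worth discussing before writing.
interface Comment {
  id: number;
  text: string;
  replies: Comment[];
}

// Flatten a comment tree into display order, tagging each
// comment with its nesting depth.
function flattenThread(
  comments: Comment[],
  depth = 0
): { id: number; depth: number }[] {
  return comments.flatMap((c) => [
    { id: c.id, depth },
    ...flattenThread(c.replies, depth + 1),
  ]);
}
```

The code is short, but the conversation beforehand is what settles questions like "depth-first or breadth-first?" before any code exists.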

Phase 3: Debugging

This is where AI has the highest ROI in my workflow. My old debugging process: stare at code, add console.logs, search Stack Overflow, stare more. New process: paste the error and relevant code into ChatGPT. Ask it what's wrong.

About 60% of the time, it nails the problem immediately. Another 20% of the time, it points me in the right direction even if the specific suggestion isn't quite right. For the remaining 20%, it's unhelpful and I fall back to traditional debugging.

The key technique: give it more context than you think it needs. Don't just paste an error message. Include the function that errored, the data being passed in, what you expected to happen, and what actually happened. The more context, the better the diagnosis.
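A made-up example of what that bundle looks like in practice: the failing function, the input, and the expected-versus-actual mismatch, all in one paste. Here the bug is JavaScript's default lexicographic sort.

```typescript
// The failing function I'd paste in:
function topScores(scores: number[]): number[] {
  return [...scores].sort().slice(0, 3); // bug: default sort compares as strings
}
// Input:    [9, 80, 7, 100]
// Expected: top three scores, descending: [100, 80, 9]
// Actual:   [100, 7, 80] — lexicographic order, not numeric

// The fix a diagnosis like this leads to: pass a numeric comparator.
function topScoresFixed(scores: number[]): number[] {
  return [...scores].sort((a, b) => b - a).slice(0, 3);
}
```

With all three pieces of context present, this is the kind of bug that gets identified on the first reply; with only the wrong output pasted in, it usually takes a few rounds.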

Phase 4: Refactoring

This was an unexpected win. I paste in a function or module that's gotten messy and say "refactor this for readability and maintainability." ChatGPT consistently produces cleaner versions. It extracts helper functions, improves naming, simplifies conditionals, and removes duplication.

I don't blindly accept refactoring suggestions. Sometimes it changes the logic while "simplifying" it. But as a starting point for cleaning up code, it's excellent. It gives me a clean version that I then review and adjust rather than starting the refactoring from scratch.
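A small before/after in the spirit of those suggestions (both functions are invented examples): nested conditionals collapsed into named intermediate values, with behavior unchanged.

```typescript
// Before: the kind of function I'd paste in.
function shippingCostBefore(weightKg: number, express: boolean, country: string): number {
  let cost = 0;
  if (country === "US") {
    if (express) {
      cost = weightKg * 8 + 10;
    } else {
      cost = weightKg * 5;
    }
  } else {
    if (express) {
      cost = weightKg * 12 + 20;
    } else {
      cost = weightKg * 9;
    }
  }
  return cost;
}

// After: the sort of cleanup that comes back — named intermediate
// values instead of nesting, same behavior.
function shippingCost(weightKg: number, express: boolean, country: string): number {
  const domestic = country === "US";
  const perKg = domestic ? (express ? 8 : 5) : (express ? 12 : 9);
  const surcharge = express ? (domestic ? 10 : 20) : 0;
  return weightKg * perKg + surcharge;
}
```

The review step matters here: I check the refactored version against the original on a few inputs precisely because "simplifying" can silently change the logic.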

Phase 5: Documentation

I hate writing documentation. Always have. AI has made it almost tolerable. I paste in a function or API endpoint and ask for JSDoc comments, README sections, or API documentation. The output is usually 80% there. I clean up the specifics and add context that only I know, but the structure and boilerplate are handled.
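Here's roughly what that 80% looks like for JSDoc (the function itself is an invented example): the parameter descriptions and return doc are the part the model drafts; the domain context is the part I add.

```typescript
/**
 * Truncate a string to a maximum length, appending an ellipsis
 * when truncation occurs.
 *
 * @param text - The input string.
 * @param maxLength - Maximum length of the result, including the ellipsis.
 * @returns The original string if it fits, otherwise a truncated
 *   copy ending in "…".
 */
function truncate(text: string, maxLength: number): string {
  if (text.length <= maxLength) return text;
  return text.slice(0, maxLength - 1) + "…";
}
```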

I've also started using it to write commit messages. I paste the diff and ask for a descriptive commit message. It's faster than thinking about it and the messages are consistently better than my lazy "fix stuff" commits.

Phase 6: Testing

My prompt here is simple: "Write unit tests for this function, covering happy path, edge cases, and error conditions." Then I review and adjust. AI-generated tests tend to be thorough but sometimes test implementation details rather than behavior. I fix that, but having the scaffolding saves a lot of time.
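An invented example of where I usually land after trimming: a small parser plus a table of input/output cases. Every assertion is about behavior (what comes out for what goes in), not about internals.

```typescript
// A hypothetical function under test.
function parsePositiveInt(raw: string): number | null {
  const n = Number(raw.trim());
  return Number.isInteger(n) && n > 0 ? n : null;
}

// The structure the AI scaffolds: happy path, edge cases, error conditions.
const cases: [string, number | null][] = [
  ["42", 42],    // happy path
  [" 7 ", 7],    // edge case: surrounding whitespace
  ["0", null],   // boundary: zero is not positive
  ["3.5", null], // error: not an integer
  ["abc", null], // error: not a number
];
for (const [input, expected] of cases) {
  if (parsePositiveInt(input) !== expected) {
    throw new Error(`parsePositiveInt(${JSON.stringify(input)}) !== ${expected}`);
  }
}
```

The fix I most often make is deleting assertions that pin down how the function works internally; the table format makes the surviving tests cheap to extend.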

The Numbers

I genuinely think I'm about 30-40% more productive than I was before these tools. That's not a precise measurement. It's a gut feeling based on how much I ship per week compared to a year ago. Some of that is from writing code faster. But most of it is from spending less time stuck, less time on boilerplate, and less time procrastinating on tasks I find tedious (tests, docs).

The tools cost me $30/month (Copilot + ChatGPT Plus). That's the best $30 I spend on anything work-related.

My advice: don't just use AI for code generation. Use it for every phase of development. The compounding effect across planning, writing, debugging, refactoring, docs, and testing is far bigger than any single use case.