People keep asking what tools I use. Instead of answering in fragments across Twitter threads, here's the complete stack. Every tool, why I chose it, and how it fits into my workflow.

Primary Coding: Claude Code

This is the backbone. Claude Code runs in my terminal and handles 70% of my code generation, refactoring, and debugging. I use it with a detailed CLAUDE.md file in every project that defines conventions, patterns, and constraints.
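The post doesn't reproduce the file, but to make the idea concrete, a project CLAUDE.md might look something like this — every convention below is invented for illustration:

```markdown
# CLAUDE.md

## Conventions
- TypeScript strict mode; never introduce `any`.
- New modules get unit tests in `__tests__/` next to the source.

## Patterns
- Data access goes through the repository layer; no raw SQL in handlers.

## Constraints
- Do not modify files under `migrations/` or `vendor/`.
- Run the test suite after every change and report failures.
```

The point is that the file encodes decisions once, so you stop re-explaining them in every session.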

Why Claude Code over IDE-based tools: it works in my existing terminal workflow. I don't need to switch to a different editor or learn a new interface. It reads files, makes changes, runs commands. Simple.

IDE: VS Code with Minimal Extensions

I use VS Code, but not as an AI tool. I turned off GitHub Copilot. I don't use Cursor. VS Code is my code reviewer, my git client, and my diff viewer. When Claude Code makes changes, I review them in VS Code's diff panel. That separation is intentional: I want my editor to show me what changed, not to suggest more changes.

Extensions I actually use: GitLens, Error Lens, Todo Highlight, and a theme (Vitesse Dark). That's it.

Architecture and Planning: Claude in the Browser

For high-level thinking, I use Claude's web interface with long conversations. I'll spend 30 minutes discussing architecture before writing a line of code. The web interface is better for this because I can easily paste diagrams, share screenshots of existing systems, and have a flowing conversation without triggering tool use.

My typical planning prompt starts with: "I need to design [system]. Here are my constraints: [list]. Here's what exists today: [description]. Let's think through the options before deciding."

Quick Questions: GPT-5

For fast, factual questions, I use GPT-5. "What's the syntax for a Postgres partial index?" or "How does the Node.js event loop handle I/O?" GPT-5 is snappy and concise for reference questions. I don't use it for generating code I'll actually ship.

Local Models: Ollama with Llama 3

I run Llama 3 70B locally through Ollama for two specific use cases: processing sensitive data that I can't send to external APIs, and quick file transformations where latency matters more than quality. "Convert this CSV to JSON with these field mappings" runs faster locally than round-tripping to an API.
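For contrast, here's what that kind of transformation looks like when written out deterministically in Python (field names are hypothetical) — the appeal of the local model is skipping exactly this boilerplate for one-off files:

```python
import csv
import io
import json

def csv_to_json(csv_text, field_map):
    """Convert CSV text to a JSON array, renaming columns per field_map
    ({csv_column: json_key}). Columns not in field_map are dropped."""
    reader = csv.DictReader(io.StringIO(csv_text))
    records = [
        {json_key: row[col] for col, json_key in field_map.items()}
        for row in reader
    ]
    return json.dumps(records, indent=2)

csv_text = "name,email\nAda,ada@example.com\n"
print(csv_to_json(csv_text, {"name": "fullName", "email": "contactEmail"}))
```

With the model, the same request is a one-line prompt against the file, no script to write or maintain.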

Documentation: Claude + Custom MCP Server

I built an MCP server that indexes our internal documentation. When Claude Code needs to reference our API specs, architecture decisions, or deployment procedures, it queries this server. This eliminated the biggest pain point of AI coding: the model not knowing about your internal systems.
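The server itself isn't shown in the post, but at its core a documentation-lookup tool is just retrieval over an index. A minimal sketch of that retrieval step, assuming a simple keyword-count ranking (document names and contents are invented):

```python
def search_docs(index, query, limit=3):
    """Rank indexed docs by how many query terms appear in their text.

    index: {doc_name: doc_text}. Returns the top-scoring doc names."""
    terms = query.lower().split()
    scored = []
    for name, text in index.items():
        lowered = text.lower()
        score = sum(lowered.count(term) for term in terms)
        if score:
            scored.append((score, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:limit]]

index = {
    "deploy.md": "Blue-green deploys run via GitHub Actions on merge to main.",
    "api-auth.md": "All internal APIs authenticate with short-lived JWTs.",
}
print(search_docs(index, "blue-green deploys"))
```

A real MCP server wraps a function like this as a tool the model can call, and would likely use proper full-text or embedding search rather than substring counts.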

Testing: AI-Assisted, Human-Verified

Claude Code writes the first draft of tests. I review them, add edge cases it missed, and make sure the test names describe actual behaviors. The AI gets about 80% of the test coverage right. The 20% I add manually is the important 20%: the weird edge cases that come from knowing how users actually abuse the system.
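A sketch of that split, with an invented function: the AI draft covers the obvious paths, and the human-added cases cover the abuse patterns.

```python
def parse_quantity(raw):
    """Parse a user-supplied quantity string into a positive int, or None."""
    try:
        value = int(raw.strip())
    except (ValueError, AttributeError):
        return None
    return value if value > 0 else None

# AI-drafted tests: the obvious paths.
assert parse_quantity("3") == 3
assert parse_quantity("abc") is None

# Human-added edge cases: how users actually abuse the field.
assert parse_quantity("  07 ") == 7   # copy-pasted whitespace, leading zero
assert parse_quantity("-1") is None   # negative quantity
assert parse_quantity(None) is None   # field missing from the client entirely
```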

Code Review: Three-Layer Process

Before opening a PR, I run three passes:

  1. Claude Code review. "Review these changes for bugs, security issues, and missed edge cases." Catches mechanical errors.
  2. VS Code diff review. I read every change myself. Takes 5-10 minutes for a typical PR. Catches logic issues the AI misses.
  3. Human reviewer. A teammate reviews with fresh eyes. Catches architectural issues neither I nor the AI noticed.

Deployment: Standard CI/CD

I don't use AI in my deployment pipeline. GitHub Actions, Docker, standard blue-green deploys. AI is great for writing code, but I want my deployment pipeline to be deterministic and predictable. No language models in the critical path.
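To make "standard" concrete, the pipeline described amounts to a workflow along these lines — job names, registry, and the deploy script are all invented for illustration:

```yaml
# .github/workflows/deploy.yml (sketch)
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t registry.example.com/app:${{ github.sha }} .
      - run: docker push registry.example.com/app:${{ github.sha }}
      # Hypothetical script that shifts traffic between blue/green environments.
      - run: ./scripts/blue-green-deploy.sh ${{ github.sha }}
```

Every step is a plain command with a deterministic outcome; there is nowhere for a model to inject variability.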

What I Stopped Using

GitHub Copilot. Turned it off. The inline suggestions broke my thinking flow. I'd be planning a function in my head and Copilot would suggest something plausible but wrong, and I'd lose my train of thought evaluating it.

Cursor. Good product, but I prefer the terminal workflow. Cursor's composer mode is powerful, but I found myself fighting the IDE integration more than benefiting from it.

ChatGPT for coding. Copy-pasting code from a chat window into an editor is a workflow from 2023. Agentic tools that edit files directly are strictly better.

The Philosophy

My stack is intentionally simple. One AI coding tool (Claude Code), one editor (VS Code), one planning interface (Claude web), one quick-reference model (GPT-5). Every tool has a clear role. Nothing overlaps.

The developers I see struggling with AI aren't using too few tools. They're using too many, with overlapping capabilities and conflicting suggestions. Pick one tool for each job and get very good at it.