I use AI for coding every day. I'm a believer. But there's a growing narrative that AI can generate production code for anything if you just prompt it right. That's not true, and pretending it is will get you into trouble. Here are the areas where I still write every line by hand.

Complex state management

AI is great at generating individual functions. It struggles with state that flows across an entire application. I recently built a real-time collaborative editor where multiple users modify the same document simultaneously. The state management involves conflict resolution, operational transforms, undo/redo stacks per user, and eventual consistency guarantees.

I tried using Claude to help with the conflict resolution logic. It generated code that handled simple cases correctly but failed catastrophically when three users edited the same paragraph simultaneously. The model doesn't have an intuitive understanding of concurrency. It can recite textbook CRDT algorithms, but translating those into working code that handles all the real-world edge cases requires a human who can mentally simulate the concurrent execution paths.
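To make the concurrency problem concrete, here is a minimal sketch of the position-transformation step at the heart of operational transforms. Everything here is a simplified illustration, not production OT: it handles only single-character inserts, and the tie-break on equal positions is exactly the kind of edge case that naive implementations get wrong.

```python
# Minimal operational-transform sketch: adjust a local insert position
# against a concurrent remote insert. Handles inserts only; the tie-break
# on equal positions (lower site id wins) is what keeps replicas converging.

def transform_insert(pos_a: int, pos_b: int, site_a: int, site_b: int) -> int:
    """Return pos_a adjusted so op A applies correctly after concurrent op B.

    If B inserted at or before A's position, A shifts right by one.
    On a position tie, the lower site id wins so every replica picks
    the same order.
    """
    if pos_b < pos_a or (pos_b == pos_a and site_b < site_a):
        return pos_a + 1
    return pos_a

def apply_insert(doc: str, pos: int, ch: str) -> str:
    return doc[:pos] + ch + doc[pos:]

# Two users edit "ac" concurrently: user 1 inserts "b" at 1,
# user 2 inserts "d" at 2. Each replica transforms the remote op
# against its own local op before applying it.
doc1 = apply_insert("ac", 1, "b")                              # replica 1: "abc"
doc1 = apply_insert(doc1, transform_insert(2, 1, 2, 1), "d")   # remote op shifts to 3
doc2 = apply_insert("ac", 2, "d")                              # replica 2: "acd"
doc2 = apply_insert(doc2, transform_insert(1, 2, 1, 2), "b")   # stays at 1
assert doc1 == doc2 == "abcd"                                  # replicas converge
```

This works for two sites and one pair of operations. Extending it to three concurrent editors, deletes, and undo stacks is where the mental simulation of execution paths becomes unavoidable.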

Business logic with domain nuance

Every business has rules that don't fit into clean abstractions. An insurance pricing engine where premiums depend on 40 variables with nonlinear interactions. A payment system where refund rules differ by payment method, jurisdiction, subscription type, and promotional status. A scheduling system where constraints include union rules, certifications, overtime limits, and employee preferences.

AI can generate the scaffolding for these systems. It can write the data models, the API endpoints, the basic CRUD operations. But the actual business logic, the 200 lines of nested conditionals that encode years of domain knowledge, needs to come from a human who understands why the rules exist. AI-generated business logic looks right in code review and breaks on the first edge case that wasn't in the prompt.
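As a flavor of what those conditionals look like, here is a hypothetical refund-eligibility check. Every threshold and rule below is invented for illustration; the point is that none of it can be derived from the code itself, only from someone who knows why each branch exists.

```python
from dataclasses import dataclass

# Hypothetical refund-eligibility rules -- every condition here is invented,
# standing in for the accreted domain knowledge a real payments engine encodes.

@dataclass
class RefundRequest:
    payment_method: str        # "card", "wire", "store_credit"
    jurisdiction: str          # ISO country code
    days_since_purchase: int
    is_promotional: bool

def refund_allowed(req: RefundRequest) -> bool:
    # Promotional purchases: no cash refunds, ever (invented policy).
    if req.is_promotional:
        return False
    # Certain EU jurisdictions: hard 14-day window regardless of method.
    if req.jurisdiction in {"DE", "FR", "NL"}:
        return req.days_since_purchase <= 14
    # Wire transfers are manual and expensive to reverse after a week.
    if req.payment_method == "wire":
        return req.days_since_purchase <= 7
    # Store credit is only ever refunded as store credit -- handled elsewhere.
    if req.payment_method == "store_credit":
        return False
    # Default: 30-day window for card purchases.
    return req.days_since_purchase <= 30
```

An AI can generate a function shaped like this on request. What it cannot do is know that the wire-transfer cutoff exists because of a settlement quirk at one specific bank, which is the knowledge you need when someone asks to change it.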

Distributed systems coordination

Anything involving multiple services communicating over a network is dangerous territory for AI-generated code. Retry logic with exponential backoff and circuit breakers. Distributed transactions that need to be eventually consistent. Message ordering guarantees across partitioned consumers. Failover logic that needs to handle partial failures gracefully.

AI models have seen plenty of distributed systems code in their training data. The problem is that correct distributed systems code looks almost identical to incorrect distributed systems code. The difference between "works" and "loses data under network partition" might be one missing check in a retry handler. AI doesn't understand the failure modes because it can't reason about network behavior the way an experienced systems engineer can.
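The retry handler mentioned above is a good example of how small that missing check can be. This is a sketch of exponential backoff with full jitter, under one loudly stated assumption: the operation being retried is idempotent. Retrying a non-idempotent write after an ambiguous timeout is exactly how you apply it twice.

```python
import random
import time

# Retry with exponential backoff and full jitter. Assumes `op` is idempotent
# and raises TransientError only for failures known to be safe to retry --
# dropping that distinction is the one-line bug that loses data.

class TransientError(Exception):
    pass

def call_with_retry(op, max_attempts=5, base_delay=0.1, max_delay=5.0,
                    sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return op()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # retry budget exhausted; surface the failure
            # Full jitter: uniform delay in [0, min(cap, base * 2^attempt)]
            # so a fleet of clients doesn't retry in lockstep.
            delay = random.uniform(0, min(max_delay, base_delay * 2 ** attempt))
            sleep(delay)

# Usage: an operation that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("connection reset")
    return "ok"

assert call_with_retry(flaky, sleep=lambda _: None) == "ok"
assert attempts["n"] == 3
```

Note what this sketch deliberately omits: a circuit breaker, a deadline budget, and any distinction between a request that timed out before versus after the server acted on it. Those are the parts that look identical in correct and incorrect versions.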

Performance-critical code

When I need code to handle 100K requests per second or process a billion-row dataset, I write it by hand and benchmark it. AI-generated code is typically "correct and reasonable" but not optimized. It'll use the standard library sort when you need a radix sort. It'll allocate memory in a loop when you need a pre-allocated buffer. It'll use reflection when you need generated code.

These optimizations require understanding the specific runtime characteristics, memory hierarchy, and bottlenecks of your system. AI can't profile your application. It can't see that 80% of CPU time is spent in one function. It can suggest general optimization patterns, but the last 10x of performance comes from specific, measured, targeted improvements that only a human with a profiler can identify.
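For a sense of what "standard library sort versus radix sort" means in practice, here is an LSD radix sort sketch for non-negative fixed-width integers. It is only worth reaching for when profiling shows sorting dominates and the keys fit this shape; for most workloads the built-in comparison sort wins, which is precisely why a model that can't see your profile defaults to it.

```python
# LSD radix sort for non-negative 32-bit integers: O(n * k/r) instead of
# O(n log n), but only pays off for large inputs of small fixed-width keys.
# A sketch, not a tuned implementation -- a production version would use
# counting passes over flat arrays rather than Python lists of buckets.

def radix_sort(nums: list[int], key_bits: int = 32, radix_bits: int = 8) -> list[int]:
    mask = (1 << radix_bits) - 1
    out = list(nums)
    for shift in range(0, key_bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for n in out:
            # Stable bucketing per digit preserves order from earlier passes.
            buckets[(n >> shift) & mask].append(n)
        out = [n for bucket in buckets for n in bucket]
    return out

data = [170, 45, 75, 90, 802, 24, 2, 66]
assert radix_sort(data) == sorted(data)
```

Whether this beats the built-in sort for your data is an empirical question: measure it on your inputs, on your runtime, before swapping it in.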

Security-sensitive code

Authentication flows, authorization logic, cryptographic operations, input sanitization for injection prevention. I don't trust AI to get these right because the cost of getting them wrong is too high. A subtle bug in a utility function causes an error message. A subtle bug in an authentication flow causes a data breach.

AI regularly generates code that looks correct on the surface but carries real security flaws. I've seen it produce JWT validation that skips signature verification, SQL queries built with string interpolation instead of parameterized queries, and password hashing with too few rounds. Each of these would pass a quick code review and ship a real vulnerability.
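The SQL interpolation case is easy to demonstrate end to end with the standard library's sqlite3 module. The vulnerable version below is the pattern that passes a quick review because it works on every input the reviewer thinks to imagine.

```python
import sqlite3

# Interpolated vs parameterized SQL, using sqlite3 from the standard library.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name: str):
    # VULNERABLE: attacker-controlled input becomes part of the SQL text.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized: the driver treats `name` as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
assert find_user_unsafe(payload) == [("alice", 0)]  # injection matches every row
assert find_user_safe(payload) == []                # treated as a literal name
```

Both functions return identical results for "alice", which is why the bug survives testing: the difference only appears on adversarial input.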

What this means practically

AI is an incredible tool for the 60-70% of coding that's well-understood patterns: CRUD endpoints, data transformation, UI components, utility functions, test boilerplate. For this work, AI makes me 2-5x faster.

For the remaining 30-40% where the real engineering challenge lives, AI is useful as a thinking partner but not as a code generator. I discuss the approach with Claude, sketch out algorithms together, and get feedback on my design. But I write the code myself, test it thoroughly, and take responsibility for every line.

Knowing which category each task falls into is the skill that separates developers who use AI effectively from those who ship AI-generated bugs to production. Develop that judgment and you'll be fine. Abdicate it and you'll learn the hard way.