I've been collecting these for months. Every time an AI coding tool confidently gives me code that's wrong in a non-obvious way, I save it. Not to bash the tools; I use them every day, and they make me more productive. I save them because understanding how they fail makes you better at using them.
Here are real examples from my actual projects, not contrived tests.
The Made-Up Method
I asked ChatGPT to help me parse multipart form data in Express. It told me to use req.files.map(f => f.arrayBuffer()). The code looked perfectly reasonable. The arrayBuffer() method exists on Blob objects in the browser. But Express's file objects (from multer) don't have an arrayBuffer() method. They have a buffer property.
The AI mixed up browser APIs with Node.js APIs. It created a plausible-looking method call by combining real concepts from different contexts. I spent 20 minutes debugging "arrayBuffer is not a function" before I realized the method simply doesn't exist where the AI said it does.
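You can see the mix-up in miniature with plain Node.js, no Express or multer required. This is a sketch, not my actual route code: a Node Buffer (which is what multer's file objects carry) simply has no arrayBuffer() method, though the underlying ArrayBuffer is reachable if you really need it.

```javascript
// multer file objects carry a Node.js Buffer (a Uint8Array subclass).
// arrayBuffer() lives on Blob, a browser API -- Buffers don't have it.
const buf = Buffer.from('file contents');

console.log(typeof buf.arrayBuffer); // 'undefined' -- the hallucinated method
console.log(buf.length);             // 13 -- what multer-style code actually uses

// If you genuinely need an ArrayBuffer, slice the one underneath:
const ab = buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);
console.log(ab.byteLength);          // 13
```
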
The Deprecated API
I asked for help setting up authentication in Next.js. ChatGPT suggested using getServerSideProps with a specific pattern for checking sessions. The pattern it showed was from an older version of next-auth. The function signatures had changed. The session object had a different shape. Everything compiled, but the session check always returned null.
This is maybe the most common hallucination category. The AI learned from training data that includes old tutorials, old documentation, and old Stack Overflow answers. It generates code for library versions that existed during training, not the version you're actually using. Always check that method signatures match your installed version.
The Library That Doesn't Exist
This one was wild. I asked for help with a specific data validation task and ChatGPT recommended a library called "schema-validator-pro" with a detailed example of its API. I ran npm install schema-validator-pro. Package not found. Because it doesn't exist. The AI invented a library name, invented its API, and wrote a convincing usage example for a thing that was never real.
I've seen this happen three times now. The model generates a plausible-sounding package name that follows npm naming conventions and creates a fictional API that looks like what you'd expect from a package with that name. If you're not in the habit of checking that packages actually exist before installing them, you'll waste time on this.
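The habit itself takes ten seconds: ask the registry before you install. Using the invented name from above as the example:

```shell
# A real package prints its version; an invented one makes npm exit
# with an E404 error, which is your cue to stop.
npm view schema-validator-pro version

# For packages that do exist, the same command is a quick sanity check
# before installing something an AI suggested.
npm view express version
```
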
The Subtle Logic Error
I asked Copilot to write a function to check if two date ranges overlap. It generated:
function rangesOverlap(start1, end1, start2, end2) {
  return start1 < end2 && start2 < end1;
}
This is actually the correct algorithm for open intervals. But my application used closed intervals (where ranges that share an endpoint should count as overlapping). The correct version needed <= instead of <. Copilot had no way to know my business requirement, so it picked the more common mathematical definition.
This is the sneakiest type of hallucination. The code is technically correct for one interpretation of the problem. Just not your interpretation. And because it "looks right" and even passes basic tests, you might not catch it until edge cases hit production.
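For the record, here's the closed-interval version my application needed next to Copilot's version, with the one boundary case where they disagree:

```javascript
// Closed intervals: ranges that share an endpoint count as overlapping.
function rangesOverlapClosed(start1, end1, start2, end2) {
  return start1 <= end2 && start2 <= end1;
}

// Open/half-open intervals, as Copilot wrote it:
function rangesOverlapOpen(start1, end1, start2, end2) {
  return start1 < end2 && start2 < end1;
}

// The two only disagree when ranges merely touch:
console.log(rangesOverlapClosed(1, 5, 5, 9)); // true
console.log(rangesOverlapOpen(1, 5, 5, 9));   // false
```
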
The Confident Wrong Explanation
I was debugging a memory leak in a Node.js application. I described the symptoms to ChatGPT and it confidently told me the issue was with event listeners not being cleaned up. It suggested a specific fix. I spent an hour implementing the fix. The memory leak persisted.
The actual cause was a closure holding a reference to a large object in a timer callback. ChatGPT's suggestion was a real cause of memory leaks, just not this memory leak. It picked the most statistically likely cause based on the symptoms I described, not the actual cause based on my code.
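A stripped-down sketch of that bug and its fix (function and variable names are illustrative, not from my real code):

```javascript
// Illustrative sketch: a timer callback's closure pins a large object
// in memory for as long as the timer is alive.
function doWork(item) { /* stand-in for real processing */ }

function startPolling() {
  const hugeCache = new Array(1_000_000).fill('data');

  // The callback only needs one element, but its closure keeps ALL of
  // hugeCache reachable, so the GC can never reclaim it.
  return setInterval(() => doWork(hugeCache[0]), 1000);
}

function startPollingFixed() {
  const hugeCache = new Array(1_000_000).fill('data');
  const first = hugeCache[0]; // capture only what the callback needs

  // hugeCache itself becomes collectable once this function returns --
  // and remember to clearInterval() the returned timer when done.
  return setInterval(() => doWork(first), 1000);
}
```
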
How I've Learned to Catch These
Verify every import. If the AI suggests a library or method you haven't used before, check the actual docs before writing more code on top of it.
Run the code early. Don't write 50 lines of AI-generated code before testing. Generate a small chunk, run it, confirm it works, then continue. This catches hallucinated APIs immediately.
Watch for excessive confidence. When ChatGPT says "simply use X" or "the standard approach is Y" for something you're not sure about, that's a signal to verify. High confidence from the AI doesn't correlate with high accuracy.
Check version numbers. If the AI suggests a pattern, check it against the docs for your specific library version. npm docs packagename or a quick search for "packagename v4 migration guide" can save hours.
Test edge cases explicitly. AI tends to write happy-path code. After generating any logic, manually think through: what happens with null, empty, zero, negative, very large, or concurrent inputs?
The Bigger Picture
Hallucinations aren't going away soon. They're a fundamental property of how language models work: they generate plausible text, and plausible text isn't always correct text. The models will get better, but they'll still hallucinate.
The practical response isn't to stop using these tools. It's to build habits that catch hallucinations quickly. Think of it like using a calculator: incredibly useful, but you still need to sanity-check whether the answer makes sense. AI coding tools are the same. Incredibly useful. Just never blindly trustworthy.