I keep a running file of prompts that actually work. Not the theoretical "best practices" you find in prompt engineering guides (though Anthropic's docs are genuinely useful), but the exact prompts I paste into Claude, GPT, and Gemini on a daily basis. Here are the ones that survived the cut for 2025, building on my 2024 collection.
The System Prompt That Changed Everything
I add this to the beginning of every coding session. It sounds simple, but it eliminates about 80% of the frustrating behaviors:
You are a senior developer working on my codebase.
Before writing any code, read the relevant files first.
When you make changes, explain what you changed and why.
If something is ambiguous, ask me instead of guessing.
Never apologize. Never add comments that just restate the code.
Prefer simple solutions. Fewer files, fewer abstractions.
The "never apologize" line saves you from those annoying "I apologize for the confusion" responses that waste tokens and add nothing. The "ask instead of guessing" line prevents the model from confidently writing code based on wrong assumptions.
Debugging Prompts
When something breaks, this prompt gets me to the answer faster than any other approach:
Here's the error: [paste error]
Here's the relevant code: [paste code]
Think through this step by step:
1. What is the immediate cause of this error?
2. What are the possible root causes?
3. What's the simplest fix that doesn't break anything else?
The numbered steps force chain-of-thought reasoning. Without them, the model often jumps to the first plausible fix without considering side effects.
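To make the three steps concrete, here's a hypothetical example of the kind of analysis the prompt produces, written out as comments around a fix (the function and data are invented for illustration):

```python
# Hypothetical error: KeyError: 'email' while formatting user records.

def build_contact(user: dict) -> str:
    # Step 1 - immediate cause: user["email"] raises KeyError when the key is absent.
    # Step 2 - possible root cause: older records were imported without an email field.
    # Step 3 - simplest fix that breaks nothing else: fall back with .get()
    #          instead of restructuring the whole import pipeline.
    email = user.get("email", "no-email-on-file")
    return f'{user["name"]} <{email}>'

print(build_contact({"name": "Ada"}))  # → Ada <no-email-on-file>
```

Notice that the step-2 answer is what stops the model from "fixing" the symptom in five different call sites.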
Code Review Prompt
This one I use before every PR:
Review this code for:
1. Bugs or logic errors
2. Security issues (injection, auth bypass, data exposure)
3. Performance problems (N+1 queries, unnecessary loops, missing indexes)
4. Missing edge cases
5. Violations of the patterns used elsewhere in this codebase
Be specific. Reference line numbers. Skip style nitpicks.
The "skip style nitpicks" line is crucial. Without it, you get ten comments about variable naming and zero comments about the SQL injection on line 47.
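For reference, this is the shape of injection bug the review prompt should surface, shown as a minimal self-contained sketch with sqlite3 (the table and query are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    # Vulnerable version: f"SELECT * FROM users WHERE name = '{name}'"
    # A crafted input like "' OR '1'='1" would match every row.
    # Parameterized query - the driver handles escaping:
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))        # → [('alice',)]
print(find_user("' OR '1'='1"))  # → [] (injection attempt matches nothing)
```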
Architecture Decision Prompt
I need to decide between [option A] and [option B] for [specific task].
Context:
- Current stack: [your stack]
- Scale: [expected load/data size]
- Team: [size and experience]
- Timeline: [deadline]
Compare both options on: complexity, performance, maintainability,
and operational cost. Give me a clear recommendation with reasoning.
This works because it gives the model enough context to make a grounded recommendation instead of a generic "it depends" answer.
Test Generation Prompt
Write tests for this function: [paste function]
Requirements:
- Test the happy path
- Test edge cases: null input, empty arrays, boundary values
- Test error conditions
- Use the existing test patterns in this project (jest/vitest/pytest)
- Each test should test ONE behavior with a descriptive name
The "descriptive name" requirement is the secret weapon. It forces the model to think about what each test actually verifies, which results in better coverage than just asking for "comprehensive tests."
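Here's roughly what the one-behavior-per-test output looks like, using an invented clamp() function as the subject (pytest would discover the test_ names automatically; they're called directly here so the sketch runs standalone):

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

def test_returns_value_unchanged_when_inside_range():
    assert clamp(5, 0, 10) == 5

def test_clamps_to_lower_bound_when_value_is_below_range():
    assert clamp(-3, 0, 10) == 0

def test_clamps_to_upper_bound_when_value_is_above_range():
    assert clamp(99, 0, 10) == 10

test_returns_value_unchanged_when_inside_range()
test_clamps_to_lower_bound_when_value_is_below_range()
test_clamps_to_upper_bound_when_value_is_above_range()
```

Each test name reads as a sentence about one behavior, so a failing test tells you what broke without opening the file.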
The Refactoring Chain
For large refactors, I use a three-prompt chain:
Prompt 1 (Analysis): "Read this module and list every function, what it does, and what depends on it. Don't change anything yet."
Prompt 2 (Plan): "Based on that analysis, propose a refactoring plan that [your goal]. Show me the plan before writing any code."
Prompt 3 (Execute): "Execute step 1 of the plan. Show me the changes, then wait for approval before moving to step 2."
Breaking it into analysis, planning, and execution prevents the model from making sweeping changes that break things. Each step is reviewable.
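If you drive the chain programmatically, it's just a loop with a review gate between steps. A minimal sketch, where send_to_model is a hypothetical stand-in for whatever API client you use:

```python
# Hypothetical driver for the analysis → plan → execute chain.

def send_to_model(prompt: str) -> str:
    # Placeholder - swap in your actual API call here.
    return f"(model response to: {prompt[:30]}...)"

chain = [
    "Read this module and list every function and its dependents. Don't change anything yet.",
    "Based on that analysis, propose a refactoring plan. Show the plan before writing code.",
    "Execute step 1 of the plan, then wait for approval before moving to step 2.",
]

transcript = []
for prompt in chain:
    reply = send_to_model(prompt)
    transcript.append((prompt, reply))
    # In real use, you review `reply` here before sending the next prompt.

print(len(transcript))  # → 3
```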
Documentation Prompt
Write documentation for this [API/module/function].
Include:
- One sentence description
- Parameters with types and descriptions
- Return value
- One realistic usage example
- Any gotchas or common mistakes
Write for a developer who knows the language but not this codebase.
Keep it under [N] lines.
The line limit is important. Without it, AI documentation becomes a wall of text that nobody reads.
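Applied to a function, the template produces something like this docstring (the retry() helper is invented for illustration):

```python
def retry(fn, attempts=3):
    """Call fn, retrying on failure up to `attempts` times.

    Parameters:
        fn (callable): zero-argument function to invoke.
        attempts (int): total tries before giving up (default 3).

    Returns:
        Whatever fn returns on its first successful call.

    Example:
        >>> retry(lambda: 42)
        42

    Gotcha: retries on *every* Exception, so only wrap idempotent
    operations -- a retried double-charge is worse than a failure.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
```

One sentence, parameters, return value, one example, one gotcha: everything the template asks for, and short enough to actually get read.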
The Meta-Tip
The single most effective thing you can do for better prompt results: include examples of what you want the output to look like. One concrete example beats ten paragraphs of description. If you want a function written in a specific style, show it a function written in that style first. Few-shot examples are still the most reliable technique, even with the best models.
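In practice, a few-shot prompt is just the example pasted ahead of the task. A minimal sketch of assembling one in code (the style rule and function names are invented for illustration):

```python
# One concrete example of the desired style, followed by the actual task.

example = '''\
def fetch_user(user_id: int) -> "User":
    """Return the user or raise NotFoundError."""
    ...
'''

prompt = (
    "Write functions in this style -- typed signatures, one-line docstrings:\n\n"
    f"Example:\n{example}\n"
    "Now write fetch_order(order_id: int) in the same style."
)

print(prompt)
```

The model sees the style once and imitates it, which is far more reliable than describing the style in prose.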
I'll keep updating this list as new patterns emerge. The models keep getting better, and the prompts that work keep evolving. If you maintain your own prompt library, I shared tips on organizing one in building a developer prompt library. But the fundamentals stay the same: be specific, show examples, and break complex tasks into steps.