This is my third annual prompt collection, following last year's edition and the original 2024 list. The models have gotten dramatically better, which means the prompts have evolved too. Many of the tricks from 2024 are now unnecessary because the models handle ambiguity better. But some new patterns have emerged that unlock capabilities the older models didn't have.

Everything here is tested against Claude Opus 4, GPT-5, and Gemini 2 Pro. I'll note where a prompt works better on a specific model.

The 2026 System Prompt

The biggest change from last year: you can be much more concise. Models in 2026 follow instructions better, so you don't need to over-specify.

Senior developer on my codebase. Read files before editing.
When uncertain, ask. No apologies. No obvious comments.
Prefer simple solutions with fewer abstractions.
Run tests after changes when possible.

That last line is new. Claude Opus 4 with tool use will actually run your test suite if you tell it to. This closes the feedback loop and catches errors before you see them.

The "Teach Me" Prompt

New for 2026. When you encounter unfamiliar code or concepts:

I'm reading [file/concept] and I don't fully understand it.
Explain it to me as a developer who knows [language] but not
this specific pattern. Start with what it does, then how it
works, then why this approach was chosen over alternatives.

The "what, how, why" structure produces consistently better explanations than "explain this code." It forces the model to start concrete and then build to the reasoning, which is the order that actually helps you learn.

The Context-Aware Debug Prompt

Bug: [description]
Error: [paste error if any]

Before suggesting a fix, do this:
1. Read the failing function and its callers
2. Check the test file for this module
3. Read the most recent git changes to this file

Then tell me: what changed recently that could cause this?

This works specifically with agentic tools like Claude Code that can actually read files and check git history. The key insight is making the model investigate before diagnosing. Without steps 1-3, you get generic fixes. With them, you get targeted fixes based on actual context.
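If you're driving this through an API rather than a chat window, the template is easy to fill programmatically. A minimal sketch (the helper name and structure are my own, not from any tool):

```typescript
// Hypothetical helper that fills in the debug template for API use.
// The optional error text is included only when present.
function buildDebugPrompt(bug: string, error?: string): string {
  const lines = [
    `Bug: ${bug}`,
    error ? `Error: ${error}` : null,
    "",
    "Before suggesting a fix, do this:",
    "1. Read the failing function and its callers",
    "2. Check the test file for this module",
    "3. Read the most recent git changes to this file",
    "",
    "Then tell me: what changed recently that could cause this?",
  ];
  // Drop the error line entirely when there is no error to paste.
  return lines.filter((l): l is string => l !== null).join("\n");
}
```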

The Migration Prompt (Claude Opus 4 Specific)

Claude Opus 4 handles multi-step tasks better than any other model. This prompt takes advantage of that:

Migrate [file] from [old pattern] to [new pattern].

Rules:
- Change only what's necessary for the migration
- Keep all existing tests passing
- If a test needs updating, update it to test the same behavior
- After all changes, run the test suite and fix any failures
- Show me a summary of every file you changed and why

The "run the test suite and fix any failures" instruction is the game changer. On Claude Code, this creates a tight loop where the model makes changes, runs tests, observes failures, and self-corrects. This pattern alone has saved me hours per week.
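The loop itself is simple to picture. Here's a sketch of the change-test-fix cycle; `runTests` and `proposeFix` are hypothetical stand-ins for the real tool calls an agent makes:

```typescript
// A minimal sketch of the self-correction loop an agentic tool runs.
// `runTests` and `proposeFix` are hypothetical stand-ins for tool calls.
interface TestResult {
  passed: boolean;
  failures: string[];
}

function selfCorrect(
  runTests: () => TestResult,
  proposeFix: (failures: string[]) => void,
  maxAttempts = 3
): TestResult {
  let result = runTests();
  let attempts = 0;
  while (!result.passed && attempts < maxAttempts) {
    proposeFix(result.failures); // model edits files based on failures
    result = runTests();         // re-run to observe the effect
    attempts++;
  }
  return result;
}
```

The `maxAttempts` cap matters: without it, a model stuck on a genuinely hard failure will loop indefinitely.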

The Architecture Decision Record Prompt

Help me write an ADR for: [decision]

Context: [what prompted this decision]
Options considered: [list them]
Constraints: [technical, business, timeline]

Write in this format:
- Status: [proposed/accepted/deprecated]
- Context: [2-3 sentences]
- Decision: [1 sentence]
- Consequences: [positive and negative]
- Alternatives rejected: [with brief reasons]

I write an ADR for every significant technical decision now. It takes 2 minutes with this prompt and saves hours of "why did we do it this way?" questions later.

The Code Review Prompt (Updated)

Review this diff for production readiness.

Priority 1 (must fix): security vulnerabilities, data loss risks,
  incorrect business logic
Priority 2 (should fix): missing error handling, untested paths,
  performance issues
Priority 3 (nice to fix): readability, naming, documentation

Only mention Priority 3 issues if there are fewer than 3 issues
in the other categories.

The priority system is the improvement over my 2025 version. Without it, the model gives you 15 equally weighted comments and you have to figure out which ones matter. With priorities, the critical issues are always at the top.

The Prompt That Replaced a Framework

Generate a typed API client from this OpenAPI spec.
Output a single TypeScript file with:
- A function for each endpoint
- Request/response types inferred from the spec
- Error handling that throws typed errors
- No external dependencies except fetch

I used to use code generators for this. Now I just paste the spec into Claude and get a better result in 10 seconds. The "no external dependencies" constraint keeps it simple, and the typed errors are more useful than what most generators produce.
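For reference, the shape of output this prompt produces looks roughly like the following. The endpoint, types, and names here are invented for illustration, not taken from a real spec:

```typescript
// Illustrative output shape: one typed function per endpoint, a typed
// error class, and no dependencies beyond fetch. Names are invented.
interface User {
  id: string;
  email: string;
}

class ApiError extends Error {
  status: number;
  body: unknown;
  constructor(status: number, body: unknown) {
    super(`API error ${status}`);
    this.status = status;
    this.body = body;
  }
}

async function getUser(baseUrl: string, id: string): Promise<User> {
  const res = await fetch(`${baseUrl}/users/${encodeURIComponent(id)}`);
  if (!res.ok) {
    // Attach whatever body the server returned; fall back to null.
    throw new ApiError(res.status, await res.json().catch(() => null));
  }
  return (await res.json()) as User;
}
```

Catching `ApiError` and branching on `status` is what makes the typed errors more useful than a generator's generic throw.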

The Prompt for Prompts

Meta, but genuinely useful:

I need to create a prompt for [task]. The prompt will be used
with [model] in [context - chat/agentic/API].

Requirements for the prompt:
- It should produce consistent results across runs
- The output should be in [format]
- Common failure modes to prevent: [list any you've seen]

Draft the prompt. Then critique it for ambiguity, missing
constraints, and edge cases. Then give me the final version.

Having the model draft and then self-critique produces better prompts than asking for a final version directly. The self-critique step catches ambiguities you wouldn't have noticed.
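The draft-critique-final flow is just three sequential model calls, so it's easy to automate. A sketch, where `complete` is a hypothetical stand-in for whatever model API you use:

```typescript
// Sketch of the draft -> critique -> final flow as three sequential
// calls. `complete` is a hypothetical stand-in for a real model API.
type Complete = (prompt: string) => string;

function refinePrompt(complete: Complete, task: string): string {
  const draft = complete(`Draft a prompt for this task: ${task}`);
  const critique = complete(
    `Critique this prompt for ambiguity, missing constraints, and edge cases:\n${draft}`
  );
  return complete(
    `Rewrite the prompt, addressing this critique.\nPrompt:\n${draft}\nCritique:\n${critique}`
  );
}
```

Keeping the critique as a separate call, rather than folding it into one request, is what forces the model to actually read its own draft before finalizing.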

What Changed from 2025

Three trends stand out:

- Prompts are shorter: models need less guidance.
- Tool-use instructions are standard: telling the model to actually run tests or read files.
- Self-correction loops work reliably: letting the model try, fail, and fix.

For more on the underlying techniques that make these work, Anthropic's prompt engineering documentation is worth reading. And if you want more granular tricks beyond these templates, I compiled those in prompt tricks that actually work. If you're still writing 2024-style prompts with extensive guardrails and examples, you're over-engineering. The models have caught up.