System prompts are the most underrated part of working with AI models via the API. Most developers use one-line system prompts or skip them entirely. I've spent months refining mine, and the quality difference is dramatic. Here are the system prompts I use in production, with annotations explaining why each section matters.

The code review prompt

You are a senior software engineer conducting a code review.
Your goal is to find real problems, not style preferences.

Focus on these categories in order of priority:
1. Bugs: Logic errors, off-by-one, null/undefined access, race conditions
2. Security: Injection, auth bypass, data exposure, insecure defaults
3. Performance: N+1 queries, unnecessary allocations, missing indexes
4. Maintainability: Only flag if it would genuinely confuse a new team member

Rules:
- If the code is fine, say "No significant issues found." Do not invent problems.
- For each issue, quote the specific line(s) and explain the concrete consequence.
- Suggest a fix for each issue.
- Do not comment on formatting, naming conventions, or stylistic choices.
- Maximum 5 issues per review. If more exist, list the 5 most critical.

The "do not invent problems" rule is doing critical work here. Without it, models will always find something to criticize, even in clean code. The maximum of 5 issues prevents the model from overwhelming you with minor observations. The priority ordering ensures bugs and security issues surface before lower-priority maintainability notes.
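To put this prompt to work, you pass it as the system message of a chat-style request. A minimal sketch, assuming a generic chat-completions payload; the model name is a placeholder, and the prompt constant is abbreviated here:

```python
# Sketch of wiring the review prompt into a chat-style request.
# The payload shape follows common chat APIs; "your-model-name" is a placeholder.

CODE_REVIEW_PROMPT = """\
You are a senior software engineer conducting a code review.
Your goal is to find real problems, not style preferences.
(Abbreviated -- paste the full rules from the prompt above here.)
"""

def build_review_request(code: str, model: str = "your-model-name") -> dict:
    """Assemble the request payload; the system prompt carries all the rules."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": CODE_REVIEW_PROMPT},
            {"role": "user", "content": f"Review this code:\n\n{code}"},
        ],
    }
```

Keeping the rules in the system message, and only the code in the user message, means the review criteria stay fixed across every request.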

The documentation writer

You are a technical writer creating documentation for developers.

Style rules:
- Write in second person ("you") not third person
- Use short sentences. Maximum 20 words per sentence.
- Lead with what the reader needs to do, not background context
- Every section must have a code example
- No filler phrases: "it's important to note," "as we can see," "let's explore"

Structure:
- Start with a one-sentence summary of what this does
- Then "Quick Start" with minimal working example
- Then detailed sections as needed
- End with "Common Errors" section

Assume the reader is a mid-level developer who is familiar with the
language but new to this specific tool/library.

The style rules make the output immediately usable without heavy editing. The "no filler phrases" rule alone cuts a surprising amount of fluff from the output. The structure ensures every doc follows a consistent format that readers can navigate quickly. The audience definition ("mid-level developer") prevents the model from either over-explaining basic concepts or assuming expert knowledge.

The data analysis prompt

You are a data analyst. Given data, provide insights.

Rules:
- Start with the single most important finding
- Support every claim with a specific number from the data
- Distinguish between correlation and causation explicitly
- If the data is insufficient to answer a question, say so
- Present findings as bullet points, not paragraphs
- Include confidence level (high/medium/low) for each finding
- Suggest 2-3 follow-up questions the data raises but doesn't answer

The confidence level requirement is something I added after getting burned by a model presenting a weak correlation as a definitive finding. Forcing the model to self-assess its confidence level surfaces uncertainty that would otherwise be hidden in confident-sounding prose. The follow-up questions section consistently produces valuable directions for further analysis that I wouldn't have thought of myself.
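The confidence labels also make the findings machine-sortable. A sketch that buckets bullets by confidence; the "(confidence: high)" tag format is a convention I would add to the prompt, not something the model produces on its own:

```python
import re

def split_by_confidence(findings: str) -> dict[str, list[str]]:
    """Group bullet-point findings by their "(confidence: ...)" tag.

    Assumes each finding is one line ending in "(confidence: high|medium|low)";
    lines without a tag are ignored.
    """
    buckets = {"high": [], "medium": [], "low": []}
    for line in findings.splitlines():
        match = re.search(r"\(confidence:\s*(high|medium|low)\)", line, re.I)
        if match:
            buckets[match.group(1).lower()].append(line.strip("- ").strip())
    return buckets
```

Sorting by confidence up front lets you act on the high-confidence findings and treat the low-confidence ones as hypotheses to verify.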

The creative writing assistant

You are a writing editor helping improve technical blog posts.

Your job is to make the writing clearer and more engaging
while preserving the author's voice and opinions.

Rules:
- Never add cliches or filler
- Never use these words: "delve," "crucial," "landscape," "leverage,"
  "robust," "seamless," "utilize," "in conclusion"
- Vary sentence length. Mix short punchy sentences with longer explanatory ones.
- Cut any sentence that doesn't add new information
- Preserve all technical accuracy. If you're unsure about a technical
  claim, flag it rather than changing it.
- Keep contractions. This is casual writing, not academic.
- Never add content. Only edit and cut.

The banned word list is essential. AI models have a vocabulary of crutch words that they default to constantly. Banning them forces more creative and natural word choices. The "never add content" rule prevents the model from inserting its own opinions or examples into your writing. It should sharpen what's there, not inject new material.
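The banned list doubles as an automated post-check on the edited text. A quick substring scan, which will also catch inflected forms like "delving" (and, as a known limitation, words embedded in longer words):

```python
BANNED = ["delve", "crucial", "landscape", "leverage",
          "robust", "seamless", "utilize", "in conclusion"]

def banned_words_used(text: str) -> list[str]:
    """Return the banned words that appear in the text, in list order."""
    lowered = text.lower()
    return [word for word in BANNED if word in lowered]
```

An empty return means the edit passed; anything else goes back for another pass.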

What makes a system prompt effective

Constraints beat instructions. "Don't do X" is more effective than "try to avoid X." Be specific about what you don't want, because models default to doing everything unless told not to.

Define the output format explicitly. If you want bullet points, say bullet points. If you want a maximum length, specify it. Ambiguity in the expected format leads to inconsistent results.

Include failure handling. Tell the model what to do when it doesn't know the answer or when the input is ambiguous. Without this, models either make things up or give overly cautious non-answers.

Test with adversarial inputs. Your system prompt should handle edge cases gracefully. What happens when the input is empty? What happens when the user asks something outside the prompt's scope? Build these scenarios into your testing.
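One way to build those scenarios into testing is a small edge-case suite that wraps whatever function calls your model. A sketch; `call_model` is a placeholder for your real API call, and the case list is illustrative:

```python
# Edge cases every system prompt should survive. The labels are my own.
EDGE_CASES = [
    ("", "empty input"),
    ("   \n\t  ", "whitespace only"),
    ("What's the weather today?", "out-of-scope question"),
    ("x" * 50_000, "oversized input"),
]

def run_adversarial_suite(call_model) -> list[tuple[str, bool]]:
    """Record which edge cases yield a non-empty response without raising."""
    results = []
    for prompt, label in EDGE_CASES:
        try:
            reply = call_model(prompt)
            results.append((label, bool(reply and reply.strip())))
        except Exception:
            results.append((label, False))
    return results
```

A `False` result isn't automatically a failure; for empty input, a refusal may be exactly what you want. The point is that each case has a deliberate, tested behavior.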

Iterate based on real outputs. Every system prompt I use has been through 20+ revisions. I keep a log of cases where the output was wrong or suboptimal, and I adjust the prompt to address each one. Good system prompts are evolved, not designed.
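The revision log doesn't need tooling; an append-only JSONL file is enough. A minimal sketch, with field names that are my own convention:

```python
import json
from datetime import date

def log_failure(path: str, input_text: str, bad_output: str, prompt_fix: str) -> None:
    """Append one failure case to a JSONL log, paired with the prompt change that addressed it."""
    entry = {
        "date": date.today().isoformat(),
        "input": input_text,
        "bad_output": bad_output,
        "prompt_fix": prompt_fix,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Pairing each bad output with the rule you added for it gives you a changelog for the prompt, and a regression set to re-test after every revision.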