After nearly a year of using LLMs daily for development work, I have noticed a clear pattern: the quality of the output depends heavily on the quality of the input. This is not a profound insight, but the specific techniques that work for coding prompts are worth documenting. Here are the patterns I use consistently.
Provide Context First, Then the Task
The single biggest improvement to my prompts was structuring them as context, then instruction. Most developers do it backwards - they ask the question first and provide context as an afterthought.
// Weak prompt
"How do I handle errors in this function?"
// Strong prompt
"I have a Go HTTP handler that calls three external services
sequentially. If any service fails, I need to return a
descriptive error to the client while logging the full error
internally. The services use different error types.
How should I structure the error handling?"
The second prompt gives the model everything it needs to produce a useful answer: the language, the architecture, the constraint (sequential calls), the requirement (descriptive client errors plus internal logging), and the complication (different error types).
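To see why that detail matters, here is one shape the answer could take - sketched in TypeScript rather than Go, since TypeScript appears in later prompts in this post. The service steps and names are hypothetical; the point is the structure the prompt's constraints imply: wrap each call, log the full error internally, surface a short descriptive message to the client.

```typescript
// Hypothetical sketch: wrap each sequential external call so the client
// sees a short, descriptive error while the full error is logged internally.
class ClientError extends Error {
  constructor(message: string, readonly status = 502) {
    super(message);
  }
}

async function callStep<T>(step: string, fn: () => Promise<T>): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    console.error(`step "${step}" failed:`, err); // full detail stays internal
    throw new ClientError(`upstream step "${step}" is unavailable`);
  }
}

// Three sequential calls; the first failure short-circuits the chain,
// regardless of what error type each underlying service throws.
async function handle(): Promise<string> {
  const a = await callStep("auth", async () => "token");
  const b = await callStep("profile", async () => `profile for ${a}`);
  return callStep("billing", async () => `${b} + invoice`);
}
```

A handler would catch `ClientError` at the top level and send `err.message` with `err.status`, keeping the internal log as the only place the raw service errors appear.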
Specify the Output Format
If you want code, say so. If you want an explanation, say so. If you want both, specify the order. LLMs are people-pleasers by default - they will give you a bit of everything if you do not tell them what you actually want.
"Write a TypeScript function that validates an email address.
Return only the code with JSDoc comments. No explanation needed."
This saves you from scrolling past paragraphs of explanation to find the three lines of code you actually wanted.
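The response to a prompt like that tends to look something like this - a minimal sketch, using a deliberately simple pattern rather than full RFC 5322 validation:

```typescript
/**
 * Checks whether a string looks like a valid email address.
 * The pattern is intentionally simple (one "@", no whitespace,
 * a dot somewhere in the domain), not a full RFC 5322 parser.
 *
 * @param email - The candidate email address.
 * @returns True if the address matches the basic shape.
 */
export function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
```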
Use Examples as Specifications
When I need a data transformation function, providing input/output examples works better than describing the transformation in words.
"Write a function that transforms this input:
{ users: [{ name: 'Alice', role: 'admin' }, { name: 'Bob', role: 'user' }] }
Into this output:
{ admin: ['Alice'], user: ['Bob'] }
The function should handle any number of roles and users."
This approach eliminates ambiguity. The model can infer the pattern from the example and generalize it, which is often more accurate than trying to follow a verbal description of the same transformation.
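From that single input/output pair, the generalized function falls out almost mechanically. A sketch (the function name is my own choice, not part of the prompt):

```typescript
type User = { name: string; role: string };

// Groups user names by role, generalizing the pattern in the example:
// any number of roles, any number of users per role.
function groupByRole(input: { users: User[] }): Record<string, string[]> {
  const out: Record<string, string[]> = {};
  for (const { name, role } of input.users) {
    (out[role] ??= []).push(name);
  }
  return out;
}
```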
Constrain the Solution Space
Without constraints, LLMs default to the most common solution they have seen in training data. This is often not what you want for your specific situation.
"Implement rate limiting for this Express API.
Constraints:
- Must work across multiple server instances (no in-memory storage)
- Use Redis as the backing store
- Sliding window algorithm, not fixed window
- Must return standard rate limit headers (X-RateLimit-Limit, etc.)
- No external rate limiting libraries"
Each constraint eliminates a class of solutions. Without the Redis constraint, you might get an in-memory implementation. Without the "no libraries" constraint, you might get a one-liner using express-rate-limit. Together, the constraints steer the model toward the solution you actually need.
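As a sketch of what the sliding-window core of that answer might look like, here is the algorithm with an in-memory Map standing in for Redis. In the real answer, the per-key timestamp log would live in a Redis sorted set (ZADD, ZREMRANGEBYSCORE, ZCARD) so that state is shared across server instances; everything else below is a simplified stand-in.

```typescript
// Sliding-window-log rate limiter. A Map stands in for Redis here purely
// to illustrate the algorithm; the prompt's constraints rule this out in
// production because in-memory state is per-instance.
class SlidingWindowLimiter {
  private log = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  /** Returns whether this request is allowed, plus the remaining quota. */
  check(key: string, now: number): { allowed: boolean; remaining: number } {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const hits = (this.log.get(key) ?? []).filter((t) => t > cutoff);
    if (hits.length >= this.limit) {
      this.log.set(key, hits);
      return { allowed: false, remaining: 0 };
    }
    hits.push(now);
    this.log.set(key, hits);
    return { allowed: true, remaining: this.limit - hits.length };
  }
}
```

An Express middleware would call check() with the client's key, then set X-RateLimit-Limit and X-RateLimit-Remaining from the configured limit and the returned remaining count.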
Ask for Alternatives
When making design decisions, I ask the model to present multiple approaches with trade-offs instead of a single recommendation.
"I need to implement real-time updates in a web app.
Present three approaches (WebSockets, SSE, and polling),
with pros, cons, and when each is the best choice.
My constraints: behind a corporate proxy, thousands of
concurrent users, updates every 5-30 seconds."
This gives you a decision framework instead of a single opinion. The model is surprisingly good at presenting balanced comparisons when you explicitly ask for them.
Chain Prompts for Complex Tasks
For anything non-trivial, breaking the task into steps produces better results than one large prompt. Instead of asking "build me an authentication system," I chain:
- "Design the database schema for user authentication with email/password and OAuth support."
- "Based on this schema, write the registration endpoint with input validation."
- "Now write the login endpoint with rate limiting."
- "Add the OAuth callback handler for Google."
- "Write middleware that validates JWT tokens from these endpoints."
Each prompt builds on the previous output, and you can review and correct at each step. One large prompt would likely produce an implementation with inconsistencies between components.
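One plausible shape for the final step of that chain, sketched with Node's built-in crypto module: HS256 verification only, with a placeholder secret and a signing helper so the round trip is visible. A production answer would more likely use a vetted JWT library; treat this as an illustration of the step, not a recommendation.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "replace-me"; // placeholder; load from config in practice

// Verifies an HS256 JWT and returns its claims, or null if invalid/expired.
function verifyJwt(token: string): Record<string, unknown> | null {
  const [header, payload, sig] = token.split(".");
  if (!header || !payload || !sig) return null;
  const expected = createHmac("sha256", SECRET)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Length check first: timingSafeEqual throws on unequal lengths.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (typeof claims.exp === "number" && claims.exp * 1000 < Date.now()) {
    return null; // expired
  }
  return claims;
}

// Mints a token so the verification above can be demonstrated end to end.
function signJwt(claims: Record<string, unknown>): string {
  const enc = (o: object) =>
    Buffer.from(JSON.stringify(o)).toString("base64url");
  const body = `${enc({ alg: "HS256", typ: "JWT" })}.${enc(claims)}`;
  const sig = createHmac("sha256", SECRET).update(body).digest("base64url");
  return `${body}.${sig}`;
}
```

In Express, a middleware would read the Authorization header, call verifyJwt, attach the claims to the request on success, and respond 401 on null.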
Include Your Standards
If you have coding standards, tell the model. It cannot read your ESLint config or your team's style guide.
"Follow these conventions:
- Use named exports, not default exports
- Error-first callback pattern
- All functions must have JSDoc comments
- Prefer early returns over nested if statements
- Use const over let wherever possible"
This seems obvious, but I see developers complain about AI-generated code style when they never specified their preferences. The model will follow whatever conventions you give it - you just need to state them.
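To make that concrete, here is a hypothetical helper written the way a model tends to respond once given the conventions above - named export, JSDoc, early returns, and const throughout (the error-first callback convention is omitted since this function is synchronous):

```typescript
/**
 * Returns a display label for a user name, falling back to "anonymous"
 * for empty or whitespace-only input.
 *
 * @param name - The user's name, possibly empty.
 * @returns The trimmed name, or "anonymous".
 */
export function displayLabel(name: string): string {
  if (!name) return "anonymous";
  const trimmed = name.trim();
  if (trimmed.length === 0) return "anonymous";
  return trimmed;
}
```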
The Meta-Skill
Prompt engineering for code is really about being precise about requirements. If you can write a clear prompt, you can write a clear specification. If you can write a clear specification, you can build better software - with or without AI. The skill transfers. OpenAI's prompt engineering guide covers some of the same ground from a more theoretical angle. The developers who are best at prompting are the ones who were already best at communicating technical requirements clearly.