I've been using ChatGPT and the OpenAI API daily for about four months now. In that time, I've gone from writing lazy one-line prompts to crafting careful instructions that get dramatically better results. Here are the techniques that made the biggest difference, with real examples.
1. Chain of Thought: "Think Step by Step"
This is the single most impactful trick. When you ask an LLM to solve a problem directly, it often jumps to a conclusion and gets it wrong. When you tell it to think through the problem step by step, the accuracy improves significantly.
Before: "What's the time complexity of this function?" followed by my code. Result: it often gives a quick answer that's wrong for non-obvious cases.
After: "Analyze the time complexity of this function. Think through it step by step. Identify each loop and recursive call, determine how many times each executes, and then combine them." Result: it actually traces through the logic and catches nested loops or hidden recursion.
This works because LLMs generate tokens sequentially. When you force them to "show their work," each intermediate step provides context for the next token. It's like how writing out your math actually helps you think, not just communicate.
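The before/after above can be captured in a tiny reusable wrapper. This is a minimal sketch; the helper name and the exact instruction wording are my own, not a standard API:

```python
def chain_of_thought(question: str) -> str:
    """Wrap a bare question with explicit step-by-step instructions.

    Hypothetical helper: the wording is one illustration of the
    technique, not the only phrasing that works.
    """
    return (
        f"{question}\n\n"
        "Think through this step by step. Break the problem into parts, "
        "work through each part in order, and only then state your "
        "final answer."
    )

prompt = chain_of_thought("What's the time complexity of this function?")
```

The point is less the specific wording than making the "show your work" instruction a fixed part of your tooling instead of something you retype each time.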
2. Few-Shot Examples: Show, Don't Tell
Instead of describing what you want, show the model 2-3 examples of input/output pairs. This is absurdly effective and I use it constantly.
Before: "Convert these function names from camelCase to snake_case and add type hints."
After: "Convert function signatures following this pattern:" then I include two or three examples of the exact transformation I want, followed by "Now convert these:" and the actual functions.
The model picks up on the pattern and replicates it almost perfectly. It catches nuances that are hard to describe in words but obvious from examples. I use this for code transformations, data formatting, and any task where the output format matters.
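A few-shot prompt is mechanical enough to generate from data. Here is a sketch of a builder, assuming (input, output) example pairs; the function name and the camelCase-to-snake_case examples are my own illustrations:

```python
def few_shot_prompt(
    instruction: str,
    examples: list[tuple[str, str]],
    query: str,
) -> str:
    """Build a few-shot prompt from (input, output) example pairs.

    Ends with a bare "Output:" so the model's natural continuation
    is the transformed result.
    """
    parts = [instruction, ""]
    for given, want in examples:
        parts.append(f"Input: {given}")
        parts.append(f"Output: {want}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Convert function signatures following this pattern:",
    [
        ("def getUserName(id):", "def get_user_name(id: int) -> str:"),
        ("def parseJsonFile(path):", "def parse_json_file(path: str) -> dict:"),
    ],
    "def fetchAccountBalance(userId):",
)
```

Keeping the examples as data rather than baked-in prose also makes it easy to swap them per task.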
3. System Prompts: Set the Character
When using the API, the system prompt sets the model's behavior for the entire conversation. When using ChatGPT directly, putting your instructions at the beginning of the conversation has a similar effect.
My go-to system prompt for code review: "You are a senior software engineer doing a code review. Focus on bugs, security issues, and performance problems. Do not comment on style or formatting. Only comment when you find a genuine issue. Be specific and reference line numbers."
Without this, the model tries to be comprehensive and helpful, which means it flags every possible improvement including things that don't matter. The system prompt constrains its behavior to what I actually care about.
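In chat-style APIs this lands as a message list where the system message comes first. A sketch, assuming the common role/content message shape used by chat-completion endpoints (the function name and the final call you'd make with your client are illustrative, not prescribed):

```python
CODE_REVIEW_SYSTEM = (
    "You are a senior software engineer doing a code review. "
    "Focus on bugs, security issues, and performance problems. "
    "Do not comment on style or formatting. "
    "Only comment when you find a genuine issue. "
    "Be specific and reference line numbers."
)

def review_messages(diff: str) -> list[dict]:
    """Build the message list for a chat-completion call.

    The system message constrains behavior for the whole
    conversation; the user message carries the code to review.
    """
    return [
        {"role": "system", "content": CODE_REVIEW_SYSTEM},
        {"role": "user", "content": f"Review this code:\n\n{diff}"},
    ]

# Then pass the result as the `messages` argument to your client's
# chat-completion call.
```

Defining the system prompt as a named constant also gives you one obvious place to iterate on it (see the refinement section below).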
4. Constraints and Formats
Tell the model exactly what format you want the response in. Don't hope it'll figure it out.
"Respond with valid JSON in this exact format: {"field": "description"}." Or: "Give me a numbered list, maximum 5 items, each under 20 words." Or: "Write the code in TypeScript, use async/await not callbacks, include error handling."
Every constraint you add narrows the output space and increases the chance of getting what you actually want. I've found that being overly specific about format is almost never a problem, while being vague about it almost always is.
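Format constraints pay off most when you validate the reply on your side, so a malformed response triggers a re-prompt instead of silently corrupting your pipeline. A sketch, where the required field names are hypothetical placeholders:

```python
import json

# Hypothetical fields for a code-review reply; use whatever your
# prompt actually asks for.
REQUIRED_KEYS = {"summary", "severity"}

def parse_structured_reply(text: str) -> dict:
    """Parse a model reply that was instructed to be valid JSON.

    Raises ValueError (or json.JSONDecodeError) on bad output, so
    the caller can re-prompt instead of using a broken reply.
    """
    data = json.loads(text)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply missing keys: {sorted(missing)}")
    return data

reply = '{"summary": "off-by-one in loop bound", "severity": "high"}'
result = parse_structured_reply(reply)
```

The constraint in the prompt and the validator in the code are two halves of the same contract.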
5. Role Assignment
"You are a PostgreSQL performance expert" gets you better database advice than just asking a database question. "You are a security auditor reviewing code for vulnerabilities" gets you better security analysis than asking "are there any security issues?"
It sounds silly. It feels like you're playing pretend with a computer. But it works consistently well. A plausible explanation: the training data contains both expert and novice writing on every topic, and assigning a role steers the model toward continuing text the way an expert would.
6. Iterative Refinement
Don't try to get the perfect prompt on the first try. Start simple, see what the model gets wrong, and add constraints to fix those specific issues.
My code review prompt went through about 8 iterations. Each time I noticed a pattern of unhelpful output, I added a line to the prompt to address it. "Don't suggest adding comments to obvious code." "Don't recommend error handling for cases that are already handled upstream." "Ignore variable naming unless the current name is actively misleading."
Your prompts should be living documents. Save them somewhere. Version control them. They're part of your tooling now.
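Treating prompts as versioned files can be as simple as plain text with placeholders. A sketch using the standard library's string.Template; the file name, placeholder names, and directory layout are all hypothetical (the temporary directory here just keeps the example self-contained):

```python
from pathlib import Path
from string import Template
import tempfile

# Hypothetical layout: each prompt lives as a plain-text file under
# version control, with $placeholders filled in at call time.
prompt_text = (
    "You are a senior engineer reviewing a diff.\n"
    "$extra_rules\n"
    "Review:\n"
    "$diff\n"
)

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "code_review.prompt"
    path.write_text(prompt_text)
    template = Template(path.read_text())
    prompt = template.substitute(
        extra_rules="Don't suggest adding comments to obvious code.",
        diff="def f(): pass",
    )
```

Because each refinement is a one-line diff to a text file, your commit history becomes a record of exactly which failure each constraint was added to fix.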
The Meta-Lesson
The real lesson from all of this is that AI tools have a skill curve. People who say "I tried ChatGPT and it wasn't useful" usually asked it a vague question and got a vague answer. The gap between a beginner prompt and an expert prompt is like the gap between a Google search and a targeted Stack Overflow query. Same tool, completely different results.
Invest the time in learning to prompt well. It's a skill that's going to matter for the rest of your career.