Prompt engineering is the practice of designing precise inputs to get better, more reliable outputs from AI language models. The right technique, whether chain-of-thought for reasoning, few-shot for formatting, or role-based for domain expertise, can dramatically improve the quality of your results without switching models.
How to Use Prompt Engineering Techniques
Prompt engineering is about communicating intent clearly to an AI model. Each technique targets a different failure mode — vague outputs, wrong format, hallucination, or shallow reasoning. Choosing the right technique for your task is the first step.
Step 1: Identify Your Goal
Before writing a prompt, ask: Am I extracting data, generating creative content, solving a logic problem, or getting domain expertise? Different goals call for different techniques. Data extraction works best with structured output (JSON mode). Logic problems benefit from chain-of-thought. Domain tasks respond well to role-based prompting.
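The goal-to-technique mapping above can be sketched as a simple lookup. This is an illustrative sketch only; the dictionary keys and function name are assumptions, not part of any library.

```python
# Hypothetical mapping from task goal to a suggested prompting technique,
# following the guidance in Step 1.
GOAL_TO_TECHNIQUE = {
    "data_extraction": "structured_output_json",
    "logic_problem": "chain_of_thought",
    "domain_expertise": "role_based",
    "consistent_formatting": "few_shot",
}

def suggest_technique(goal: str) -> str:
    """Return a suggested technique for a goal, defaulting to plain zero-shot."""
    return GOAL_TO_TECHNIQUE.get(goal, "zero_shot")

print(suggest_technique("logic_problem"))  # chain_of_thought
```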
Step 2: Choose Your Core Technique
Use the category filters to narrow to the right technique family. Reasoning techniques like chain-of-thought work best for multi-step problems. Few-shot techniques are ideal when you need consistent formatting. Role-based techniques unlock domain expertise. Structured output techniques ensure parseable responses for downstream processing.
Step 3: Copy and Adapt the Example
Every technique includes a real example prompt you can copy. Replace the bracketed placeholders with your specific content. The key is to keep the structural elements intact — the role setup, the reasoning instruction, or the format template — while swapping in your actual task.
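One way to keep the structural elements fixed while swapping in your task is to treat the prompt as a template. The template text and field names below are illustrative assumptions, not an example from the cheatsheet itself.

```python
from string import Template

# The structure (role setup, reasoning instruction, format) stays intact;
# only the task-specific placeholders are replaced.
PROMPT_TEMPLATE = Template(
    "You are a $role.\n"
    "Think through the problem step by step, then answer.\n"
    "Task: $task\n"
    "Respond in $output_format."
)

prompt = PROMPT_TEMPLATE.substitute(
    role="senior data analyst",
    task="summarize the Q3 sales trends in this spreadsheet",
    output_format="three bullet points",
)
print(prompt)
```

Keeping placeholders in one place makes it easy to reuse the same structure across tasks without accidentally dropping the role setup or format instruction.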
Step 4: Combine Techniques
The most powerful prompts combine multiple techniques. For example, a role-based system prompt paired with chain-of-thought reasoning and a structured JSON output format. Start with one technique, test it, then layer in additional patterns. Too many instructions in a single prompt can cause the model to lose track of constraints.
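The layering described above can be sketched as a prompt builder. The message format mirrors the common system/user chat structure; the function name, role text, and schema are hypothetical examples, not a fixed recipe.

```python
import json

def build_combined_prompt(task: str) -> list:
    """Layer three techniques: a role-based system prompt,
    a chain-of-thought instruction, and a JSON output schema."""
    schema = {"answer": "<final answer>", "confidence": "<low|medium|high>"}
    system = "You are a meticulous financial analyst."        # role-based
    user = (
        f"{task}\n"
        "Reason step by step before giving your answer.\n"    # chain-of-thought
        f"Return only JSON matching: {json.dumps(schema)}"    # structured output
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_combined_prompt(
    "Is this invoice total consistent with its line items?"
)
```

Start with just the user message, verify the output, then layer in the system role and the JSON schema one at a time so you can tell which addition changed the behavior.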
Step 5: Iterate and Refine
Prompt engineering is empirical — test your prompt, observe failure modes, and refine. If the model skips steps, add "do not skip any steps" or "show all intermediate work." If output format is inconsistent, add a structured template with explicit field names. The "When to Use" notes on each technique card describe the exact scenarios where each pattern excels.
Technique Combinations That Work
For complex analysis: Role-Based + Chain-of-Thought + Critique-and-Revise. For data extraction: Structured Output (JSON) + Few-Shot examples. For coding tasks: Expert Simulation + Plan-and-Execute + Self-Reflection. For research synthesis: RAG Prompting + Citation Prompting + Document Grounding.
FAQ
What is prompt engineering?
Prompt engineering is the practice of designing and refining inputs to AI language models to get more accurate, useful, and consistent outputs. Good prompts specify a role, task, context, and output format. Techniques like chain-of-thought and few-shot prompting can dramatically improve results.
What is chain-of-thought prompting?
Chain-of-thought prompting asks the AI to show its reasoning step by step before giving a final answer. Adding phrases like 'Let's think step by step' or 'Walk me through your reasoning' activates this pattern. It significantly improves accuracy on math, logic, and multi-step problems.
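The pattern is simple enough to express as a one-line helper that appends the reasoning trigger to any question. The function name is illustrative.

```python
def with_chain_of_thought(question: str) -> str:
    """Append a step-by-step reasoning trigger to a question."""
    return f"{question}\n\nLet's think step by step."

print(with_chain_of_thought(
    "A shirt costs $20 after a 20% discount. What was the original price?"
))
```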
What is few-shot prompting?
Few-shot prompting provides 2-5 examples of the desired input-output format before asking the AI your actual question. This 'teaches' the model the exact format and style you want, leading to much more consistent outputs than zero-shot prompting alone.
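Assembling a few-shot prompt is just concatenating example pairs ahead of the real query in a consistent format. The helper below is a minimal sketch; the "Input:/Output:" labels and the sentiment examples are assumptions for illustration.

```python
def build_few_shot_prompt(examples, query):
    """Assemble 2-5 input/output example pairs ahead of the real query,
    so the model infers the format from the examples."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

examples = [
    ("The movie was fantastic", "positive"),
    ("Terrible customer service", "negative"),
]
prompt = build_few_shot_prompt(examples, "The food was okay, I guess")
print(prompt)
```

Ending the prompt with a bare "Output:" invites the model to complete the final pair in the same style as the examples.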
Which prompting technique works best?
It depends on your task. Chain-of-thought works best for reasoning and math problems. Few-shot works best when you need a specific output format. Role-based prompting works best for domain expertise. Structured output with JSON mode works best for data extraction. This cheatsheet helps you pick the right technique for your use case.
Do these techniques work with ChatGPT, Claude, and Gemini?
Yes, all techniques in this cheatsheet are model-agnostic and work with ChatGPT, Claude, Gemini, Llama, and other major LLMs. Some techniques like JSON mode have model-specific syntax differences, but the core concepts apply universally.
Is this cheatsheet free?
Yes, completely free. Search, browse, and copy any technique without signing up or creating an account. All content runs in your browser.
Is my data private?
Yes. This is a purely static reference tool. No data is collected, stored, or sent to any server. Your searches and clipboard copies stay entirely on your device.