AI Prompt Engineering Guide — 6 Techniques That Actually Work

Most people use AI like a search engine — type a vague question, get a vague answer. Prompt engineering is the skill of writing instructions that consistently produce high-quality, accurate, and useful responses. This guide covers 6 practical techniques with real before/after examples.

At a glance: 6 battle-tested techniques · 10x better output with good prompts · 0 cost, just better wording · updated 2026 for GPT-4o & Claude 3.5.

Why Prompts Matter More Than You Think

The same AI model will give wildly different outputs depending on how you phrase your request. This isn't a quirk — it's by design. Language models predict the most likely continuation of your text. A precise, context-rich prompt narrows down the probability space and guides the model toward exactly what you need.

| Item | Weak Prompt | Strong Prompt |
| --- | --- | --- |
| Specificity | Fix this code | Fix the null pointer exception in this TypeScript function. Explain what caused it. |
| Context | Write a function | Write a TypeScript function that validates an email address using regex. Return a boolean. |
| Format | Explain CORS | Explain CORS in 3 bullet points suitable for a junior developer with no HTTP background. |
| Constraints | Summarize this | Summarize this in 2 sentences. Focus on the key business impact, not technical details. |
Technique 1 — Role-Based Prompting

Assigning a role or persona to the AI dramatically improves the tone, depth, and relevance of responses. The model uses the role as a filter for what vocabulary, assumptions, and perspective to apply.

Vague question

❌ Bad
How do I optimize React performance?

Role + context + specific ask

✅ Good
You are a senior React engineer who has worked on large-scale SPAs with 1M+ users.

I have a dashboard that re-renders every second due to real-time data. The FPS is dropping to 20 on low-end devices.

What are the top 3 React-specific optimizations I should apply first?

Domain Expert

"You are a senior DevOps engineer specializing in Kubernetes..."

Audience Adapter

"Explain this to a non-technical product manager..."

Style Guide

"You write in the style of the React docs — precise, minimal, no hype..."
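In application code, a role is just the system message of a chat request. A minimal sketch of assembling one (the `rolePrompt` helper and its parameter names are my own, not from any particular SDK):

```typescript
// Hypothetical helper: assemble a role-based prompt as the {role, content}
// message array most chat APIs accept. Not tied to any specific SDK.
type ChatMessage = { role: "system" | "user"; content: string };

function rolePrompt(persona: string, context: string, ask: string): ChatMessage[] {
  return [
    { role: "system", content: persona },              // the role/persona filter
    { role: "user", content: `${context}\n\n${ask}` }, // situation + specific ask
  ];
}

// The "good" example above, expressed programmatically:
const messages = rolePrompt(
  "You are a senior React engineer who has worked on large-scale SPAs with 1M+ users.",
  "I have a dashboard that re-renders every second due to real-time data. The FPS is dropping to 20 on low-end devices.",
  "What are the top 3 React-specific optimizations I should apply first?"
);
```

Keeping the persona in the system message and the situation plus ask in the user message mirrors how the three role patterns above slot into a real request.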

Technique 2 — Chain-of-Thought (CoT)

Telling the model to "think step by step" before giving an answer significantly improves accuracy on reasoning tasks, math, logic, and debugging. It forces the model to work through intermediate steps rather than jumping to a conclusion.

Direct question

❌ Bad
Is this SQL query efficient? SELECT * FROM orders WHERE customer_id = 123 ORDER BY created_at DESC;

Step-by-step analysis prompt

✅ Good
Analyze this SQL query step by step:
SELECT * FROM orders WHERE customer_id = 123 ORDER BY created_at DESC;

1. First, identify what indexes would help
2. Then check for any N+1 or full-table scan issues
3. Finally, suggest the optimized version with explanation

Table: orders (500,000 rows). customer_id and created_at both have individual indexes.

When to use Chain-of-Thought

CoT is most powerful for: debugging code, math/logic problems, multi-step planning, comparing trade-offs, and any task where "show your work" would help a human too.
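If you build prompts programmatically, the step-by-step scaffold is easy to generate. A minimal sketch (the `chainOfThought` name and wording are my own):

```typescript
// Hypothetical helper: wrap any question in a numbered step-by-step
// scaffold so the model works through intermediate steps before answering.
function chainOfThought(question: string, steps: string[]): string {
  const numbered = steps.map((s, i) => `${i + 1}. ${s}`).join("\n");
  return `Analyze this step by step:\n${question}\n\n${numbered}`;
}

// The SQL example above, rebuilt from parts:
const cotPrompt = chainOfThought(
  "SELECT * FROM orders WHERE customer_id = 123 ORDER BY created_at DESC;",
  [
    "First, identify what indexes would help",
    "Then check for any N+1 or full-table scan issues",
    "Finally, suggest the optimized version with explanation",
  ]
);
```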

Technique 3 — Few-Shot Prompting

Provide 2-3 examples of the exact input→output format you want. The model uses these as a pattern to follow rather than guessing your intent. This is especially powerful for formatting, classification, and code generation tasks.

Few-Shot Example: Commit Message Generator

Convert these git diffs into conventional commit messages.

EXAMPLE 1:
Diff: Added user.email validation in signup form
Output: feat(auth): add email validation to signup form

EXAMPLE 2:
Diff: Fixed null check in getUserById causing crash
Output: fix(users): handle null return from getUserById

EXAMPLE 3:
Diff: Removed unused imports from dashboard.tsx
Output: chore(dashboard): remove unused imports

NOW DO THIS:
Diff: Added Redis caching for product listings API endpoint
Output:

Quick fact

Three examples are the sweet spot for most tasks. More than five rarely add value and just use up context window space.
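The commit-message prompt above can be generated from data instead of written by hand. A sketch (the `fewShotPrompt` helper is my own; the Diff/Output labels follow the example above):

```typescript
// Hypothetical helper: build a few-shot prompt from input/output pairs,
// capped at 5 examples per the note above.
type Shot = { input: string; output: string };

function fewShotPrompt(instruction: string, shots: Shot[], query: string): string {
  const examples = shots
    .slice(0, 5) // more than 5 examples rarely adds value
    .map((s, i) => `EXAMPLE ${i + 1}:\nDiff: ${s.input}\nOutput: ${s.output}`)
    .join("\n\n");
  return `${instruction}\n\n${examples}\n\nNOW DO THIS:\nDiff: ${query}\nOutput:`;
}

const fsPrompt = fewShotPrompt(
  "Convert these git diffs into conventional commit messages.",
  [
    { input: "Added user.email validation in signup form", output: "feat(auth): add email validation to signup form" },
    { input: "Fixed null check in getUserById causing crash", output: "fix(users): handle null return from getUserById" },
  ],
  "Added Redis caching for product listings API endpoint"
);
```

Ending the prompt with a dangling `Output:` nudges the model to complete the pattern rather than add commentary.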

Technique 4 — Constraint & Format Specification

Tell the model exactly what format you want the output in. This prevents verbose, padded responses and makes the output immediately usable in your workflow.

Format Specification Examples

# Response as JSON
Extract the key info from this job posting as JSON:
{ "title": "...", "company": "...", "salary": "...", "remote": true/false, "stack": ["..."] }

Job posting: [paste here]

---

# Response as Markdown table
Compare these 3 state management libraries (Redux, Zustand, Jotai) in a markdown table.
Columns: Library | Bundle Size | Learning Curve | Best For | Verdict

---

# Response as numbered list, max 5 items
List the top 5 reasons Next.js apps are slow in production.
Each item: one sentence, developer-focused, actionable.

| Item | Without Format Spec | With Format Spec |
| --- | --- | --- |
| Length | Unpredictable — often too long | Controlled — exactly what you asked |
| Structure | Paragraphs you need to parse | JSON / table / list ready to use |
| Copy-paste ready? | Usually needs editing | Often paste-and-go |
| Hallucination risk | Higher (more room to fill) | Lower (constrained output) |
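A format spec pairs well with a validation step on your side: even a well-prompted model occasionally returns malformed JSON, so treat its reply as untrusted input. A sketch using the job-posting shape from the example above (`parseJobPosting` is my own name):

```typescript
// Hypothetical parser: extract and validate the JSON a model was asked to
// return. Tolerates extra prose around the JSON by slicing from the first
// "{" to the last "}"; returns null rather than throwing on bad output.
type JobPosting = { title: string; company: string; salary: string; remote: boolean; stack: string[] };

function parseJobPosting(reply: string): JobPosting | null {
  const start = reply.indexOf("{");
  const end = reply.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    const data = JSON.parse(reply.slice(start, end + 1));
    const ok =
      typeof data.title === "string" &&
      typeof data.company === "string" &&
      typeof data.remote === "boolean" &&
      Array.isArray(data.stack);
    return ok ? (data as JobPosting) : null;
  } catch {
    return null;
  }
}
```

Returning `null` instead of throwing lets calling code fall back to a retry prompt when the model drifts off-format.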

Technique 5 — Context Injection

Language models have no memory between conversations and no access to your codebase, docs, or data. Context injection means pasting the relevant information directly into the prompt so the model works with your actual situation rather than a generic one.

No context

❌ Bad
Why is my Next.js app slow?

Real data injected

✅ Good
Here is my Next.js app's Lighthouse report (score: 34):
- LCP: 8.2s (image hero, no priority attribute)
- TBT: 1,200ms (two 400KB client-side JS bundles)
- CLS: 0.42 (dynamic content inserted above the fold)

My stack: Next.js 14 App Router, Tailwind, no image optimization configured.

Based on this specific data, what are the top 3 changes that will have the most impact?

Sensitive data warning

Never paste real user data, API keys, passwords, database contents, or HIPAA/PII data into AI prompts. Use placeholders like [REDACTED], [USER_EMAIL], or masked values. See our HIPAA-compliant AI guide for masking workflows.
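A simple pre-flight scrub can enforce this before any context leaves your machine. A sketch (the patterns below are illustrative, not a complete PII/HIPAA masking solution):

```typescript
// Hypothetical scrubber: mask obvious secrets before injecting text into
// a prompt. Extend the patterns for your own data; do not rely on this
// alone for regulated data.
function redactForPrompt(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[USER_EMAIL]")                         // email addresses
    .replace(/\b(?:sk|pk|api|key|token)[-_][A-Za-z0-9]{16,}\b/g, "[REDACTED]");  // API-key-shaped tokens
}
```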

Technique 6 — Iterative Refinement

Treat AI conversations as a dialogue, not a one-shot query. Start broad, then narrow down with follow-up instructions. This is faster than trying to write the perfect prompt on the first attempt.

1. Send the initial prompt: get a baseline response. Don't overthink the first message.
   Prompt: Write a function to validate a credit card number.

2. Refine with constraints: add the specifics you want changed or added.
   Prompt: Make it TypeScript. Use the Luhn algorithm. Return { valid: boolean, error?: string }.

3. Add edge cases: ask the model to test its own output.
   Prompt: Now write 5 unit tests covering: valid card, expired, wrong length, non-numeric, Amex 15-digit.

4. Request the final version: have the model produce a clean, combined final version.
   Prompt: Output the final function + tests as a single TypeScript file.
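For reference, the final artifact a loop like this converges on might look like the sketch below. This is my own Luhn implementation matching the spec from the refinement step, not actual model output, and it checks only the card number (the "expired" case would need a separate expiry field):

```typescript
// Sketch: Luhn credit-card check returning { valid, error? } as the
// refinement dialogue above specifies.
type CardResult = { valid: boolean; error?: string };

function validateCardNumber(raw: string): CardResult {
  const digits = raw.replace(/[\s-]/g, ""); // tolerate spaces and dashes
  if (!/^\d+$/.test(digits)) return { valid: false, error: "non-numeric characters" };
  if (digits.length < 13 || digits.length > 19) return { valid: false, error: "wrong length" };
  // Luhn: from the right, double every second digit; subtract 9 if > 9.
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0 ? { valid: true } : { valid: false, error: "failed Luhn check" };
}
```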

Prompt Template Library

Save these reusable templates for common developer tasks:

Code Review Template

Review this [LANGUAGE] code for:
1. Correctness — logic errors or edge cases
2. Security — any injection, auth, or data exposure risks
3. Performance — unnecessary loops, missing indexes, N+1 queries
4. Readability — naming, comments, structure

Rate each category 1-5 and explain the top issue in each.

```[LANGUAGE]
[PASTE CODE HERE]
```

Documentation Generator

Generate JSDoc comments for this TypeScript function.

Include:
- @description: one sentence, plain English
- @param: for each parameter with type and purpose
- @returns: what the function returns and when
- @throws: any errors this can throw
- @example: one realistic usage example

```typescript
[PASTE FUNCTION HERE]
```

Bug Explanation Template

I have a bug. Here is everything I know:

ERROR MESSAGE:
[paste error]

CODE WHERE IT HAPPENS:
```[language]
[paste code]
```

WHAT I EXPECTED:
[expected behavior]

WHAT ACTUALLY HAPPENED:
[actual behavior]

WHAT I'VE TRIED:
[list attempted fixes]

Please: (1) explain the root cause, (2) give the fix, (3) explain why the fix works.
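To reuse these templates in scripts, a small placeholder filler is enough. A sketch (`fillTemplate` is my own helper; it only substitutes the UPPERCASE [BRACKET] tokens, so lowercase hints like [paste error] stay visible as reminders):

```typescript
// Hypothetical helper: substitute [PLACEHOLDER] tokens in a saved prompt
// template. Unknown placeholders are left intact so gaps stay visible.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\[([A-Z_ ]+)\]/g, (match, key: string) => values[key] ?? match);
}

const reviewPrompt = fillTemplate(
  "Review this [LANGUAGE] code for:\n[PASTE CODE HERE]",
  { LANGUAGE: "TypeScript", "PASTE CODE HERE": "const x = 1;" }
);
```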

Model-Specific Tips

GPT-4o

Excellent at following format specs. Use JSON output mode for structured data. Handles very long context well.

Claude 3.5 Sonnet

Strong reasoning and nuanced writing. Responds well to "think step by step." Great for code review and analysis.

Gemini 1.5 Pro

Long context champion (1M tokens). Best for analyzing large codebases or long documents.

Local Models (Llama, Mistral)

Good for sensitive data — nothing leaves your machine. Instruction-following is weaker than the hosted models, so simpler prompts work better.

Frequently Asked Questions