✍️ Prompt Writing Formulas
What is Prompt Engineering?
Prompt engineering is the practice of designing and optimizing input instructions to achieve desired outputs from AI language models. It's the bridge between human intent and AI execution—the art and science of communicating effectively with artificial intelligence.
The Universal Prompt Formula
The 6 Core Components of Effective Prompts
| Component | Purpose | Example |
|---|---|---|
| 1. Role/Persona | Define the AI's expertise and perspective | "You are an expert data scientist..." |
| 2. Task/Instruction | Clearly state what needs to be done | "Analyze the following dataset and identify trends..." |
| 3. Context | Provide background and constraints | "For a B2B SaaS company targeting enterprises..." |
| 4. Input Data | Supply the content to work with | "Here is the sales data: [data]" |
| 5. Output Format | Specify structure and style | "Provide response as a bulleted list..." |
| 6. Examples (Optional) | Show desired input-output patterns | "Example: Input: [X] → Output: [Y]" |
Prompt Engineering Best Practices
- Assuming the model has access to real-time data or internet (unless explicitly enabled)
- Using ambiguous pronouns ("it," "that," "this") without clear antecedents
- Overloading a single prompt with multiple unrelated tasks
- Neglecting to specify tone and audience
- Failing to provide examples for complex or novel tasks
1. Zero-Shot Prompting
Basic No ExamplesStructure
Direct instruction without examples. Model relies on pre-trained knowledge.
Rationale: Creates intrigue and emphasizes predictive capability
Option 2: "Turn Campaign Data Into Revenue Growth"
Rationale: Benefit-focused, addresses ROI concern
Option 3: "Stop Guessing. Start Growing Your Pipeline"
Rationale: Addresses pain point and solution
2. Few-Shot Prompting
Intermediate With ExamplesStructure
Provide 2-5 examples to establish pattern and style.
- Use 2-5 examples (more isn't always better due to context limits)
- Examples should represent diverse scenarios
- Maintain consistent formatting across examples
- Use representative edge cases if relevant
- Order examples from simple to complex
3. Chain-of-Thought (CoT) Prompting
Advanced ReasoningStructure
Elicit step-by-step reasoning before final answer. Dramatically improves complex problem-solving.
1. Starting customers: 3,000
2. New customers gained: +450
3. Customers lost (churn): -275
4. Net change: 450 - 275 = 175 customers
5. Growth rate calculation: (Net change / Starting customers) × 100
6. Math: (175 / 3,000) × 100 = 5.83%
Final Answer: The company's net customer growth rate for Q1 was 5.83%
Few-Shot Chain-of-Thought
Combine Few-Shot + CoT for maximum effectiveness on complex tasks.
1. Calculate the train's speed: 240 miles ÷ 4 hours = 60 mph
2. Calculate distance at that speed for 7 hours: 60 mph × 7 hours = 420 miles
Answer: The train will travel 420 miles in 7 hours
4. ReAct (Reasoning + Acting)
Advanced Multi-StepStructure
Interleave reasoning, actions, and observations for tool-using agents.
Action: Search "Eiffel Tower completion year"
Observation: The Eiffel Tower was completed in 1889.
Thought: Now I need to find Paris's population in 1889.
Action: Search "Paris population 1889"
Observation: Paris had approximately 2.4 million inhabitants in 1889.
Thought: I now have both pieces of information needed.
Answer: The population of Paris in 1889, when the Eiffel Tower was completed, was approximately 2.4 million people.
5. Self-Consistency
Advanced EnsembleStructure
Generate multiple reasoning paths and select the most consistent answer.
- Total cookies: 12 × 24 = 288
- Boxes needed: 288 ÷ 8 = 36 boxes
Approach 2 (Per-batch then multiply):
- Boxes per batch: 24 ÷ 8 = 3 boxes
- Total boxes: 3 × 12 = 36 boxes
Approach 3 (Grouping):
- Each batch (24 cookies) needs 3 boxes of 8
- 12 batches × 3 boxes = 36 boxes
Final Answer: 36 boxes (all three approaches agree, high confidence)
6. Tree of Thoughts (ToT)
Advanced SearchStructure
Explore multiple reasoning branches, evaluate each, and select the most promising path.
Pros: Addresses early churn, improves activation
Cons: Only helps new customers, takes time to see results
Viability: 6/10
Branch 2: Predictive Churn Model + Proactive Outreach
Pros: Targets at-risk customers before they leave, data-driven
Cons: Requires ML infrastructure, needs historical data
Viability: 9/10
Branch 3: Feature Enhancement Based on User Feedback
Pros: Addresses root causes, improves product value
Cons: Longer development cycle, may not impact churn in 6 months
Viability: 5/10
Selected Path: Branch 2 - Predictive Churn Model
Reasoning: Highest impact in shortest timeframe, addresses existing customer base, measurable ROI...
[Detailed execution plan follows]
Role Prompting / Persona Assignment
Expert Role Template
Perspective Shifting
"Analyze this from the perspective of [role]"
Example: "As a CFO, evaluate this investment..."
Multiple Personas
Ask the AI to represent different viewpoints
Example: "Debate this topic with responses from a developer, designer, and PM"
Devil's Advocate
"Challenge this idea from a skeptical perspective"
Example: "Play devil's advocate and find flaws in this strategy"
Instruction Hierarchy & Prompt Chaining
Multi-Step Prompt Chain
- Complex Projects: Break down large tasks into manageable steps
- Quality Control: Each step can be reviewed before proceeding
- Token Efficiency: Keep each prompt focused and under context limits
- Iterative Refinement: Progressively improve output quality
Constraint-Based Prompting
Constraint Template
Output Formatting Techniques
Structured Data (JSON)
Table Format
XML/HTML Structure
Bullet Point Hierarchy
Step-by-Step List
Code Block
Delimiter Techniques
Using Delimiters for Clarity
Delimiters help separate instructions from data, especially important for preventing prompt injection.
- Triple quotes: """ content """
- Triple backticks: ``` content ```
- XML tags: <content>...</content>
- Dashes/Hashes: --- content ---
- Brackets: [[[content]]]
Model Comparison Guide
GPT-4 / GPT-4 Turbo
Reasoning: Excellent Context: 128K Speed: MediumBest For:
- Complex reasoning tasks
- Long document analysis
- Creative writing
- Code generation
Prompt Tips:
- Use system messages for role
- Leverage large context window
- Request step-by-step reasoning
Claude 3 (Opus/Sonnet)
Reasoning: Excellent Context: 200K Safety: HighBest For:
- Analysis & research
- Long document processing
- Thoughtful conversations
- Nuanced understanding
Prompt Tips:
- Direct, clear instructions work best
- Use XML tags for structure
- Excels at careful reasoning
GPT-3.5 Turbo
Reasoning: Good Context: 16K Speed: Very FastBest For:
- Quick tasks
- Simple Q&A
- Content generation
- Cost-effective processing
Prompt Tips:
- Keep prompts concise
- Use examples for complex tasks
- Clear, explicit instructions
Gemini 1.5 Pro
Reasoning: Excellent Context: 1M tokens Multimodal: YesBest For:
- Massive document analysis
- Video/audio processing
- Multimodal tasks
- Code understanding
Prompt Tips:
- Leverage huge context window
- Combine text, image, video
- Good at long-range dependencies
Llama 3 (70B)
Reasoning: Good Context: 8K Open Source: YesBest For:
- On-premise deployment
- Cost-sensitive applications
- Privacy-critical use cases
- Fine-tuning customization
Prompt Tips:
- Use clear system prompts
- Benefits from examples
- Simpler language preferred
Mistral Large
Reasoning: Very Good Context: 32K Multilingual: ExcellentBest For:
- European languages
- Code generation
- Structured output
- JSON mode
Prompt Tips:
- Strong instruction following
- Excellent with French, Spanish
- Use JSON schema for structure
System Prompts vs User Prompts
| Aspect | System Prompt | User Prompt |
|---|---|---|
| Purpose | Set overall behavior, role, constraints | Specific task instruction |
| Persistence | Applies to entire conversation | Single turn or specific request |
| Content | Role definition, guidelines, constraints | Actual task, data, questions |
| When to Use | Define consistent behavior | Specific tasks and queries |
| Example | "You are a Python expert. Always provide type hints and docstrings." | "Write a function to sort a list of dictionaries by a specific key." |
Effective System Prompt Template
Marketing & Content Creation
Software Development & Technical
Business Analysis & Strategy
Customer Support & Communication
Education & Training
Common Prompt Problems & Fixes
❌ WEAK PROMPT
"Write about AI"
Problems:
- No clear objective
- No scope or constraints
- No target audience
- No output format
✓ STRONG PROMPT
"Write a 500-word blog post explaining how AI-powered customer service chatbots improve response times for e-commerce businesses. Target audience: small business owners with no technical background. Tone: informative but accessible. Include 3 real-world examples and one actionable implementation tip."
Improvements:
- Specific length constraint
- Clear topic and angle
- Defined audience
- Tone specification
- Content requirements
❌ WEAK PROMPT
"Help me with this code"
Problems:
- No code provided
- No error description
- No expected behavior
- No context
✓ STRONG PROMPT
"Debug this Python function. It should return the sum of even numbers in a list, but it's returning incorrect results for negative numbers. Here's the code: [code]. Expected: sum_even([-2, 3, 4, -5]) → 2. Actual: 6. Explain the bug and provide corrected code with comments."
Improvements:
- Code included
- Specific problem stated
- Test case with expected/actual
- Clear deliverable
❌ WEAK PROMPT
"Analyze this data and tell me what you think"
Problems:
- Vague objective
- No analysis focus
- No format specified
- Subjective request
✓ STRONG PROMPT
"Analyze this sales data from Q1-Q4 2024. Identify: 1) Top 3 performing products by revenue, 2) Month-over-month growth trends, 3) Seasonal patterns, 4) Underperforming categories. Present findings as a markdown table with key metrics and a bullet list of actionable insights. Data: [CSV data]"
Improvements:
- Specific analysis goals
- Clear structure
- Format defined
- Data context provided
Prompt Testing Framework
Systematic Prompt Evaluation
| Criterion | Questions to Ask | Target Score |
|---|---|---|
| Clarity | Is the instruction unambiguous? Can someone else interpret it the same way? | 9-10/10 |
| Specificity | Are all parameters defined? Length, format, tone, constraints? | 8-10/10 |
| Context | Is sufficient background provided? Does the AI have what it needs? | 8-10/10 |
| Consistency | Does the prompt produce consistent results across multiple runs? | 7-10/10 |
| Efficiency | Is the prompt concise without sacrificing clarity? | 7-10/10 |
| Measurability | Can output quality be objectively evaluated? | 8-10/10 |
Prompt Testing Checklist
Iterative Prompt Refinement Process
5-Step Refinement Cycle
Advanced Optimization Techniques
Temperature Tuning
- 0.0-0.3: Deterministic, factual, consistent
- 0.4-0.7: Balanced creativity and coherence
- 0.8-1.0: Highly creative, diverse outputs
- 1.0+: Experimental, unpredictable
Top-P (Nucleus Sampling)
- 0.1: Very focused, safe choices
- 0.5: Moderate diversity
- 0.9: High diversity, creative
- 0.95: Maximum reasonable diversity
Max Tokens
- Set based on expected output length
- Too low: truncated responses
- Too high: unnecessary cost
- Rule of thumb: 1 token ≈ 0.75 words
Stop Sequences
- Define custom stopping points
- Useful for structured output
- Example: Stop at "###" or "\n\n"
- Prevents over-generation
Essential Prompt Patterns Cheat Sheet
| Pattern | When to Use | Template |
|---|---|---|
| Persona Pattern | Need specific expertise or perspective | "Act as a [role] who [characteristics]..." |
| Format Pattern | Need structured output | "Provide output as [format: JSON/table/list]..." |
| Example Pattern | Complex or novel tasks | "Here are examples: [examples]. Now do [task]..." |
| Template Pattern | Consistent output structure needed | "Fill in this template: [template with placeholders]..." |
| Chain Pattern | Multi-step processes | "First [step 1], then [step 2], finally [step 3]..." |
| Constraint Pattern | Specific requirements or limitations | "[Task] with constraints: [list constraints]..." |
| Refinement Pattern | Improving existing output | "Improve this [content] by [specific improvements]..." |
| Comparison Pattern | Evaluating options | "Compare [A] and [B] across [criteria]..." |
| Reasoning Pattern | Need transparent logic | "Explain your reasoning step by step before answering..." |
| Negative Pattern | Avoid specific outcomes | "[Task] but do NOT [unwanted behaviors]..." |
Power Words for Effective Prompts
Action Verbs
Analyze, Generate, Summarize, Compare, Evaluate, Create, Design, Develop, Optimize, Transform, Extract, Classify, Debug, Refactor
Constraint Words
Exactly, Only, Maximum, Minimum, Must include, Must avoid, Within, Between, No more than, At least
Structure Words
First, Then, Next, Finally, Step-by-step, In order, Sequence, Before, After, While
Quality Words
Comprehensive, Detailed, Concise, Specific, Actionable, Clear, Professional, Accurate, Thorough, Precise
Your first prompt doesn't need to be perfect. Start with clear basics, test, and refine based on results. Version control your prompts like code.
Build a personal library of tested prompts for common tasks. Reuse and adapt rather than starting from scratch every time.
More context = better results. Include relevant background, constraints, audience, and desired outcome. Don't assume the model knows implicit information.
When possible, show don't tell. One good example is worth a paragraph of instructions.
Track what works. Note model, temperature, prompt structure, and output quality. Use A/B testing for critical applications.
Instead of one giant prompt, use prompt chaining. Each step can be reviewed and refined, leading to better final output.
Don't just ask for "a report"—specify structure, length, sections, tone. The more precise your format requirements, the better the output.
Wrap user input in delimiters (""", ###, XML tags) to prevent prompt injection and improve parsing, especially in production systems.