AiPro Institute™ Prompt Library
Professional-Grade Prompts for Marketing Analytics & Growth
📧 Email Campaign Performance Analyzer
📋 The Prompt
🧠 The Logic: Why This Prompt Works
1. 📊 Comprehensive Metric Framework Beyond Vanity Numbers
This prompt demands analysis across four critical dimensions: deliverability (bounce/spam rates), engagement (opens/clicks), conversion (revenue attribution), and audience health (unsubscribes/complaints). Most email reports stop at open rates and CTR — vanity metrics that don't reveal the full story.
Why it matters: A 35% open rate means nothing if your CTOR is 8% (indicating openers aren't engaged), your bounce rate is 4% (list quality crisis), and your unsubscribe rate is 1.2% (audience fatigue). By forcing analysis of deliverability health (inbox placement), engagement depth (CTOR, not just CTR), and revenue impact (conversion attribution), you get a 360° view. The prompt explicitly separates hard vs. soft bounces, unique vs. total opens, and mobile vs. desktop performance — nuances that reveal optimization levers most marketers overlook.
Real-world impact: A SaaS company using this framework discovered their 28% open rate was masking a 3.2% CTOR — openers weren't finding relevant content. By shifting focus from "get more opens" to "engage existing openers," they redesigned CTAs and increased conversions by 47% without changing send volume.
2. 🎯 Deliverability Intelligence (The Invisible Performance Killer)
The prompt prioritizes deliverability health assessment upfront — bounce rates, spam complaints, and sender reputation indicators. Why? Because if 30% of your emails land in spam folders (invisible in most analytics), your engagement metrics are fatally misleading. You're analyzing only the emails that made it to inboxes, not your true send performance.
The diagnostic framework: Bounce rate >2% signals list decay or acquisition quality issues. Spam complaint rate >0.1% triggers ISP reputation penalties. The prompt connects these technical signals to strategic implications: "Your 0.18% spam rate suggests frequency or relevance issues; consider re-engagement campaigns for dormant subscribers to prevent future spam triggers." This transforms obscure technical metrics into actionable marketing decisions.
Case example: An ecommerce brand blamed "email fatigue" for declining open rates (22% → 16% in 3 months). Deliverability analysis revealed the real culprit: a 3.8% bounce rate from an outdated purchased list segment, which triggered Gmail's bulk sender penalties. After list hygiene and re-verification, their inbox placement recovered to 94%, and open rates rebounded to 27% — no creative changes needed.
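The threshold logic above (bounce rate >2%, spam complaint rate >0.1%) reduces to a small diagnostic helper. A minimal sketch in Python — the function name and flag messages are illustrative, not part of the prompt itself:

```python
def deliverability_health(sent, bounces, spam_complaints):
    """Flag deliverability risks using the thresholds discussed above:
    bounce rate > 2% signals list decay; spam rate > 0.1% risks ISP penalties."""
    bounce_rate = bounces / sent * 100
    spam_rate = spam_complaints / sent * 100
    flags = []
    if bounce_rate > 2.0:
        flags.append(f"bounce rate {bounce_rate:.2f}% — clean and re-verify the list")
    if spam_rate > 0.1:
        flags.append(f"spam rate {spam_rate:.2f}% — review send frequency and relevance")
    return flags or ["deliverability healthy"]

# 378 bounces and 19 complaints on 47,320 sends (illustrative figures)
print(deliverability_health(47_320, 378, 19))
```

Running the check on a decayed list (e.g., 380 bounces per 10,000 sends) would return both warning flags instead.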
3. 🔍 Click-to-Open Rate (CTOR): The True Engagement Signal
While most teams celebrate open rates and panic over CTR, this prompt elevates Click-to-Open Rate (CTOR) as the premier engagement metric. CTOR = (Unique Clicks ÷ Unique Opens) × 100. It answers: "Of people who opened your email, what percentage found it compelling enough to click?" A 2% CTR with a 20% open rate yields 10% CTOR — meaning 90% of openers never clicked. That's a content failure, not a subject line problem.
Why CTOR trumps CTR: CTR is distorted by send volume and subject line curiosity. CTOR isolates content quality and relevance. Industry benchmarks: 10-15% CTOR is average, 20-30% is strong, 35%+ is exceptional. The prompt instructs AI to diagnose low CTOR as "content-audience misalignment" or "weak CTA design," then prescribe targeted fixes like "Test benefit-driven vs. feature-driven CTAs" or "Reduce CTA count from 5 to 2 for clearer path to action."
Transformation story: A B2B newsletter with 32% open rate but 6% CTOR (only 6 in 100 openers clicked) used this analysis to pivot. They discovered readers opened for industry news (headline curiosity) but didn't click sponsored content (relevance gap). Solution: Separate newsletters for editorial vs. promotional content. CTOR jumped to 24%, and conversion rate doubled.
4. 💰 Revenue Attribution & Conversion Funnel Mapping
The prompt demands end-to-end revenue intelligence: conversion rate, total revenue, revenue per email (RPE), average order value (AOV), and customer lifetime value (CLV) impact. It goes beyond "Did they buy?" to "What's the economic value of this campaign?" and "Are we acquiring high-value customers or one-time discount hunters?"
The forensic approach: By requesting "Direct revenue vs. influenced revenue," the prompt captures both immediate conversions and assisted conversions (e.g., email drove site visit, purchase happened later). The ROI calculation framework — (Revenue - Campaign Cost) ÷ Campaign Cost × 100 — turns engagement data into business outcomes. The prompt also flags segment-level performance: "VIP customers converted at 8.2% vs. 1.4% for prospects — consider separate nurture tracks."
Strategic insight: A fashion retailer's Black Friday email had a "successful" 4.2% conversion rate and $127K revenue. But RPE analysis ($2.18 per email) revealed it was their lowest-performing holiday campaign (average RPE: $4.50). Culprit: Heavy discounting attracted bargain hunters (AOV down 38%). Next year, they segmented: deep discounts for lapsed customers, exclusive early access for VIPs. RPE jumped to $5.20, and profit margin improved 22%.
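The RPE and ROI arithmetic behind this example can be sketched as follows. Only the $127K revenue and $2.18 RPE come from the text; the 58,200 delivered emails and $9,400 campaign cost are assumed for illustration:

```python
def campaign_roi(revenue, campaign_cost):
    """ROI percentage as defined above: (Revenue - Cost) / Cost * 100."""
    return (revenue - campaign_cost) / campaign_cost * 100

def revenue_per_email(revenue, emails_delivered):
    """RPE: revenue generated per delivered email."""
    return revenue / emails_delivered

print(round(revenue_per_email(127_000, 58_200), 2))  # 2.18 — matches the retailer's RPE
print(round(campaign_roi(127_000, 9_400), 1))        # ROI % under the assumed cost
```

The same two functions applied to the retailer's average campaign ($4.50 RPE) would have surfaced the underperformance immediately, despite the headline revenue.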
5. 🧬 Audience Behavior Psychographics & Segmentation Insights
The prompt instructs AI to perform behavioral segmentation forensics: Who opened but didn't click (content issue)? Who clicked but didn't convert (landing page friction)? Who unsubscribed (frequency/relevance problem)? This identifies micro-audiences for targeted re-engagement or exclusion strategies.
The segmentation matrix: Engaged subscribers (opened + clicked) → Nurture with advanced content. Curious non-clickers (opened, no click) → Test stronger CTAs or relevance. Non-openers → Subject line A/B tests or send-time optimization. The prompt also demands "Unsubscribe root cause analysis" and "Spam complaint triggers" — turning negative signals into improvement hypotheses. Example output: "72% of unsubscribes occurred among subscribers acquired via contest (low intent); recommend sunsetting this segment."
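The segmentation matrix reduces to a simple decision function. A sketch — the segment labels and recommended actions are paraphrased from the matrix above:

```python
def behavior_segment(opened, clicked, converted):
    """Map a subscriber's campaign behavior to the next action,
    following the segmentation matrix described above."""
    if not opened:
        return "non-opener: test subject lines and send times"
    if not clicked:
        return "curious non-clicker: test stronger CTAs or content relevance"
    if not converted:
        return "clicked without purchase: investigate landing-page friction"
    return "engaged converter: nurture with advanced content"

print(behavior_segment(opened=True, clicked=False, converted=False))
```

Applied row-by-row to a campaign export, this yields the micro-audiences the prompt asks for.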
Optimization win: A fitness app's re-engagement campaign had 12% open, 3% CTR (poor). Segmentation analysis revealed: Dormant users who last engaged with strength training content had 19% open, 8% CTR when sent strength-specific emails vs. generic "We miss you" messages. By creating behavior-based re-engagement tracks, they reactivated 31% of dormant users vs. 11% with generic campaigns.
6. 🎓 Actionable Roadmap with Prioritized Experimentation
The prompt structures insights into a three-tier action plan: Immediate Wins (this week), Short-Term Improvements (next 30 days), and Long-Term Strategy (next quarter). This prevents analysis paralysis and ensures findings drive execution. Each tier includes specific tactics, not vague advice: "Test subject lines under 40 characters" (immediate) vs. "Build behavioral segmentation engine" (long-term).
The A/B testing agenda: The prompt embeds experimentation mindset by requesting "Recommended next tests based on findings." For example: "Your 22% open rate lags the 28% benchmark; hypothesis: personalized subject lines will increase opens by 4-6 percentage points. Test: Segment A receives 'John, your exclusive offer inside' vs. Segment B receives 'Exclusive offer for loyal customers.' Measure: Open rate and CTOR (ensure personalization doesn't degrade click quality)."
Execution framework: A SaaS company used this roadmap structure to operationalize 43 email analysis reports in one year. Their process: Immediate wins = implement within one sprint. Short-term = add to next quarter OKRs. Long-term = annual strategy planning. Result: Email-driven revenue grew 89% year-over-year, attributed to systematic optimization cycles guided by this analytical framework.
💡 Example Output Preview
📧 Email Campaign Performance Report: Fall Product Launch Campaign
Campaign: New Collection Reveal — Seasonal Product Launch
Sent: September 15, 2025 | Audience: 47,320 Active Subscribers (purchased in last 12 months)
Goal: Drive first-week sales for Fall 2025 collection
🎯 EXECUTIVE SUMMARY
Performance Verdict: 🟡 GOOD (Above Average, Room for Optimization)
Key Wins:
- Strong mobile engagement: 68% of opens on mobile with 4.8% mobile CTR (32% above account average)
- High-value conversions: $284 average order value among converters (18% above site average)
- Deliverability excellence: 0.8% bounce rate, 0.04% spam complaints (well below thresholds)
Critical Improvement Areas:
- Low CTOR (12.8%): Only 1 in 8 openers clicked — content-audience disconnect
- Desktop underperformance: Desktop CTR (2.1%) 54% below mobile (4.8%); desktop design needs work
- Geographic engagement gap: West Coast open rate (31%) vs. East Coast (22%) — timezone optimization needed
Single Most Impactful Recommendation:
Redesign desktop email layout to match mobile's visual hierarchy and CTA prominence. Desktop accounts for 28% of opens but only 18% of clicks — fixing this could increase total conversions by 24-30%.
📊 PERFORMANCE METRICS BREAKDOWN
Deliverability Health: 🟢 EXCELLENT
- Successfully Delivered: 46,942 (99.2% delivery rate)
- Bounce Rate: 0.8% (🟢 Healthy — under 2% threshold)
- Hard Bounces: 247 | Soft Bounces: 131
- Spam Complaint Rate: 0.04% (🟢 Excellent — under 0.1% threshold)
- Inbox Placement Estimate: 96-98% based on engagement signals
Engagement Performance: 🟡 GOOD
- Open Rate: 26.4% (12,394 opens) — 🟡 Above industry avg (22%) but below account avg (29%)
- Unique Open Rate: 24.1%
- Click-Through Rate: 3.1% (1,455 clicks) — 🟡 Meets industry benchmark
- Click-to-Open Rate (CTOR): 12.8% — 🟠 Below target (aim for 18-22% for promotional emails)
- Unsubscribe Rate: 0.31% (145 unsubscribes) — 🟢 Healthy
Revenue Impact: 🟢 STRONG
- Conversion Rate: 2.7% (1,267 purchases)
- Total Revenue: $359,828
- Revenue per Email (RPE): $7.66 — 🟢 28% above account average ($5.98)
- Average Order Value: $284 — 🟢 18% above site average ($241)
- ROI: 18.2:1 (Campaign cost: $19,750 including design, ESP fees, and labor)
Device Performance:
- Mobile: 68% of opens | 4.8% CTR | 2.9% conversion rate 🟢
- Desktop: 28% of opens | 2.1% CTR | 2.2% conversion rate 🔴 Underperforming
- Tablet: 4% of opens | 1.8% CTR | 1.9% conversion rate
Top Performing Links:
- "Shop New Arrivals" CTA (hero section): 647 clicks (44.5% of all clicks)
- "Free Shipping Details" footer link: 312 clicks (21.4%)
- "View Full Collection" mid-email CTA: 289 clicks (19.9%)
🔍 CONTENT PERFORMANCE FORENSICS
Subject Line Analysis:
Subject: "Sarah, your first look at Fall's best 🍂 | Early access ends tonight"
Length: 68 characters (🟡 Slightly long; recommend under 50 for mobile)
Elements: Personalization (first name) + Emoji + Urgency + Exclusivity
Performance: 26.4% open rate vs. 29% account average (🟠 -9% underperformance)
Hypothesis: Subject line may be too long for mobile (where 68% of opens occur). Emoji likely helped, but the multi-clause structure ("your first look" + "early access ends") may have diluted urgency. Recommendation: Test shorter, single-benefit subject lines like "Sarah, fall favorites — yours first 🍂" (39 characters).
CTA Effectiveness:
- Primary CTA ("Shop New Arrivals"): 44.5% click share — Clear winner
- Secondary CTA ("View Full Collection"): 19.9% — Solid but redundant with primary?
- Issue: 5 CTAs total in email; may cause decision paralysis
- Test Recommendation: Streamline to 2 CTAs (hero + footer); hypothesis: fewer choices will increase click concentration and CTOR by 3-5 points
Personalization Impact:
First-name personalization in subject line; no behavioral personalization in email body (e.g., product recommendations based on past purchases). Opportunity: Segment sends by past purchase category (e.g., "Sarah, new dresses like the ones you loved" for dress buyers) — could lift CTOR by 8-12 points based on industry data.
🎯 ACTIONABLE OPTIMIZATION ROADMAP
IMMEDIATE WINS (Implement This Week):
- Desktop Layout Redesign (Priority #1): Increase CTA button size by 40%, center-align hero CTA, increase whitespace around CTAs. Expected Impact: Desktop CTR from 2.1% → 3.2-3.8% (+24-33% conversions).
- Subject Line Length Test: Next send, test 35-45 character subject lines vs. current 60-70 character format. Hypothesis: Shorter = +3-5 points open rate on mobile.
- CTA Consolidation: Reduce from 5 CTAs to 2 (hero "Shop Now" + footer "View Full Collection"). Expected Impact: CTOR from 12.8% → 16-18%.
SHORT-TERM IMPROVEMENTS (Next 30 Days):
- Behavioral Personalization Engine: Implement product recommendations based on past purchase categories. Segment: Dress buyers see new dresses; shoe buyers see new shoes. Expected Impact: CTOR +6-10 points, RPE +15-25%.
- Timezone Send Optimization: Current send: 10am ET for all subscribers. Test: 10am local time for each timezone. Expected Impact: West Coast open rate +4-6 points.
- Re-Engagement Campaign for Non-Clickers: Target the 10,939 subscribers who opened but didn't click with a follow-up email featuring top sellers + social proof ("1,200+ customers already shopping Fall favorites"). Expected Impact: Recapture 8-12% of non-clickers.
LONG-TERM STRATEGY (Next Quarter):
- Predictive Segmentation Model: Build ML model to predict high-intent subscribers based on browsing behavior, email engagement, and purchase history. Send premium/exclusive content to high-propensity segment.
- Lifecycle Email Automation Expansion: Currently: Welcome series + abandoned cart. Add: Post-purchase nurture, re-engagement series, VIP tier progression emails.
- Advanced A/B Testing Program: Systematic testing calendar: Subject line tests (2x/month), CTA tests (1x/month), design layout tests (quarterly).
📈 SUCCESS METRICS FOR NEXT CAMPAIGN
- Primary Goal: Increase CTOR from 12.8% to 17% (+33%)
- Secondary Goals: Desktop CTR from 2.1% → 3.5% | Open rate from 26.4% → 29%
- Revenue Target: Maintain or exceed $7.66 RPE while improving engagement
- Health Metrics: Keep bounce rate <1%, spam complaints <0.05%, unsubscribe rate <0.4%
🔚 CONCLUSION
This campaign delivered solid revenue ($360K, $7.66 RPE) with healthy deliverability, but engagement depth (12.8% CTOR) and desktop performance (2.1% CTR) reveal untapped potential. By addressing the three critical areas — desktop design, CTA simplification, and behavioral personalization — the next launch campaign can realistically achieve 25-35% revenue lift with the same send volume. The infrastructure is strong; now optimize for relevance and friction reduction.
🔗 Prompt Chain Strategy
Step 1: Core Performance Analysis
Prompt: Use the main prompt above with your complete campaign data to generate the comprehensive performance report.
Expected Output: Full analytical report with executive summary, deliverability assessment, engagement metrics, conversion intelligence, content forensics, and prioritized action plan (4,000-6,000 words).
Step 2: Segment Deep-Dive
Prompt: "Based on the campaign analysis, I want to understand segment-level performance variations. Break down the following segments and identify which performed best/worst and why: [List your segments: e.g., VIP customers, new subscribers, lapsed customers, geographic regions, acquisition source]. For each segment, provide: Open rate, CTR, CTOR, Conversion rate, Revenue per email, and specific optimization recommendations."
Expected Output: Segment performance matrix with comparative analysis, identification of high-value vs. low-value segments, and tailored optimization strategies for each cohort. You'll discover hidden patterns like "New subscribers have 38% open rate but 1.2% conversion — nurture sequence needed before promotional sends."
Step 3: A/B Testing Roadmap
Prompt: "Generate a 90-day A/B testing roadmap based on the improvement areas identified. For each test, specify: 1) Hypothesis (what you're testing and expected outcome), 2) Test variants (A vs B with specific details), 3) Success metrics and decision criteria, 4) Sample size requirements, 5) Timeline and priority. Focus on the top 3 optimization opportunities from the analysis."
Expected Output: Structured testing calendar with detailed test plans, statistical rigor guidelines, and decision frameworks. Example: "Week 1-2: Subject Line Length Test | Hypothesis: <50 character subject lines will increase mobile open rate by 4-6 points | Variant A: 'New fall collection — shop now 🍂' (35 chars) vs. Variant B: 'Sarah, exclusive first access to fall's best styles 🍂 | Early access ends tonight' (82 chars) | Success Metric: Open rate (primary), CTOR (secondary) | Sample Size: 5,000 per variant | Decision Rule: If Variant A wins by >3 points with 95% confidence, implement for all future campaigns."
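The sample-size requirement in item 4 can be estimated with the standard two-proportion formula. A sketch using the normal approximation — the default alpha (two-sided 0.05) and 80% power are common conventions, not values specified by the prompt:

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate subscribers needed per variant to detect a lift from
    baseline rate p1 to target rate p2 (two-sided alpha=0.05, 80% power)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 22% -> 26% open-rate lift (the 4-point hypothesis above)
print(sample_size_per_variant(0.22, 0.26))
```

Note that detecting smaller lifts requires dramatically larger samples (the denominator shrinks quadratically), which is why the prompt asks for explicit sample-size requirements before each test.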
🎯 Human-in-the-Loop Refinements
1. 🔄 Provide Historical Context for Trend Analysis
After the initial report, feed AI your last 3-6 campaign results with the same format. Ask: "Analyze performance trends over time. Are we improving or declining? Which metrics show consistent patterns vs. one-time anomalies? What external factors (seasonality, promotions, list growth) might explain variance?" This transforms a single campaign snapshot into a strategic performance narrative. You'll identify whether low engagement is campaign-specific (creative issue) or systemic (audience fatigue, list decay). Example refinement prompt: "Here are my last 5 campaigns [paste data]. How does this campaign fit the pattern? What's our engagement trajectory, and what does it predict for next quarter?"
2. 🎨 Upload Campaign Creative for Visual Analysis
If possible, describe or provide screenshots of the email design. Then ask: "Based on the performance data, critique the email design. Which elements likely contributed to strong/weak performance? Evaluate: Visual hierarchy, CTA prominence and design, Mobile vs. desktop optimization, Text-to-image ratio, Color psychology and brand alignment, Accessibility (contrast, font size)." AI will connect design decisions to engagement outcomes: "Your hero CTA is purple-on-purple (low contrast); likely contributed to low CTOR. Recommendation: High-contrast CTA (orange or white) could lift clicks 12-18%." This bridges quantitative analytics with qualitative design feedback.
3. 💸 Add Campaign Cost Data for Full ROI Clarity
Initial analysis calculates RPE (revenue per email), but for true profitability assessment, provide: Email platform costs, Design/copywriting labor, Promotional discounts offered, Customer acquisition cost (if applicable). Then request: "Calculate full campaign ROI, profit margin, and cost per acquisition. Compare to other marketing channels. Is email over/underperforming relative to its costs?" This prevents the trap of celebrating high revenue while ignoring profitability. A campaign generating $500K revenue but costing $450K (10% margin) is strategically inferior to one generating $200K at $40K cost (80% margin).
4. 🧪 Request Prescriptive A/B Test Hypotheses
Generic advice like "test subject lines" is too vague. After the core report, ask: "Based on the specific underperformance areas, design 3 high-impact A/B tests with detailed hypotheses, variants, and statistical frameworks. For each test: State the problem data point, Propose a psychological/technical hypothesis, Design exact test variants (word-for-word for subject lines, wireframe for design), Define success metrics and confidence thresholds, Estimate expected lift." Example output: "Problem: 12.8% CTOR indicates content-audience disconnect. Hypothesis: Subscribers want social proof before clicking. Test: Variant A (control): 'Shop New Collection' CTA. Variant B: 'Join 2,400+ customers who already love our fall styles' CTA. Metric: CTOR. Expected lift: +4-7 points. Statistical requirement: 2,500 opens per variant for 90% confidence."
5. 📧 Compare Against Competitor Benchmarks
Industry averages are useful, but direct competitor intelligence is gold. If you have access to competitor email data (via tools like Mailcharts or manual monitoring), provide examples: "My competitor's welcome email has 45% open rate vs. my 32%. They use: [describe their approach]. What are they doing right that I'm missing?" AI will reverse-engineer best practices: "Competitor uses curiosity-driven subject line with no discount mention, while you lead with '20% off' — they're attracting higher-intent subscribers. Recommendation: Test non-discount value prop in subject line to attract engaged vs. bargain-hunting subscribers." This competitive benchmarking turns external data into internal strategy.
6. 🚀 Build a 12-Month Email Growth Forecast
After optimization recommendations, request a forward-looking projection: "Assume I implement all immediate and short-term improvements. Model the expected performance trajectory for the next 12 months. Provide monthly projections for: Subscriber list size (accounting for growth and churn), Average open rate, Average CTOR, Conversion rate, Monthly email revenue, Key assumptions and risk factors." This transforms reactive analysis into proactive planning. You'll get: "Month 1: Implement desktop redesign → CTOR 12.8% → 16.2%. Month 2: Add behavioral segmentation → Conversion rate 2.7% → 3.4%. Month 6: Cumulative effect → Monthly email revenue from $360K → $520K (+44%). Risk: Assumes 15% list growth; if growth is 5%, revenue projection drops to $430K." Now you have a roadmap with accountability metrics.
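A projection like the one described can be sketched as a compounding loop. All rates here are hypothetical assumptions, as the refinement itself stresses:

```python
def project_monthly_revenue(start_revenue, monthly_lift, list_growth, months=12):
    """Compound an assumed per-month engagement lift and list-growth rate
    into a simple email-revenue trajectory (illustrative model only)."""
    revenue = start_revenue
    trajectory = []
    for month in range(1, months + 1):
        revenue *= (1 + monthly_lift) * (1 + list_growth)
        trajectory.append((month, round(revenue)))
    return trajectory

# Hypothetical baseline: $360K/month, 2% monthly engagement lift, 1% list growth
trajectory = project_monthly_revenue(360_000, 0.02, 0.01)
print(trajectory[0], trajectory[-1])
```

Swapping in a 5% vs. 15% list-growth assumption shows exactly the kind of sensitivity the risk-factor request is meant to surface.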
✅ Quality Checklist
Before submitting to your team, verify your AI-generated report includes:
- ✅ Executive summary with clear performance verdict and top 3 action items
- ✅ Deliverability health score with bounce/spam rate context
- ✅ CTOR analysis (not just CTR) with engagement depth interpretation
- ✅ Revenue attribution with RPE, AOV, and ROI calculations
- ✅ Segment-level insights (not just aggregate averages)
- ✅ Device performance breakdown with mobile vs. desktop optimization gaps
- ✅ Competitive/benchmark context (industry standards + your historical baseline)
- ✅ Prioritized action plan in three tiers (immediate/short-term/long-term)
- ✅ Specific next steps (not generic advice like "improve subject lines")
- ✅ Success metrics defined for the next campaign
Red flags that indicate you need to refine your inputs:
- 🚩 Report says "good performance" without data-backed justification
- 🚩 Recommendations are generic ("test more," "segment better") without specific tactics
- 🚩 No mention of CTOR or deliverability health (critical diagnostic metrics)
- 🚩 Revenue discussed without RPE, AOV, or profitability context
- 🚩 No comparison to benchmarks or historical performance (context-free analysis)
- 🚩 Action plan lacks prioritization or implementation timeline
If you see these red flags, add more granular data (segment breakdowns, historical trends, campaign creative descriptions) and request deeper analysis with the Human-in-the-Loop refinements above.