🚀 Product Launch Readiness
The Logic
1. Cross-Functional Completeness Prevents Siloed Failure
A launch is only as strong as its weakest link. A perfect product with untrained support staff creates frustrated customers and overwhelmed teams. Brilliant marketing without sales enablement generates leads that don't convert. Excellent positioning on unstable infrastructure causes outages that destroy trust. Cross-functional readiness assessment forces every team to examine its preparedness and dependencies. Product can't launch if legal hasn't approved terms; marketing can't execute if the product isn't feature-complete; sales can't sell without finalized pricing and packaging. This framework identifies where teams are out of sync—where product thinks marketing is handling onboarding communication while marketing assumes product built it into the UI. Forcing explicit assessment of each function's readiness exposes dangerous assumptions before they become launch-day disasters.
2. Risk-Based Prioritization Separates Critical from Nice-to-Have
Not all gaps are created equal. A typo in documentation is annoying but not a launch blocker; a critical authentication bug that exposes user data absolutely is. Risk assessment—evaluating each gap by impact (how bad if it happens) × likelihood (probability it happens)—distinguishes must-fix issues from can-defer improvements. High-impact, high-likelihood risks (server crashes under expected load, core feature doesn't work on iOS) are launch blockers requiring resolution or significant mitigation. Low-impact, low-likelihood risks (edge case bug affecting <1% of users in rare scenario) can be monitored post-launch. This framework prevents both dangerous launches (ignoring critical risks) and perfectionism paralysis (delaying for trivial polish). It creates honest conversations about trade-offs: Can we launch with this limitation? What's our backup plan if this risk materializes? What's the cost of delay versus accepting managed risk?
3. Customer Impact Scenarios Ground Readiness in Reality
Teams often assess readiness from their own perspective—engineering considers product "ready" when features are built; marketing when assets are created; support when training is complete. But readiness must be measured from the customer perspective: Can a customer discover the product, understand its value, decide to purchase, complete payment, onboard successfully, and achieve their desired outcome? Walking through this end-to-end customer journey exposes gaps that silo-focused assessments miss. What happens when a customer tries to sign up during peak load? Where do they go if onboarding confuses them? What if they need help but documentation is incomplete? If a feature breaks, how long until they get support? Customer impact scenarios—literally role-playing the customer journey—reveal hidden dependencies and coordination failures that abstract checklists miss, ensuring launch readiness translates to actual customer success.
4. Rollback Planning Enables Confident Risk-Taking
The ability to rapidly reverse a launch decision transforms risk calculus. Without rollback capability, every risk becomes existential—you're trapped with whatever breaks. With clear rollback procedures, abort criteria, and tested mechanisms to revert to previous state, you can take calculated risks knowing you're not betting the company. Rollback planning forces clarity on critical questions: At what point do we pull the plug? How quickly can we revert? What's the blast radius if we roll back (affected users, data loss, communication needed)? Who has authority to make the rollback call? Teams that plan rollbacks paradoxically launch more confidently because they know they're not trapped—if things go sideways, there's an escape hatch. This transforms launch decisions from irrevocable leaps of faith into reversible experiments where you can fail safely, learn quickly, and try again with improvements.
5. Post-Launch Readiness Determines Long-Term Success
Launch day is just the beginning—post-launch execution determines whether early momentum compounds or collapses. Post-launch readiness means having monitoring to detect problems before customers complain, rapid response capability to fix issues quickly, communication protocols to keep stakeholders informed, and continuous improvement processes to iterate on feedback. Teams often focus all energy on launch day, then exhaust themselves and disband, leaving the product to drift. But Week 1 post-launch is when you discover what actually breaks under real usage, learn which assumptions were wrong, and identify the improvements with highest impact. Post-launch readiness includes on-call rotations, incident response playbooks, daily standups to review metrics and issues, customer feedback collection and analysis, and reserved capacity for rapid iterations. Products that launch well but lack post-launch machinery falter; products with strong post-launch operations can recover from rough launches and continuously improve toward product-market fit.
6. Realistic Timeline Assessment Prevents Compressed Failure
Wishful thinking is the most common launch risk. Teams estimate based on optimistic scenarios (everything goes smoothly, no surprises, perfect execution) and ignore historical evidence that projects rarely finish on time. Realistic timeline assessment means acknowledging actual team capacity (accounting for meetings, context switching, competing priorities), dependency risk (waiting on others, external approvals, third-party services), and Murphy's Law buffer (inevitable surprises, scope creep, illness, technical complications). If your critical path requires perfect execution with zero slack, you're not planning—you're hoping. Realistic assessment exposes when dates are fantasy, forcing honest conversations: Do we cut scope? Add resources? Accept a delayed launch? Launch with limitations? Teams resist these conversations because they're uncomfortable, but the alternative—pretending an unrealistic date is achievable—leads to crunch time death marches, burnout, quality shortcuts, and ultimately, launch failure or delay anyway. Realistic timelines with buffer enable sustainable execution and informed decisions.
Example Output Preview
🚀 Launch Readiness Assessment - TaskFlow Mobile App 3.0 (Launch: March 15, 2026)
EXECUTIVE SUMMARY
Overall Launch Readiness: 🟡 AT RISK
Confidence in Launch Date: 60% (Conditional Go)
Verdict: Launch is possible on March 15, but significant risks exist across technical stability, support readiness, and GTM execution. Recommend CONDITIONAL GO with specific must-complete criteria outlined below, OR consider 2-week delay to April 1 for safer launch with higher confidence.
Top 5 Critical Risks Requiring Immediate Attention:
- 🔴 BLOCKER: iOS App Store Review Timing - Submitted 2 days ago, average review time is 2-5 days but can be 7-10 days if rejected and resubmitted. High risk of missing March 15 launch if any issues found. Mitigation: Prepare Android-only launch plan as backup; have developer on standby for immediate fixes if rejected.
- 🔴 Infrastructure Scalability Unproven - Load tested to 2,000 concurrent users; expect 5,000+ at launch based on user base + marketing push. Database queries showing performance degradation at scale. Risk: Launch day slowdowns or outages. Mitigation: Implement database query optimization (5 days), add caching layer (3 days), set up auto-scaling with aggressive thresholds.
- 🟡 Support Team Under-Prepared - Training scheduled for March 12 (3 days before launch), no practice runs with actual product. 18 support agents, but only 2 have used beta version. Risk: Overwhelmed support, poor customer experience in critical first week. Mitigation: Move training to March 8, require all agents to complete 2-hour product walkthrough by March 10, create cheat sheets for common issues.
- 🟡 Migration Path for Existing Users Untested - 47,000 existing users must migrate to new app. Migration tested with 50 beta users successfully, but edge cases (users with >100 projects, special characters in data, expired subscriptions) not fully validated. Risk: Data loss, failed migrations, support ticket surge. Mitigation: Phased migration: power users first (March 15-17), then general rollout (March 18-22) with rollback capability.
- 🟡 Marketing Landing Page Not Live - Design approved, content finalized, but site not deployed or tested. Dev handoff happening March 8, needs QA and final approval. Risk: Traffic from launch campaigns lands on incomplete/broken page. Mitigation: Prioritize landing page deployment by March 10 with 5 days for testing and iteration.
Launch Decision Recommendation: CONDITIONAL GO
Conditions for Safe Launch:
- ✅ iOS App Store approval received by March 12 (3 days before launch) - If not, delay to March 22
- ✅ Infrastructure optimization complete and load tested to 6,000 concurrent by March 11
- ✅ Support team training completed and agents demonstrate competency by March 12
- ✅ Landing page live and tested by March 10
- ✅ Go/No-Go review meeting March 13 with clear abort criteria
Alternative Scenario: If any of the conditions above are not met by March 13, recommend 2-week delay to April 1. Cost of delay: ~$40K in lost revenue, 2 weeks of team opportunity cost. Benefit: Significantly reduced launch risk, better customer experience, higher confidence in technical stability.
PRODUCT & TECHNICAL READINESS: 🟡 AT RISK
Feature Completeness: 🟢 READY
- All planned features implemented and functional
- Beta testing with 150 users completed; feedback incorporated
- No missing features that would block launch
Bug Status: 🟡 AT RISK
- Critical Bugs: 0 (all resolved) ✅
- High Priority: 3 open (2 under investigation, 1 fix in QA)
- Medium Priority: 12 open (acceptable to launch, monitor post-launch)
- Low Priority: 27 open (defer to future releases)
- Concern: High-priority bug #247 (offline sync occasionally fails to resolve conflicts) still being investigated—could be launch blocker if root cause isn't understood. Target resolution: March 10.
Quality Assurance: 🟢 ADEQUATE
- 85% automated test coverage on core flows
- Manual QA completed on iOS (iPhone 13-15) and Android (Samsung S22-S24, Pixel 7-8)
- Regression testing passed for all critical user journeys
- Accessibility testing completed (WCAG 2.1 AA compliance verified)
- Gap: Limited testing on older devices (iPhone 11, Android 10) - 15% of user base; may encounter edge case issues
Performance & Scalability: 🔴 NOT READY
- Load Testing: Validated to 2,000 concurrent users; performance acceptable
- Expected Load: 5,000-7,000 concurrent at launch peak (based on 47K user base + marketing surge)
- Bottleneck Identified: Database queries for project loading degrade beyond 3,000 users (avg response time: 850ms → 3.2s)
- Infrastructure: Auto-scaling configured but never tested at target load; monitoring alerts set but thresholds untested
- REQUIRED ACTIONS:
  1. Database query optimization (identify N+1 queries, add indexes) - ENG team, 5 days, Due: March 10
  2. Implement Redis caching for frequently accessed project data - ENG team, 3 days, Due: March 11
  3. Load test to 8,000 concurrent users and validate response times <2s p95 - QA team, 2 days, Due: March 13
Technical Debt & Limitations: 🟡 ACCEPTABLE
- Known limitation: Real-time collaboration limited to 8 simultaneous editors (acceptable for MVP; 95% of projects have <5 collaborators)
- Workaround documented: Users with >8 collaborators can use "turn-based" editing mode
- Technical debt: Legacy API endpoints still supported but should be deprecated within 6 months post-launch
Readiness Assessment: Product is feature-complete and quality is acceptable for launch. Primary concern is performance at scale—infrastructure optimization is on critical path and must complete by March 11 for safe launch. If load testing on March 13 reveals issues, we lack time to fix before March 15 launch.
GO-TO-MARKET READINESS: 🟡 AT RISK
Marketing Assets: 🟡 AT RISK
- ✅ Blog post announcing launch (drafted, scheduled for March 15, 9am)
- 🔄 Landing page (design approved, in development, not yet live) - CRITICAL: Deploy by March 10
- ✅ Demo video (90 seconds, professional production, completed)
- ✅ Social media assets (graphics, copy for LinkedIn, Twitter, Facebook scheduled)
- 🔄 Case study with beta customer (interview complete, writeup 70% done, targeting March 12)
- ❌ Press release (not started, optional—decided not to pursue press outreach for this launch)
Sales Enablement: 🟢 READY
- ✅ Pitch deck updated with v3.0 features and benefits
- ✅ Pricing finalized: Free tier (unchanged), Basic $12/mo (new tier), Pro $24/mo (upgraded from $18)
- ✅ Sales training scheduled March 12, 2-hour session with product demo and objection handling
- ✅ Demo environment stable and accessible to all account executives
- ✅ Internal FAQ for common customer questions prepared
Channel Activation: 🟢 READY
- Email: Launch announcement email to 47,000 existing users scheduled for March 15, 10am
- In-App: Notification banner prepared for web app users directing to mobile app download
- Paid Ads: $30K budget allocated; campaigns created in Google Ads and Facebook Ads, targeting "project management app" keywords; scheduled to go live March 15
- Product Hunt: Submission prepared, scheduled for March 16 (day after launch to build momentum)
- Partnerships: Co-marketing email with integration partner (Zapier) scheduled for March 18
Readiness Assessment: GTM execution is mostly ready. Landing page deployment is on critical path and creates launch day risk if not completed by March 10. All other assets are on track. Sales team is aligned and prepared.
[Assessment continues with Customer Experience Readiness, Operations & Infrastructure, Risk Assessment, Timeline Analysis, Launch Execution Plan, Gap Analysis, and Final Launch Decision sections...]
FINAL LAUNCH DECISION RECOMMENDATION
Recommendation: CONDITIONAL GO for March 15, 2026
Confidence Level: 60%
GO Decision Conditions (All Must Be Met by March 13):
- ✅ iOS App Store approval received (currently in review, submitted March 1)
- ✅ Infrastructure performance validated to 6,000+ concurrent users with <2s p95 response time
- ✅ High-priority bug #247 (offline sync) resolved or decision made to launch with workaround documented
- ✅ Support team training completed with demonstrated product competency
- ✅ Landing page live, tested, and validated across devices
Go/No-Go Review: Friday, March 13, 4pm
Decision Makers: VP Product (launch owner), VP Engineering (technical go), VP Marketing (GTM readiness), CTO (infrastructure approval)
Abort Criteria (Automatic No-Go Triggers):
- iOS App Store rejection received after March 12 (insufficient time to fix and resubmit)
- Load testing on March 13 reveals critical performance issues requiring >2 days to fix
- Database migration failure rate >5% in power user testing (March 12-13)
- Discovery of critical security vulnerability
Rollback Plan: If launched and critical issues emerge in first 24 hours:
- Trigger: >5% of users reporting data loss, sustained outage >2 hours, security breach
- Authority: CTO or VP Product can call rollback
- Procedure: (1) Disable new user signups, (2) Halt marketing campaigns, (3) Display maintenance message, (4) Revert to v2.0 app in stores, (5) Migrate users back to previous version
- Communication: Email to all users within 4 hours explaining situation and timeline
Alternative Recommendation: If Conditions Not Met by March 13
Delay to April 1, 2026 (2-week delay)
Rationale: Two additional weeks provides buffer to resolve technical risks, complete marketing assets without rush, allow fuller support preparation, and reduce launch day stress. Cost of delay is manageable; cost of botched launch is significantly higher (brand damage, overwhelmed support, technical debt from rushed fixes).
Success Metrics (Declaring Launch Successful):
- Week 1 Targets: 8,000+ downloads, <3% crash rate, <5% critical support tickets, 70%+ positive app store reviews
- Month 1 Targets: 15,000 downloads, 25% existing user migration, 1,200+ paid conversions, NPS >40
- Must Not Occur: Sustained outage >4 hours, data loss affecting >100 users, security incident, negative press coverage
Post-Launch Plan:
- Daily standups: March 15-22, 9am, product + eng + support leads
- On-call rotation: 24/7 coverage March 15-18, business hours March 19-25
- Metrics monitoring: Real-time dashboard tracking downloads, crashes, support tickets, performance
- Retrospective: March 29, full team debrief on what went well and what to improve
Prompt Chain Strategy
Step 1: Cross-Functional Readiness Assessment
Expected Output: High-level readiness verdict with clear assessment of product quality, technical stability, and GTM preparedness. Identifies whether you have a product problem, technical risk, or GTM execution gap.
Step 2: Operations, Risk, and Timeline Analysis
Expected Output: Detailed risk analysis with severity ratings, critical path identification showing what must complete before launch, and operations validation ensuring post-launch support readiness. Reveals hidden dependencies and timeline risks.
Step 3: Launch Decision and Execution Plan
Expected Output: Actionable launch plan with clear decision recommendation (go/no-go/conditional), specific conditions that must be met, prioritized action items to close gaps, and post-launch monitoring plan. Enables leadership to make informed launch decision with clear risk understanding.
Human-in-the-Loop Refinements
1. Conduct Pre-Mortem Exercise
Imagine the launch failed catastrophically. Request: "Conduct a pre-mortem analysis—assume the launch on March 15 was a disaster. What are the 10 most likely failure scenarios that could have caused this? For each scenario: (1) What specifically went wrong? (2) What early warning signs would we have seen? (3) What could we have done to prevent it? (4) What's our mitigation strategy if it happens?" Pre-mortems surface risks teams are unconsciously avoiding and create proactive mitigation strategies before problems occur.
2. Validate with Cross-Functional Reality Check
AI assessments need ground truth validation. Prompt: "I've shared this readiness assessment with engineering, marketing, sales, and support leads. Here's their feedback [provide feedback]. Where do their concerns differ from this assessment? Are there risks I'm underestimating or gaps they're seeing that weren't captured? Update the risk register and action items based on this team input." Frontline teams often see practical execution challenges that high-level assessments miss.
3. Stress Test the Timeline with Pessimistic Scenarios
Optimistic timelines always slip. Ask: "Re-analyze the critical path assuming pessimistic but realistic scenarios: (1) iOS App Store takes 8 days to approve, (2) Infrastructure optimization takes 2x longer than estimated, (3) One key engineer is sick for 3 days, (4) Marketing asset approval delayed 2 days. Does the March 15 launch survive? What breaks? What's the realistic date with these contingencies factored in?" Pessimistic planning reveals when your timeline has no margin for normal friction.
4. Define Minimum Viable Launch Criteria
Perfect readiness is often impossible. Request: "Define three launch tiers: (1) Ideal launch—everything perfect, zero compromises; (2) Good launch—acceptable trade-offs, manageable risks; (3) Minimum viable launch—bare minimum to launch without disaster. For the current state, which tier are we in? What specific improvements move us from minimum viable to good? What's truly non-negotiable vs. nice-to-have?" This framework helps teams make honest trade-off decisions rather than pretending everything is critical.
5. Build Launch Day Incident Response Playbook
Murphy's Law guarantees something will break. Prompt: "Create a launch day incident response playbook covering: (1) How do we detect problems (monitoring, support tickets, social media)? (2) Escalation paths—who gets alerted for what severity? (3) Communication templates for common issues (outage, bugs, feature not working). (4) Rollback decision tree—when do we abort vs. patch? (5) Hotfix deployment process without standard approval gates. Make this actionable enough that anyone on-call can execute it." Incidents are less catastrophic when response is pre-planned rather than improvised in panic.
6. Schedule Post-Launch Learning Retrospective
Launches are learning opportunities regardless of outcome. Request: "Design a post-launch retrospective framework to be conducted 2 weeks after launch: (1) What went well that we should replicate? (2) What went poorly that we should fix? (3) What surprised us that we didn't anticipate? (4) What would we do differently next launch? (5) What processes or tools should we build to make future launches smoother? Create this framework now so we're prepared to capture learning while it's fresh." Teams that systematize launch learning compound their capabilities; those that don't repeat the same mistakes every launch.