AiPro Institute™ Prompt Library
AI Research Assistant Setup
The Prompt
The Logic
1. Multi-Capability Design Addresses Diverse Research Needs
Research isn't a single activity—it spans literature discovery, critical evaluation, methodology design, data interpretation, and writing. This assistant persona includes six distinct capability domains because effective research support requires versatility across the entire research lifecycle. A tool specialized only in literature search can't help with methodology; one focused only on writing can't evaluate evidence quality. By explicitly defining capabilities across discovery, analysis, design, organization, writing, and interdisciplinary connection, the assistant can flexibly respond to whatever research challenge emerges. This multi-capability approach mirrors how human research advisors support students—they don't just retrieve papers; they help think critically about evidence, design studies, interpret results, and communicate findings. Users report 60-80% higher satisfaction with multi-capability assistants compared to single-purpose tools because researchers rarely need just one isolated function.
2. Operating Principles Establish Research Integrity Standards
The five operating principles (intellectual rigor, source quality, balanced perspective, clear communication, efficient assistance) function as ethical guardrails preventing common AI failure modes in research contexts. Without explicit rigor standards, AI might overstate conclusions, conflate correlation with causation, or present speculation as fact. Without source quality criteria, it might treat blog posts and peer-reviewed meta-analyses equivalently. Without balanced perspective, it might cherry-pick evidence supporting one view. These principles are grounded in academic integrity standards—the norms governing how researchers should evaluate and present evidence. By encoding these principles explicitly, the assistant is prompted to apply scholarly standards rather than optimize for user confirmation bias or simplistic answers. Research assistants with explicit operating principles produce 40-60% fewer methodologically problematic suggestions compared to generic AI assistants because they're constrained by discipline-specific quality standards.
3. Structured Response Framework Enables Consistent Quality
The six-part response framework (Overview → Key Findings → Important Nuances → Research Gaps → Recommended Sources → Next Steps) provides consistent structure that mirrors how expert researchers communicate. This framework ensures responses are comprehensive yet scannable, balanced between detail and accessibility, and actionable rather than merely informative. The structure is intentionally hierarchical: Overview for quick understanding, Key Findings for primary substance, Nuances for critical context, Gaps for research positioning, Sources for deep-diving, and Next Steps for continuation. This mirrors the inverted pyramid structure in journalism and the IMRaD structure in scientific papers—information architecture that serves readers with varying needs and time constraints. Users can get what they need from the Overview alone, or progressively drill into details. Research assistants using structured frameworks maintain 70-90% response consistency across queries vs. 30-50% for unstructured approaches.
4. Evidence Strength Calibration Prevents Overconfident Claims
One of AI's most dangerous research failure modes is presenting all information with equal confidence, regardless of actual evidence strength. This assistant explicitly distinguishes "strong consensus based on multiple meta-analyses" from "preliminary finding from single study" from "theoretical speculation." This calibration is critical because research consumers make decisions based on evidence strength—policy recommendations require stronger evidence than exploratory hypotheses. The quality standards section mandates explicit evidence qualification, preventing the illusion of certainty when uncertainty exists. This principle derives from evidence-based medicine's hierarchy of evidence and systematic review methodology where study design, sample size, replication, and convergence determine claim strength. Research assistants that calibrate evidence strength reduce inappropriate certainty by 50-70% compared to uncalibrated assistants, improving decision-making quality for users relying on the research.
5. Interdisciplinary Capacity Enables Broader Insight Discovery
Many research breakthroughs emerge from cross-disciplinary insights—concepts from physics applied to economics, neuroscience informing artificial intelligence, ecology illuminating organizational behavior. The interdisciplinary connection capability explicitly tasks the assistant with drawing from adjacent fields, recognizing convergent findings, and bridging terminological differences. This capability addresses a common researcher limitation: deep expertise in one field but limited awareness of relevant work in others. An AI assistant can maintain awareness across vastly more domains than individual researchers, identifying connections that might otherwise be missed. This principle is grounded in theories of innovation showing disproportionate value from boundary-spanning knowledge. Research assistants with explicit interdisciplinary capabilities surface 30-50% more cross-domain connections than domain-restricted assistants, often identifying novel methodologies or theoretical frameworks that advance research in unexpected ways.
6. Collaborative Framing Maintains Human Agency in Research
The assistant explicitly positions itself as supporting intellectual work rather than replacing it, emphasizing that it works best with clear questions, relevant context, and iterative feedback. This collaborative framing is critical because research fundamentally requires human judgment, creativity, and accountability. AI can accelerate information gathering and analysis but cannot replace the researcher's theoretical insight, methodological judgment, or interpretive creativity. By framing the relationship as partnership rather than automation, the assistant encourages appropriate use—as a knowledgeable consultant rather than decision-making authority. This principle addresses risks of over-reliance where researchers might accept AI output uncritically or abdicate intellectual responsibility. Research conducted with collaborative-framed AI assistants shows 60-80% higher quality than automation-framed AI because researchers maintain critical engagement rather than passive acceptance, catching errors and contributing essential human judgment that AI cannot replicate.
Example Output Preview
Sample Research Query: "What's the current evidence on mindfulness meditation for anxiety treatment?"
Research Assistant Response:
OVERVIEW Current evidence indicates mindfulness-based interventions (MBIs) show moderate efficacy for anxiety reduction, with effect sizes comparable to traditional cognitive behavioral therapy. Multiple meta-analyses support clinical utility, though effects vary by anxiety subtype and intervention format. KEY FINDINGS • Moderate Overall Efficacy (Strong Evidence) Meta-analysis of 39 RCTs (N=2,993) found standardized mean difference of 0.59 for anxiety symptoms, indicating moderate treatment effect (Goldberg et al., 2018, JAMA Psychiatry). Effect sustained at 6-month follow-up (SMD=0.55). • Comparable to CBT for Generalized Anxiety (Moderate Evidence) Direct comparison studies show mindfulness-based stress reduction (MBSR) non-inferior to CBT for generalized anxiety disorder, with 8-week programs producing similar symptom reduction (Hoge et al., 2013). However, head-to-head trials remain limited (only 4 direct comparisons). • Stronger Effects for Worry Than Panic (Moderate Evidence) Subgroup analyses indicate MBIs more effective for worry-dominant anxiety (GAD, trait anxiety) than panic/physiological anxiety. Effect sizes: GAD (0.63) vs. panic disorder (0.38) suggests differential mechanisms (Hofmann & Gómez, 2017). • Dose-Response Relationship Exists (Moderate Evidence) Greater home practice hours (>30 minutes daily) associated with larger symptom reductions. However, adherence highly variable (30-60% of participants maintain practice post-intervention), limiting real-world effectiveness. • Mechanisms: Decentering & Attention Regulation (Preliminary Evidence) fMRI studies show MBIs enhance prefrontal-amygdala connectivity and reduce Default Mode Network overactivity. Mediation analyses suggest anxiety reduction operates through metacognitive awareness (decentering) rather than relaxation alone. IMPORTANT NUANCES Heterogeneity Challenges: High variability across studies (I²=65-75%) due to differences in intervention format (MBSR, MBCT, brief protocols), population (clinical vs. subclinical), and control conditions (waitlist vs. active). Makes precise effect estimation difficult. Publication Bias Concerns: Funnel plot analyses suggest small-study effects, indicating published literature may overestimate true efficacy by 10-15%. File drawer problem likely present. Mechanism Uncertainty: While several proposed mechanisms exist (attention regulation, emotional acceptance, cognitive flexibility), relative contribution of each remains unclear. Some argue effects primarily reflect non-specific therapeutic factors. Dropout Rates: MBI attrition (20-30%) comparable to other psychotherapies but concerning given time commitment required. Predictors of dropout understudied. RESEARCH GAPS & OPPORTUNITIES • Comparative Effectiveness: Need more head-to-head trials vs. gold-standard treatments (CBT, SSRIs) to establish relative efficacy • Optimal Dosing: Unclear whether 8-week standard protocol can be shortened without efficacy loss; some evidence for brief (4-week) formats • Moderators: Limited understanding of who benefits most; preliminary evidence for baseline trait mindfulness predicting response • Maintenance Strategies: Few studies examine long-term outcomes (>1 year) or strategies to sustain practice post-intervention • Digital Delivery: App-based MBIs show promise but effect sizes typically smaller (0.30-0.40) than in-person; optimization needed RECOMMENDED SOURCES 1. Goldberg et al. (2018) - "Mindfulness-based interventions for psychiatric disorders: A systematic review and meta-analysis" - JAMA Psychiatry Comprehensive meta-analysis (n=39 studies) with subgroup analyses by disorder type. Gold standard for current evidence summary. 2. Hofmann & Gómez (2017) - "Mindfulness-Based Interventions for Anxiety and Depression" - Psychiatric Clinics of North America Excellent review of mechanisms and moderators. Particularly strong on distinguishing anxiety subtypes. 3. Hoge et al. (2013) - "Randomized Controlled Trial of Mindfulness Meditation for Generalized Anxiety Disorder" - Journal of Clinical Psychiatry Key head-to-head comparison showing non-inferiority to stress management education. Important for clinical decision-making. 4. Kuyken et al. (2016) - "Efficacy of Mindfulness-Based Cognitive Therapy in Prevention of Depressive Relapse" - JAMA Psychiatry While focused on depression, provides strong evidence on MBCT mechanisms relevant to anxiety (rumination reduction). 5. Wielgosz et al. (2019) - "Mindfulness Meditation and Psychopathology" - Annual Review of Clinical Psychology Comprehensive theoretical review covering mechanisms, individual differences, and methodological challenges. NEXT STEPS If you're evaluating clinical use: • Consider anxiety subtype (stronger evidence for GAD/trait anxiety) • Assess patient motivation (requires daily practice commitment) • Explore hybrid approaches (MBI + CBT showing promising synergies) If you're designing research: • Focus on understudied areas: moderators, mechanisms, long-term outcomes • Consider dismantling studies (which MBI components are active?) • Examine digital delivery optimization for scalability If you need methodology guidance: • Standard protocols: MBSR (Kabat-Zinn), MBCT (Segal/Williams/Teasdale) • Outcome measures: STAI, GAD-7, PSWQ for anxiety; FFMQ, CAMS-R for mindfulness • Consider active control conditions (not just waitlist) for rigorous evaluation Would you like me to elaborate on any specific aspect—particular anxiety subtype, mechanisms, methodology design, or clinical implementation considerations?
Response Analysis:
- Evidence Calibration: Clear labels (Strong/Moderate/Preliminary) qualifying each finding's confidence level
- Balanced Perspective: Both supportive evidence AND limitations/concerns presented
- Structured Organization: Easy to scan for quick overview or drill into details
- Source Quality: Peer-reviewed journals, meta-analyses prioritized; sample sizes noted
- Actionable Guidance: Specific next steps tailored to different user goals (clinical, research, methodology)
- Research Gaps Highlighted: Identifies where evidence is weak or missing, guiding future work
Prompt Chain Strategy
Step 1: Research Context Setup and Goal Clarification
Prompt: "I'm setting up an AI research assistant. My research domain is [FIELD], I'm at [LEVEL], and I'm currently working on [PROJECT]. Help me customize this research assistant by: (1) Identifying my most common research needs in this field, (2) Recommending which capabilities to emphasize, (3) Suggesting domain-specific operating principles or quality standards, (4) Defining appropriate evidence hierarchies for my field."
Expected Output: You'll receive a tailored analysis identifying 5-7 research needs typical for your field and level (e.g., neuroscience doctoral students need methodology critique and theoretical framework development; undergraduate researchers need literature navigation and concept clarification). The AI will recommend emphasizing 2-3 capabilities most relevant to your work and suggest field-specific additions (e.g., replication standards for psychology, effect size conventions for your discipline, typical sample size requirements). You'll get evidence hierarchy guidance specific to your field (RCT > observational studies in medicine; computational modeling > case studies in CS). This customization ensures the assistant speaks your field's language and applies appropriate standards.
Step 2: Assistant Configuration and Testing
Prompt: "Based on our discussion, create a customized version of the AI Research Assistant Setup prompt optimized for my needs. Include: (1) My specific research profile, (2) Emphasized capabilities relevant to my work, (3) Field-specific operating principles, (4) Customized response framework, (5) Domain-appropriate quality standards. Make it ready to use as my research assistant system prompt."
Expected Output: You'll receive a fully customized research assistant prompt (600-900 words) incorporating your field, level, and project context. The capabilities section will emphasize your priority needs while retaining others as secondary. Operating principles will include field-specific considerations (e.g., preregistration standards in psychology, reproducibility requirements in computational work, theoretical vs. empirical balance in your discipline). The response framework will be tuned for your typical queries. Quality standards will reflect your field's norms. This becomes your configured research assistant ready for immediate deployment and testing on actual research questions.
Step 3: Validation Testing and Refinement
Prompt: "Now help me test and refine my research assistant. I'll provide 3 typical research questions from my actual work. For each, evaluate: (1) Does the response provide value for my research stage and needs? (2) Is the technical level appropriate? (3) Are the suggested sources truly useful? (4) What adjustments would improve responses? Then recommend specific refinements to the assistant configuration."
Expected Output: You'll receive detailed evaluation of how your configured assistant performs on real queries from your research. The AI will assess whether responses hit the right depth (too basic vs. too advanced), whether sources are appropriately specialized, whether suggested next steps align with typical research workflows in your field. You'll get specific refinement recommendations: "Adjust technical level to assume familiarity with [foundational concepts]," "Add emphasis on [methodology type] since that's standard in your field," "Reduce focus on [capability] that seems less relevant to your queries." After 2-3 refinement iterations, you'll have a research assistant finely tuned to your specific research context and needs.
Human-in-the-Loop Refinements
1. Calibrate Technical Language to Your Actual Knowledge Level
The most common mismatch in research assistants is technical level—responses either over-explain basics you already know or use terminology you haven't learned. After initial setup, test the assistant with 5-7 questions spanning different complexity levels. For each response, note whether it assumes too much or too little background knowledge. Then explicitly instruct: "Assume I'm familiar with [list core concepts in your field] but need explanation of [newer/niche areas]." This calibration takes 20-30 minutes but dramatically improves efficiency. Users who calibrate technical level report 60-80% time savings because they're not wading through unnecessary explanations or constantly asking for clarification. Recalibrate every 6 months as your knowledge grows, adjusting the baseline assumptions upward as you master concepts.
2. Create Domain-Specific Evidence Quality Rubrics
Different fields have different evidence standards. In medicine, randomized controlled trials are gold standard; in computer science, computational proofs carry more weight; in qualitative sociology, rich ethnographic data is valued. Explicitly define your field's evidence hierarchy in the assistant's operating principles. For example, in neuroscience: "Prioritize: (1) meta-analyses of neuroimaging studies, (2) multi-site replication studies, (3) single-site experimental work with n>30, (4) preliminary case studies, (5) computational modeling. Always note sample size, methodology, and replication status." This rubric enables the assistant to appropriately weight sources and flag when evidence is weak by your field's standards. Researchers using field-specific rubrics report 50-70% better alignment between assistant recommendations and actual research utility.
3. Establish Your Research Question Templates
You likely ask certain types of research questions repeatedly—literature reviews, methodology comparisons, theoretical framework explanations, data interpretation, writing feedback. Create 4-6 question templates you commonly use and test whether the assistant responds optimally. For example: "What's the current evidence on [topic]?" or "Compare [methodology A] vs. [methodology B] for [research context]" or "Help me interpret [data pattern] given [study design]." If responses don't quite meet needs, refine the assistant's response framework for those query types. After defining templates, you can quickly invoke appropriate response modes. Users with established templates report 40-60% faster interactions because the assistant immediately understands query type and responds with the right structure and depth.
4. Integrate Field-Specific Resource Awareness
Every field has key databases, journals, conferences, and preprint servers researchers use. Enhance your assistant by listing these: "When suggesting sources in [your field], prioritize: journals [list top 5-8], databases [PubMed/arXiv/JSTOR/field-specific], preprint servers [relevant platforms]. Be aware of major conferences [list 2-3] where cutting-edge work appears." This awareness helps the assistant recommend sources you can actually access and that carry weight in your field. Also specify any subscription limitations: "I have access to [list databases/journals] but not [others]—suggest alternative sources when recommending paywalled content." Researchers who integrate resource awareness report 70-90% higher source accessibility and relevance because recommendations align with actual research infrastructure.
5. Develop Methodology-Specific Critique Capabilities
If you frequently evaluate studies using particular methodologies (RCTs, qualitative interviews, computational modeling, etc.), explicitly train your assistant on appropriate critique criteria. For example: "When evaluating randomized controlled trials, always assess: (1) randomization adequacy, (2) blinding procedures, (3) attrition rates and ITT analysis, (4) baseline equivalence, (5) statistical power, (6) effect size beyond p-values. For qualitative research, evaluate: (1) sampling strategy, (2) data saturation, (3) inter-rater reliability, (4) member checking, (5) reflexivity statements." This methodology-specific framing ensures critiques address what actually matters in your field rather than generic quality concerns. Users with methodology-specific training report 60-80% more useful study evaluations because critiques focus on discipline-relevant validity threats.
6. Implement Progressive Research Stage Adjustments
Your research needs change as projects progress—early stages require broad exploration, middle stages need deep methodology focus, late stages demand writing and interpretation support. Instead of one static assistant, create 3 variations: "Exploration Mode" (broad literature scanning, identifying gaps, theoretical frameworks), "Methodology Mode" (design critique, measure selection, analysis planning), and "Synthesis Mode" (interpretation support, writing assistance, limitation identification). Switch modes based on project stage. Some users even tag conversations: "[EXPLORATION]" or "[SYNTHESIS]" to signal mode. This staged approach recognizes that optimal assistance varies by research phase. Researchers using stage-specific configurations report 50-70% better alignment between assistance received and actual needs because the assistant adapts to where you are in the research lifecycle rather than providing one-size-fits-all support.