Practical Guide to Producing Scoping Reviews with AI Tools

Unlock the Power of Scoping Reviews with AI

Version 1.0 - Target Release: April 28, 2025

Objective

This guide provides a clear, step-by-step method for conducting high-quality scoping reviews using AI tools. Learn how to move efficiently from formulating your review question to sharing actionable insights, all while maintaining ethical standards and leveraging the latest AI capabilities, including "deep research" for complex synthesis.

Who is this Guide For?

Designed primarily for Technology Innovation Management (TIM) students and faculty, this guide helps you:

Craft Clear Review Questions

Develop structured, effective review questions that guide your research process.

Conduct Thorough Reviews

Cover both academic and grey literature with reproducible methods.

Leverage AI Tools Ethically

Select and screen studies, extract key findings, ensure transparency, and generate feedback on drafts.

Translate Findings into TIM Assets

Create venture pitches, market analyses, and competitive intelligence from your research.

Guide Structure

This guide is organized into three main parts with supplementary resources:

Part 1: Foundations

  • Glossary & Introduction
  • Why Use Scoping Reviews
  • Key Frameworks
  • AI Benefits & Limitations
  • Human Oversight
  • Reference Management
  • Deep Research Tools

Part 2: The 9-Step Method

A step-by-step walkthrough from question formulation to dissemination, integrating AI tools effectively.


Includes a checklist for authors and reviewers.

Part 3: Keeping Current

  • Guide Strengths
  • Updates
  • Contributing
  • Acknowledgements
  • Ethical AI Use

Appendix: Tool Resources

Quick links to recommended AI tools and tutorials to support your scoping review process.

Guide Version & Updates

Current Version: 1.0 (April 28, 2025)

Version | Date | Changes Made | Edited By
0.5 | March 4, 2025 | Introduced first draft of the Guide | Tony Bailetti
1.0 | April 28, 2025 | Integrated academic and grey literature; integrated deep search methods; improved human oversight; added visual examples; added tutorial and tool links; refined content and incorporated team recommendations; corrected errors | Tony Bailetti

(See Part 3 for details on the update system and how to contribute.)

Part 1

Foundations of Scoping Reviews

Understand the fundamental concepts, frameworks, and tools that make effective scoping reviews possible.

Glossary of Key Terms

Boolean Operators

Logical connectors (AND, OR, NOT) used to refine database searches.

Deep Research (AI Capability)

Advanced AI feature analyzing numerous sources (web, academic, files) to synthesize complex information into detailed summaries or reports. Requires careful human validation.

Grey Literature

Information produced outside traditional publishing (reports, conference proceedings, government docs).

Human Oversight

Essential human judgment in monitoring, guiding, and critically evaluating AI outputs for accuracy, ethics, and relevance.

PRISMA-ScR

Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews: guidelines for conducting and reporting scoping reviews.

Scoping Review

Maps key concepts, evidence, and research gaps in a field, typically without assessing study quality.

Systematic Review

Structured review answering a specific question using rigorous methods to identify, appraise, and synthesize relevant studies.

Introduction: Why Scoping Reviews Matter in TIM

Scoping reviews help you map what's known and unknown in a field, uncovering gaps that can spark new business ideas or research questions.

In Technology Innovation Management (TIM), they are invaluable for informing:

Venture pitches
Market analysis
Business opportunities
Product development
Process improvement
Competitive landscape
Technology disruption
Business models
IP management
Innovation ecosystems

Scoping Reviews in Action (TIM Example):

Explore blockchain commercialization, open innovation ecosystems, or emerging AI startups to understand the current landscape and identify new opportunities.

AI-Powered Reviews:

By combining human expertise with AI tools (including "Deep Research" capabilities for complex synthesis), TIM researchers can efficiently navigate vast amounts of diverse literature, uncover insights from non-traditional sources, and guide future innovation strategies. Crucially, human oversight remains paramount.

Scoping Reviews

  • Cast a wider net for broader exploration
  • Ideal for mapping rapidly evolving tech landscapes
  • Focus on "what exists" rather than "how well something works"
  • Typically don't assess study quality in depth

Systematic Reviews

  • Narrower, focused questions
  • Evaluate intervention effectiveness
  • Often include quality assessment
  • Measure specific outcomes (e.g., comparing success rates of two IP licensing strategies)

1.1 When to Use a Scoping Review

In TIM, scoping reviews are particularly useful when:

Broad/Exploratory Questions

Exploring what exists rather than how well something works.

Example: Investigating how AI platforms transform tech commercialization across industries.

Emerging Topics

Mapping new fields where concepts, definitions, and gaps are unclear.

Example: Mapping quantum computing applications in early-stage tech startups.

Diverse Literature

Integrating findings from varied sources (disciplines, methods) where systematic synthesis is difficult.

Example: Synthesizing insights on AI use in healthcare, finance, and manufacturing from engineering, business, and sociology sources.

Minimal Quality Appraisal

Cataloging existing knowledge without deeply assessing individual study quality.

Example: Identifying existing open innovation strategies rather than evaluating their effectiveness.

Identifying Gaps & Trends

Summarizing research, spotting uncharted areas, and informing future directions.

Example: Finding gaps in blockchain-based supply chain management for market opportunities.

1.2 Key Frameworks for Scoping Reviews

Scoping reviews often follow established frameworks. Understanding these helps structure your review:

Arksey and O'Malley (2005)

Focus: Foundational five-stage iterative process (with an optional consultation stage).

Strength: Flexibility for fast-changing fields (e.g., fintech).

TIM Example:

Broadly mapping wearable tech literature (journals, patents, whitepapers) to identify emerging sensor innovations.

(Note: Frameworks can often be combined for a more robust review.)

1.3 Benefits of Using AI for Scoping Reviews in TIM

AI offers significant advantages for managing the large, diverse datasets common in TIM scoping reviews, speeding up research and delivering actionable insights. Advanced "Deep Research" AI capabilities can further assist with synthesizing complex information from numerous sources (with human validation).

Speed & Efficiency

Rapid screening, automated deduplication saves time.

Enhanced Discovery

Uncovers diverse sources, surfaces hidden research via recommendations.

Improved Organization

AI clusters articles thematically, auto-tags topics (e.g., "IP management").

Scalable Analysis

Handles massive datasets, allows dynamic updating as new research appears.

Reduced Human Error

Consistent application of screening criteria.

Insight Generation

Text summarization highlights key findings, trend detection reveals shifts.

Cost Savings

Frees up researcher time for deeper analysis, validation, and strategic tasks.

1.4 Limitations of Scoping Reviews

While powerful for broad overviews in TIM, be aware of these limitations:

Lack of Critical Appraisal

May include lower-quality studies without rigorous evaluation.

Risk: Inflated perception of strategy effectiveness.

Broad Scope

Can provide a high-level view but lack depth for specific recommendations.

Risk: Too general for policy decisions.

No Strength-of-Evidence Assessment

Difficult to compare findings uniformly across diverse methodologies.

Risk: Hard to prioritize reliable data.

Selection Bias

Risk of overemphasizing known areas and missing newer, niche topics.

Risk: Missing emerging tech or innovation hubs.

Heterogeneity of Studies

Combining diverse data types complicates synthesis.

Risk: Difficulty consolidating findings.

Lack of Meta-Analysis

Cannot calculate statistical effect sizes (e.g., ROI for ventures).

Risk: No quantitative benchmarks.

Time/Resource Intensive

Searching complex domains requires significant effort.

Risk: Stretched team capacity.

Difficulty Defining Scope

Balancing comprehensiveness vs. feasibility in fast-moving fields is challenging.

Risk: Unwieldy review or missed subtopics.

Risk of Overlooking Grey Literature

Missing key industry reports or startup data undervalues real-world insights.

Risk: Diminished practical value.

Challenges in Updating

Rapid tech shifts require frequent updates to stay relevant.

Risk: Quickly outdated findings.

1.5 Limitations of Using AI Tools for Scoping Reviews

AI boosts productivity but has critical limitations. Stay vigilant, particularly for the biases described in Section 1.6, and verify AI outputs against original sources.

1.6 Understanding AI Biases

Be aware of how biases in AI tools can shape your findings in TIM:

Selection Bias

Overrepresenting visible sources (top journals) while ignoring others (non-English, regional).

Example: AI prioritizing Silicon Valley reports, missing insights from Southeast Asian startups.

Mitigation: Search regional databases, local innovation hubs.

Algorithmic Bias

Amplifying established views, downplaying alternatives.

Example: AI trained on software data miscategorizing hardware innovation studies as irrelevant.

Mitigation: Use multiple AI tools, cross-verify, highlight niche sectors.

Confirmation Bias

Aligning with initial search terms, excluding contradictory evidence.

Example: AI focusing on "successful" blockchain pilots, ignoring failed ones.

Mitigation: Include counter-narratives (e.g., "failures"), evaluate results critically.

Citation Bias

Prioritizing highly cited older work, missing newer influential preprints.

Example: AI overlooking less-cited but groundbreaking early research on sustainable materials.

Mitigation: Update search parameters, include novel/disruptive studies.

Language & Accessibility Bias

Focusing on English open-access journals, omitting other languages.

Example: AI missing non-English case studies on unique IP licensing models.

Mitigation: Use multilingual databases, translate key papers if feasible.

Data Hallucination

Inventing facts or references.

Example: AI creating a fictitious reference by merging details from multiple sources.

Mitigation: Verify all references/data in standard databases. Investigate suspicious citations manually.

1.7 Human Oversight and Critical Evaluation

AI accelerates research, but your expertise is indispensable. Human oversight is an active process of critical validation, especially crucial in TIM.

Key Responsibilities:

Monitor AI Outputs: Check domain accuracy (e.g., correct application of innovation frameworks).

Evaluate Credibility: Assess suitability of included studies (e.g., capture relevant new R&D).

Intervene on Errors: Correct AI misinterpretations (e.g., confusing digital entrepreneurship with e-commerce).

Critically Evaluate Synthesis (Especially Deep Research): Treat AI outputs as drafts.

  • Gut Check: Does it seem plausible? Align with your understanding? Are sources credible?
  • Verify: Double-check crucial facts, definitions, data against original sources or reliable databases.
  • Cross-Check: Compare findings from different AI queries/tools. Investigate conflicts.
  • Apply Judgment: Filter insights through your knowledge. Discuss with peers/advisors.

Blend AI efficiency with your contextual knowledge of markets, investors, and regulations for valid, reliable reviews.

1.7.1 Maintaining Transparency and Explainability

Clearly document AI's role throughout the review:

In-Document Annotation

Use comments/footnotes to mark AI-influenced sections.

[AI Draft - Tool: ChatGPT 4.0, Date: 2025-04-21, Prompt: "Summarize findings on AI adoption barriers"]

[Human Edit - Verified data, clarified terminology, integrated into Section 3.1]

Process Explainability

Document why AI was used and its impact.

  • Justify decisions (especially overrides)
  • Trace synthesis back to sources
  • Triangulate findings where possible
  • Acknowledge AI tool limitations
  • Include an AI Disclosure Statement

1.7.2 Maintain an AI Usage Log

Use a dedicated log to track AI use for traceability.

Timestamp | AI Tool & Version | Task | Prompt Used | Verification
2025-04-21 14:32 | ChatGPT 4.0 | Search Term Generation | "Generate Boolean strings for quantum security startups" | J.Smith
2025-04-22 09:15 | Perplexity Pro | Deep Research on Market Trends | "Analyze recent quantum computing VC investments 2023-2025" | T.Jones
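A log like the one above can be kept as a plain CSV file so every AI interaction stays traceable. A minimal sketch in Python; the file name and column names are illustrative, not prescribed by this guide:

```python
import csv
import os
from datetime import datetime, timezone

LOG_FIELDS = ["timestamp", "tool_version", "task", "prompt", "verified_by"]

def log_ai_use(path, tool_version, task, prompt, verified_by):
    """Append one AI-usage entry to a CSV log, writing the header for a new file."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M"),
            "tool_version": tool_version,
            "task": task,
            "prompt": prompt,
            "verified_by": verified_by,
        })

# One entry mirroring the first row of the table above:
log_ai_use("ai_usage_log.csv", "ChatGPT 4.0", "Search Term Generation",
           "Generate Boolean strings for quantum security startups", "J.Smith")
```

Because the log is append-only and timestamped, it doubles as the audit trail for the AI Disclosure Statement discussed in Section 1.7.1.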

1.8 Reference Management for Scoping Reviews

Effective reference management is critical for scoping reviews, especially when using AI tools that may generate incorrect citations.

Reference Management Tools

Zotero

Free, open-source with browser integration, PDF annotation, and group collaboration.

Mendeley

PDF management with social networking features and institutional repository integration.

EndNote

Comprehensive tool with advanced formatting options and database searching capabilities.

Papers

Intuitive interface with powerful search capabilities and AI-powered suggestions.

Reference Management Best Practices

1. Start Early

Set up your reference system before beginning your search to avoid backtracking.

2. Verify AI-Generated Citations

Always double-check citations produced by AI tools against the original source.

3. Use Consistent Tagging

Develop a consistent tagging system to categorize sources by theme, methodology, or relevance.

4. Maintain a Backup

Regularly export your reference library to prevent data loss during collaborative projects.

5. Document Search Process

Include metadata about how each reference was discovered for transparency and reproducibility.

Citation Network Visualization

Citation network analysis helps identify:

  • Key foundational papers in a research area
  • Research clusters and sub-domains
  • Emerging research fronts
  • Connections between different bodies of literature

Tools like VOSviewer and CiteSpace can generate visual maps of citation relationships to guide your scoping review.
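Even without dedicated software, the core idea behind citation network analysis can be sketched in a few lines: treat each citation as a directed edge and count how often each paper is cited (its in-degree) to surface likely foundational works. The paper IDs below are toy data, not real references:

```python
from collections import Counter

# Each pair (citing, cited) is one citation link; IDs are illustrative.
citations = [
    ("P2", "P1"), ("P3", "P1"), ("P4", "P1"),  # P1 is cited by three papers
    ("P4", "P2"), ("P5", "P3"),
]

# In-degree = number of times each paper is cited.
in_degree = Counter(cited for _, cited in citations)

# Rank papers by citation count to flag likely foundational works.
ranking = in_degree.most_common()
print(ranking)  # [('P1', 3), ('P2', 1), ('P3', 1)]
```

Tools like VOSviewer add clustering and visualization on top of exactly this kind of edge list, which most databases let you export.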

[Figure: interactive citation network visualization showing research clusters such as AI, ML, NLP, CV, and RL]

1.9 Overview of AI Deep Research Tools for Synthesis

A newer category of AI tools offers "Deep Research" capabilities, analyzing potentially hundreds of sources to generate detailed, synthesized reports. These are useful for Steps 5 (Analyze/Synthesize), 6 (Interpret), and 7 (Write), and can aid Step 2 (Search) idea generation.

Key Considerations When Choosing

Task Focus

Web/academic search vs. analyzing your own uploaded files?

Output Needs

Structured reports, conversational summaries, brainstorming?

Cost & Access

Free tier limitations vs. paid plans.

File Uploads

Crucial for analyzing specific gathered papers?

Comparison of Selected Tools (as of April 2025 - Verify current details)

AI Tool | Typical Cost (USD/Month) | Deep/Pro Access Level | File Upload Support | Typical Output Style | Notes for Scoping Reviews (TIM Context)
Perplexity | ~$20 (Pro) | Free: ~5/day; Pro: 300+/day | Yes (Pro, up to 10 PDFs/prompt) | Detailed, highly cited, structured answers (~2-5k words) | Excellent start. Cost-effective, good web/academic research + file analysis. Allows academic focus.
Grok | ~$30 | Free: ~3/day; SuperGrok: 100+/day | Yes | Conversational, real-time web synthesis (~1-3k words) | Real-time web insights. Less focused on structured reports. Strong for X/Twitter tracking.
Gemini | ~$20 (Advanced) | Free: 5/mo (2.0); Adv: 20/day (2.5 Pro) | No (as of Apr 2025) | Detailed, cited reports, long outputs (10k+ words), research plan | Powerful synthesis. Lack of file upload limits analysis of specific gathered papers. Potential topic limits.
ChatGPT | $20 (Plus) / $200+ (Pro) | Plus: ~25/mo; Pro: ~250/mo | Yes (up to 10 files/prompt) | Detailed, report-like, flexible length (10k+ words), iterative | Most versatile. Excellent for structured drafts, analyzing uploaded files, brainstorming. Higher cost/limited access on lower tiers.

*Access levels/limits change frequently. Check provider.

Recommendation Strategy

1. Frequent Use & File Analysis: Perplexity offers a strong balance.

2. Deep Synthesis (Web Only): Gemini is powerful (if file upload is not needed).

3. Complex Analysis of Specific Files & Iteration: ChatGPT offers flexibility (if cost/access permits).

4. Experiment: Use free tiers/trials to test suitability.

5. Key Consideration (File Upload): Crucial difference! Perplexity & ChatGPT handle uploads; Gemini & Grok are currently web-only for deep search.

Part 2

The 9-Step Method

A systematic approach to producing high-quality scoping reviews of academic and grey literature with AI assistance.

Introduction to the Method

This section outlines a structured, 9-step approach for conducting scoping reviews in TIM using AI tools, including advanced "Deep Research" features where appropriate.

1. Question → 2. Search → 3. Select → 4. Extract → 5. Analyze → 6. Interpret → 7. Write → 8. Ethics → 9. Share

Planning Steps (1-3)

Focus on question formulation, comprehensive search strategies, and transparent selection criteria.

Analysis Steps (4-6)

Methodical data extraction, synthesis of findings, and interpretation with AI assistance.

Dissemination Steps (7-9)

Clear writing, ethical considerations, and effective sharing of findings.

Crucial Reminder

AI enhances efficiency but does not replace human expertise. Use AI for automation (summarizing, categorizing, initial exploration) but rely on your judgment for critical analysis, theoretical alignment, validation, and final decisions. AI outputs are drafts requiring scrutiny.

What is Critical Thinking?

The ability to analyze, evaluate, and synthesize information systematically to make informed decisions. In TIM, it involves questioning assumptions, assessing evidence, identifying biases, and applying logic to solve complex problems.

Step 1: Formulate the Review Question and Scope

Define a broad yet structured review question that will guide your entire research process. This foundational step sets the direction and boundaries for your scoping review.

Objective | Recommended AI Tool(s) | Key Actions | Expected Outputs
Define broad, structured review question | ChatGPT / Gemini / Claude.ai / Elicit / Scite Assistant | Generate/refine question variants; apply frameworks (PCC); suggest keywords | Structured question; keyword list; defined scope

AI's Role vs. Human Oversight

AI Can:
  • Generate question variations
  • Suggest related topics/keywords
  • Analyze literature for gaps
  • Identify key authors/papers

Use clear, contextual prompts!

You Must:
  • Ensure clarity/feasibility
  • Validate AI suggestions
  • Align with scope (broad but structured)
  • Incorporate theoretical frameworks
  • Iterate prompts if needed

Use Frameworks (PCC Recommended)

PCC: Population, Concept, Context

Helps structure the question for clarity and focus.

Example (Product Dev):
  • P: AI Startups
  • C: Scaling Challenges
  • C: Venture Acceleration Programs

Q: What are the challenges/success factors for scaling AI prototypes in startups within accelerators?

Other TIM Examples:
Digital Transformation: What organizational factors influence successful digital transformation in manufacturing SMEs?
Quantum Security: How are quantum-resistant cybersecurity solutions being commercialized across sectors?
Open Innovation: What collaboration models exist between corporations and startups in clean energy innovation?

Alternative Frameworks

CIMO: Context, Intervention, Mechanism, Outcome - For design-oriented research questions.

Example: In technology startups (C), how does implementing agile management (I) affect team collaboration mechanisms (M) to improve product development time (O)?

Recommended AI Tools for this Step

ChatGPT/Gemini/Claude

Best for brainstorming, refining phrasing, and generating keyword ideas.

"Help me formulate a scoping review question about [topic] in the field of technology innovation management. Use the PCC framework. My initial interest is in [specific area]."

Elicit

Useful for refining scope and identifying variables via literature analysis (especially good for tech policy/behavior topics).

Scite Assistant

Excellent for validating and reframing questions via citation links and evidence mapping.

Define the Scope

Be broad but manageable. Crucially, decide which grey literature to include (reports, theses, patents?).

Breadth vs. Depth
Timeframe (recent vs. historical)
Literature Types (academic, industry, etc.)
Geography & Stakeholder Perspectives

Outputs for Step 1

Clearly defined research question (using a framework)

Review objectives

Preliminary keyword/search term list (for academic & grey lit)

Selected search strategies/databases/AI tools

Defined scope (inclusion/exclusion criteria, source types, time limits, geography)

Step 2: Search for Articles (Including AI Deep Research)

Systematically retrieve comprehensive academic and grey literature using traditional methods and AI-powered research tools.

Objective | Recommended AI Tool(s) | Key Actions | Expected Outputs
Retrieve academic & grey literature | ChatGPT, Gemini, Perplexity, Elicit, Consensus, OpenRead, Google Advanced Search | Generate keywords/Boolean strings; run database/web searches; use Deep Research for exploration; identify grey lit sources; log searches | Documented search strategy; reference library; PRISMA-ScR diagram draft; refined query

Search Workflow Overview

1. Prepare

Define keywords (PCC), craft prompts (use domain-specific terms for grey lit).

2. Core Database Runs

Systematic searches in Scopus, WoS, IEEE Xplore, etc. (Academic) & Google Scholar (filtered), OpenGrey, Gov portals (Grey Lit). Export to reference manager.

3. Exploratory Enhancement (Optional)

Use AI Deep Research modes (Perplexity, ChatGPT, Grok) to find recent/hidden items. Add unique hits to reference manager, tag source type.

4. Capture & Manage

Import all to Zotero/Mendeley, de-duplicate, tag items ("grey").

5. Validate Breadth

Check hit count against expectations; refine if needed.

6. Document

Save search strings, dates, databases, update PRISMA flow diagram.

1. AI's Role vs. Human Oversight

AI Can:
  • Generate keyword/Boolean variations
  • Perform exploratory searches (web, some academic)
  • Identify potential databases/sources
  • Suggest related concepts
You Must:
  • Validate keywords (use established vocabularies)
  • Refine AI queries (check relevance/completeness)
  • Select appropriate databases (esp. for grey lit AI might miss)
  • Manually adjust Boolean operators
  • Use Deep Research strategically for exploration
  • Rely on systematic searches for core documentation (PRISMA)

Use clear, contextual prompts specifying source types!

2. Develop Search Strategy

  • Define key terms, synonyms, Boolean operators (AND, OR, NOT). Ask AI for suggestions.
  • Adapt strategy for each database's syntax.
  • Define initial inclusion/exclusion criteria (used in Step 3).
TIM Example Boolean String:

("open innovation" OR "collaborative innovation" OR "co-creation") AND ("tech* startup*" OR "digital venture*") AND ("market entr*" OR commerciali*) AND ("success factor*" OR barrier* OR challenge*)
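Keeping each PCC concept as a list of synonyms makes it easy to regenerate the string when you adapt it to a new database's syntax. A minimal sketch, using term lists taken from the example above (the quoting rule of wrapping multi-word phrases is a simplification; check each database's own syntax):

```python
def boolean_string(*concept_groups):
    """Join synonym groups: OR within a concept, AND between concepts."""
    groups = []
    for terms in concept_groups:
        # Quote multi-word phrases so databases treat them as exact phrases.
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

query = boolean_string(
    ["open innovation", "collaborative innovation", "co-creation"],
    ["tech* startup*", "digital venture*"],
    ["market entr*", "commerciali*"],
)
print(query)
```

Editing one synonym list then regenerating the string (rather than hand-editing each database's query) also keeps your search log consistent across databases.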

3. Recommended AI Tools for Search

ChatGPT/Gemini

Keyword/Boolean generation, summarizing themes/findings.

Consensus

Assess relevance/consensus across studies.

Perplexity

Retrieve academic & non-academic, find connections, latest findings. Deep Research mode available.

Elicit

Extract insights (abstracts, methods), prioritize studies, compare side-by-side.

OpenRead

Retrieve based on meaning, extract relevant parts, summarize complex papers.

Gemini/Grok (Deep Research)

Broad web search, real-time synthesis.

Deep Research Caveat

Cannot access subscription databases (Gartner etc.). Lacks transparency, potential bias. Use for open web/lead gen, validate rigorously. Retrieve paywalled reports via library access.

4. Execute Academic Literature Search

  • Systematically search core databases (Scopus, WoS, IEEE, etc.).
  • Use refined Boolean strings, subject headings (MeSH, IEEE Thesaurus).
  • Conduct backward/forward citation searching on key articles.
  • Use AI tools (Perplexity Academic, Elicit) to supplement systematic search.
  • Export results to reference manager.
Scopus
Web of Science
IEEE Xplore
ACM Digital Library

5. Execute Grey Literature Search

Crucial for real-world TIM insights (reports, patents, whitepapers). Systematically target sources.

Web Search Engines

Use advanced operators (site:, filetype:, "") for targeted results.

Example search:

"blockchain adoption" filetype:pdf site:gov OR site:edu OR site:org

Tools: Google Advanced Search, Bing, DuckDuckGo

6. Crafting Search Queries (Prompts)

Structure Your Prompts

"Scoping review on [Topic]. PCC is [P, C, C]. Suggest Boolean strings for [Database] using keywords/synonyms for [Years]."

Iterate for Precision

"Refine previous query to be more specific about [Aspect], exclude [Irrelevant Area]."

Use High-Context Prompts

Example Deep Research prompt:

"I'm conducting a scoping review for the Technology Innovation Management program on sustainable blockchain applications. I need to find grey literature from 2020-2025 including industry reports, government policies, and corporate whitepapers focusing on energy-efficient solutions and carbon footprint reduction. Include exact citations with URLs where possible. Prioritize reputable sources."

7. Manage Search Results

  • Organize in Zotero/Mendeley.
  • Use built-in tools to remove duplicates.
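Reference managers do this for you, but the dedup logic itself is simple enough to sketch: normalize an identifier (DOI when available, otherwise title) and keep the first record seen. The records below are illustrative:

```python
def deduplicate(records):
    """Keep the first record per normalized DOI (or title when DOI is missing)."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or rec["title"]).strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Open Innovation in SMEs", "doi": "10.1000/xyz1"},
    {"title": "Open innovation in SMEs", "doi": "10.1000/XYZ1"},  # same paper, different casing
    {"title": "Scaling AI Startups", "doi": None},                # grey lit, no DOI
]
print(len(deduplicate(records)))  # 2
```

Grey literature often lacks DOIs, which is why the fallback to a normalized title matters; spot-check any title-only merges manually.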
Refine Iteratively

Evaluate corpus size/relevance and adjust your strategy accordingly:

Too Few Results
  • Broaden keywords
  • Remove restrictive operators
  • Expand date range
  • Expand grey literature search
Too Many Results
  • Add precision terms
  • Use NOT operators
  • Narrow date range
  • Apply filters (language, doc type)

8. Document the Search Process

  • Use PRISMA-ScR Flow Diagram

    Track numbers identified, screened, included, excluded.

    Identification (n = 1050)
      → Excluded (n = 200)
    Screening (n = 850)
      → Excluded (n = 730)
    Included (n = 120)
  • Keep detailed log

    Databases, terms, dates, results (for reproducibility).

  • Report search limits

    Dates, language, other constraints.

  • Justify selection criteria rationale

    Explain why certain databases or sources were chosen.
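The flow diagram's numbers must stay arithmetically consistent as you update them, which is easy to check automatically. A minimal sketch using the example counts above; the two exclusion stages are generic labels, so adapt them to your own diagram:

```python
def prisma_counts(identified, excluded_stage1, excluded_stage2):
    """Derive PRISMA-ScR flow numbers and check they stay consistent."""
    screened = identified - excluded_stage1
    included = screened - excluded_stage2
    assert 0 <= included <= screened <= identified, "inconsistent flow counts"
    return {"identified": identified, "screened": screened, "included": included}

# Numbers from the example flow diagram above:
flow = prisma_counts(identified=1050, excluded_stage1=200, excluded_stage2=730)
print(flow)  # {'identified': 1050, 'screened': 850, 'included': 120}
```

Running this check each time you update the log catches transcription errors before they reach the published diagram.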

9. Outputs for Step 2

Comprehensive reference database in Zotero/Mendeley

Tagged academic and grey literature sources

Search process documentation (including logs and AI prompts used)

PRISMA-ScR flow diagram started (will be completed in Step 3)

Initial corpus of literature ready for screening

Step 3: Select Articles

Systematically screen and select relevant studies based on pre-defined criteria.

Objective | Recommended AI Tool(s) | Key Actions | Expected Outputs
Efficiently screen & select studies | Rayyan AI, AACODS Checklist (for Grey Lit), Zotero | Title/abstract screen; full-text review; dual review; apply criteria; appraise grey lit | Final eligible article set; updated PRISMA-ScR diagram; exclusion log

1. AI's Role vs. Human Oversight:

AI Can (e.g., Rayyan):
  • Classify/rank by relevance (based on keywords/training)
  • Summarize abstracts
  • Detect duplicates
You Must:
  • Review/apply inclusion/exclusion criteria manually (AI can misclassify)
  • Assess actual relevance/quality (AI ranks algorithmically)
  • Identify potential AI bias (e.g., favoring recent studies)
  • Use dual human screening for accuracy

Conduct manual spot-checks

2. Screening Strategy (Two Phases):

1. Title/Abstract Screening

Initial filter using inclusion/exclusion criteria. Remove clearly irrelevant studies. Use Rayyan AI for efficiency.

2. Full-Text Screening

Retrieve full texts of potentially relevant studies. Assess eligibility against criteria more deeply. Use Rayyan for sorting/tagging. Justify exclusions.

3. Using Rayyan AI for Screening:

  • Upload: Import references (RIS/BibTeX).
  • Criteria: Set inclusion/exclusion rules.
  • Collaboration: Supports multiple independent reviewers.
  • Screening Features: AI prioritization, custom tagging, blind review mode.
  • Dual Screening (Crucial): Two reviewers screen independently; resolve conflicts via discussion or third reviewer. Minimizes bias, increases reliability.
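The conflict-resolution step in dual screening can be made concrete: compare the two reviewers' independent decisions and list the records that need discussion. The record IDs and decision labels below are illustrative:

```python
def find_conflicts(reviewer_a, reviewer_b):
    """Return record IDs where two independent screeners disagreed.

    A record missing from reviewer_b also counts as a conflict to resolve.
    """
    return sorted(rid for rid in reviewer_a
                  if reviewer_a[rid] != reviewer_b.get(rid))

# Independent title/abstract decisions, keyed by record ID:
a = {"rec1": "include", "rec2": "exclude", "rec3": "include"}
b = {"rec1": "include", "rec2": "include", "rec3": "exclude"}
print(find_conflicts(a, b))  # ['rec2', 'rec3']
```

Screening tools like Rayyan surface these conflicts in their interface; the point of the sketch is that the conflict list, not either reviewer's individual decisions, is what goes to discussion or a third reviewer.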


4. Appraising Grey Literature (Important Step):

While formal quality assessment isn't typical for scoping reviews, evaluate grey literature credibility due to variability.

Consider:
  • Authority (source credibility)
  • Objectivity (method explained? bias? funding source?)
  • Relevance (addresses question?)
  • Timeliness (current?)

Tool: Use a structured checklist like AACODS (Authority, Accuracy, Coverage, Objectivity, Date, Significance) for appraisal.

Document your appraisal process/findings.

5. Final Selection & Organization:

  • Export the final set of included studies (academic & grey).
  • Organize in reference manager (Zotero) using tags (relevance, source type, theme).

6. Track and Document:

  • Maintain/Update the PRISMA-ScR flow diagram meticulously.
  • Keep a clear record of excluded studies with reasons.
  • Document any changes to inclusion/exclusion criteria during the process.

7. Outputs for Step 3:

  • Refined set of eligible articles.
  • Completed PRISMA-ScR flow diagram.
  • List of excluded articles with reasons.
  • Finalized inclusion/exclusion criteria.
  • Records of title/abstract and full-text screening decisions.
Step 4: Extract Data

Systematically capture relevant information from selected studies using a structured approach.

Step 4 Summary: Actions, Tools, Outputs
Objective | Recommended AI Tool(s) | Key Actions | Expected Outputs
Extract & organize key study insights | ChatGPT, Elicit, SciSpace, Research Rabbit, Litmaps, VOSviewer | Use data-charting forms; auto-extract details (verify!); summarise/group studies; generate visuals (tables, maps) | Completed extraction tables; final extraction template; visual snapshots; method log

1. AI's Role vs. Human Oversight

AI Can:
  • Extract themes/concepts/characteristics
  • Summarize findings across papers
  • Group studies preliminarily
You Must:
  • Verify all extracted information (AI misinterprets tables/figures/stats)
  • Check contextual accuracy (AI summaries miss nuance)
  • Ensure methodological consistency is understood
  • Refine AI categories to align with review question

(Manually extract data from a sample for comparison. Adapt extraction form for less structured grey lit.)

2. Identify Key Data Elements

Extraction Checklist:
  • Citation details (Author, Year, Title, DOI, etc.)
  • Study Type / Methodology
  • Population / Sample details
  • Key Concepts / Themes
  • Main Findings / Contributions
  • Knowledge Gaps Identified

(Adapt categories for grey lit - e.g., Recommendations, Publisher Stance, Scope)

Be prepared to refine categories iteratively.
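The extraction checklist above maps naturally onto a structured record, which keeps charting consistent across reviewers and across academic and grey sources. A minimal sketch; the field names and sample values are illustrative:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ExtractionRecord:
    """One row of the data-charting form, mirroring the checklist above."""
    citation: str                 # Author, Year, Title, DOI
    study_type: str               # methodology or document type
    population: str               # sample or organizational context
    key_concepts: list = field(default_factory=list)
    main_findings: str = ""
    gaps_identified: str = ""
    source_kind: str = "academic"  # or "grey" (adapt fields for grey lit)

rec = ExtractionRecord(
    citation="Doe (2024). Scaling AI Startups. doi:10.1000/demo",
    study_type="Case study",
    population="Early-stage AI ventures in accelerators",
    key_concepts=["scaling challenges", "venture acceleration"],
)
print(asdict(rec)["study_type"])  # Case study
```

Because each record converts cleanly to a dictionary, a list of them exports directly to the spreadsheet or CSV extraction table expected in this step's outputs.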

Step 5: Analyze and Synthesize Data

5. AI-Assisted Analysis Tools

Tools like Custom GPTs, Converse, SciSpace, Perplexity, and Gemini can assist with various analysis tasks:

  • Breaking down information for structural analysis (e.g., identifying components of business models discussed).
  • Synthesizing data from multiple sources (leveraging Deep Research features where available, always verifying outputs).
  • Identifying potential knowledge gaps based on the synthesized literature (requires careful validation).
  • Categorizing studies based on refined criteria or creating preliminary summary tables for review.
TIM Example:

Using a Custom GPT trained on TIM frameworks to analyze extracted data on startup funding rounds. AI might initially group findings by 'Funding Stage' and 'Investor Type'. The researcher then refines these themes, potentially adding 'Geographic Focus' or 'Technology Sector', ensuring the synthesis aligns with the review's goal of understanding regional investment patterns in deep tech.

Provide clear context and specific instructions in prompts. Be prepared to iterate and refine prompts if initial AI outputs are not satisfactory. Always critically evaluate AI suggestions.

6. Track and Document

  • Maintain clear records of how data was organized (e.g., final thematic tables, figures, visual aids).
  • Document key decisions made during thematic categorization and synthesis, including rationale.
  • Ensure transparency by linking synthesized findings back to specific references or supporting materials.
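The decision record described above can be as simple as an append-only CSV. A minimal sketch, with assumed column names (the specific fields are illustrative):

```python
import csv
import io
from datetime import date

# Illustrative columns for a synthesis decision log.
FIELDS = ["date", "decision", "rationale", "supporting_refs"]

log = io.StringIO()
writer = csv.DictWriter(log, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "date": date(2025, 4, 28).isoformat(),
    "decision": "Merged 'Funding Stage' and 'Round Size' into one theme",
    "rationale": "Studies rarely distinguish the two; merging avoids sparse cells",
    "supporting_refs": "Doe 2024; Lee 2023",
})
header = log.getvalue().splitlines()[0]
```

Each row links a categorization decision to its rationale and the references that support it, which is exactly the transparency trail reviewers will ask for.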

7. Outputs for Step 5

Expected Deliverables:
  • Structured descriptive summary of study characteristics and trends.
  • Thematic synthesis report (narrative grouping studies by theme, discussing patterns).
  • Visual representations (refined tables, concept maps, networks relevant to synthesis).
  • A clearly articulated list of identified knowledge gaps, supported by the synthesis.

Step 6: Interpret & Critically Evaluate Results

Draw meaningful insights, implications, and identify limitations from the synthesized data.

Step 6 Summary: Actions, Tools, Outputs
Objective Recommended AI Tool(s) Key Actions Expected Outputs
Draw meaningful insights & implications ChatGPT, Gemini, Perplexity Summarize findings/trends, Propose implications (validate!), Check inconsistencies, Align with theory Synthesized discussion section, Research gap list, Limitations section, Verified conclusions

1. AI's Role vs. Human Oversight

AI Can:
  • Generate summaries of synthesized findings
  • Propose potential applications or recommendations
  • Flag inconsistencies between different study results
  • Compare results across different study groups or contexts
You Must:
  • Assess contextual accuracy (AI lacks real-world depth)
  • Ensure logical coherence (AI might confuse frequency vs. significance)
  • Critically validate AI implications (AI lacks domain expertise)
  • Integrate theoretical frameworks and prior knowledge
  • Refine interpretations of contradictions/limitations using expertise
  • Critically evaluate AI suggestions for TIM relevance

(Treat AI interpretations as hypotheses to test. Discuss with experts.)

2. Translating Research into Action (TIM Focus)

Leverage the synthesized findings from your review to inform practical TIM outcomes:

Identify New Product/Service Opportunities

Pinpoint trends, unmet needs, or gaps identified in the review.

Example: Review reveals gap in AI diagnostic tools for healthcare in underserved regions, suggesting a market opportunity.

Inform Technology Adoption Decisions

Evaluate the feasibility, risks, and benefits of adopting new technologies based on synthesized evidence.

Example: Review on blockchain implementations helps a logistics firm assess its suitability for supply chain tracking.

Develop Competitive Intelligence

Synthesize information on competitor strategies, business models, and market shifts.

Example: Review maps innovations in quantum security, highlighting key players and emerging threats/opportunities.

Identify Potential Partners or Collaborators

Use citation analysis, co-authorship networks, or thematic analysis to identify leading researchers, institutions, or companies.

Example: Review identifies key research hubs in green tech, suggesting potential R&D partners for a manufacturing firm.

3. Explain Significance

  • Clearly relate the synthesized findings back to the original review question and objectives.
  • Discuss the contributions of the review to the broader TIM field or specific area of focus.
  • Identify the key takeaways and implications for practical application, policy development, or future research agendas.

4. Identify Limitations

Acknowledge the boundaries and potential weaknesses of the review:

  • Data Gaps: Discuss areas where literature was sparse or underdeveloped.
  • Publication Bias: Consider the potential impact of unpublished studies or negative results being less represented.
  • Study Diversity: Assess limitations related to the geographic scope, demographic focus, or methodological approaches of the included studies.
  • Review Process: Note any constraints related to search strategy, language restrictions, or the timeframe covered.

5. AI-Assisted Interpretation - Use with Caution

While AI tools (ChatGPT, Jenni, Perplexity, Gemini) can help draft interpretations, exercise extreme caution:

Over-reliance Risk:
  • AI summaries can misinterpret nuances, oversimplify complex findings, hallucinate information, or embed biases from their training data.
  • AI lacks true critical evaluation skills and domain-specific understanding.
  • This risk is amplified with complex outputs from "Deep Research" features.
Mitigation Strategies:
  • Always verify AI-generated insights against the synthesized data and original sources.
  • Cross-check key interpretations with multiple sources or methods.
  • Apply your domain expertise and critical thinking to refine and contextualize AI suggestions.
  • Remember: AI complements, it does not replace, human critical evaluation.
Example ChatGPT Prompt (Drafting Interpretation - Use Cautiously!)

Based on the provided synthesis report [or key findings summary], draft a discussion section that addresses:
  • The main implications of these findings for [Your Specific TIM Area, e.g., 'early-stage tech investment'].
  • How these findings relate to [Mention a specific theory or framework, e.g., 'the Technology Acceptance Model'].
  • Potential recommendations for [Target Audience, e.g., 'venture capitalists'].

Ensure the discussion connects directly to the synthesized evidence. Remember to critically evaluate and refine this draft output.

[Reference Synthesis Report/Summary]

Fact-check and rigorously refine ALL AI-generated interpretations before incorporating them into your review.

6. Maintain Broad Perspective

  • Remember that scoping reviews aim to map the extent, range, and nature of evidence, not typically to draw firm conclusions about intervention effectiveness (like systematic reviews).
  • Focus the interpretation on identifying trends, knowledge gaps, and directions for future research or practice, rather than precise effect sizes.
  • Where the evidence is contradictory or diverse, present multiple perspectives rather than forcing a single conclusion.

7. Outputs for Step 6

Expected Deliverables:
  • A synthesized discussion section interpreting the key findings and their significance in relation to the review question and TIM context.
  • A refined list of specific knowledge gaps and concrete directions for future research.
  • A clear, well-articulated limitations section for the scoping review.
  • Refined, validated conclusions derived from the evidence through critical human evaluation.

Step 7: Write the Scoping Review

Goal: Draft a clear, coherent manuscript following reporting guidelines (e.g., PRISMA-ScR).

Step 7 Summary: Actions, Tools, Outputs
Objective Recommended AI Tool(s) Key Actions Expected Outputs
Draft clear, PRISMA-ScR-aligned manuscript ChatGPT, Gemini, Perplexity, Grammarly, Jenni Auto-draft sections (refine!), Check citations/plagiarism, Iterate edits Complete manuscript draft, Proofread/formatted text, Transparent methods & AI use statement

1. AI's Role vs. Human Oversight

AI Can:
  • Generate initial drafts (Intro, Methods, Discussion sections)
  • Suggest structural outlines based on guidelines (e.g., PRISMA-ScR)
  • Assist with paraphrasing or improving clarity (use ethically!)
  • Check grammar and style (e.g., Grammarly)
You Must:
  • Refine all AI-generated text for coherence, flow, and accuracy
  • Ensure appropriate academic rigor, tone, and critical perspective
  • Meticulously verify all citations and references (AI frequently hallucinates sources)
  • Check for originality and potential plagiarism (even in paraphrased text)
  • Maintain a consistent authorial voice throughout the manuscript

(Use clear prompts specifying section requirements, target audience, desired tone, and background context. Treat AI output as a starting point only.)

2. Turn Findings into Valuable TIM Assets

Structure your writing to facilitate the translation of review findings into practical and valuable outputs for Technology Innovation Management:

Venture Pitches

Product Opportunities

Tech Adoption Strategies

Market Research

Competitive Intelligence

Partner Identification

Journal Publications

TIM Theses/Projects

Policy Recommendations

Training Materials

Grant Proposals

3. Templates for Actionable Insights

Consider using structured templates derived from your review to present actionable insights. (Placeholders below - actual templates could be downloadable files or interactive elements).

Template: New Product Opportunity Analysis

Table Structure: Need | Existing Solutions | Limitations | Trends | New Idea | Supporting Evidence (Ref)

Opportunity Template Placeholder
Template: Technology Adoption Decision Matrix

Table Structure: Technology | Benefits | Risks | Est. Cost | Implementation Challenges | Relevant Examples (Ref)

Adoption Matrix Placeholder
Template: Competitive Intelligence Snippet

Table Structure: Competitor | Key Technology | Business Model | Strengths | Weaknesses | Market Position (Ref)

CompIntel Template Placeholder
Template: Partnership & Collaboration Tracker

Table Structure: Organization | Area of Expertise | Potential Contributions | Preferred Collaboration Type | Contact Info (Ref)

Partnership Tracker Placeholder

These templates help translate dense review findings into focused, actionable formats suitable for TIM decision-making.

4. Solicit Feedback (AI & Human)

Employ an iterative feedback process combining AI checks with essential human expertise:

AI Feedback (e.g., ChatGPT, Gemini)
  • Check for clarity, coherence, and grammatical errors.
  • Assess coverage against requirements (e.g., PRISMA sections).
  • Identify potential inconsistencies in argumentation.
  • Treat AI feedback as suggestions, not directives.
Expert Feedback (Faculty, Peers, Industry)
  • Provides essential contextual depth and domain knowledge.
  • Evaluates strategic relevance and practical implications (TIM focus).
  • Offers nuanced interpretation and critical evaluation AI cannot replicate.
  • Crucial for validating the review's contribution and impact.
Iterative Refinement Process:
  1. Draft manuscript sections (potentially with AI assistance).
  2. Perform AI checks for clarity, grammar, and basic coherence.
  3. Submit draft to human experts (faculty, peers, industry contacts) for substantive review.
  4. Carefully integrate expert feedback, resolving contradictions and refining arguments.
  5. Conduct final proofreading and formatting checks.

5. Structure the Manuscript (PRISMA-ScR)

Adhere to standard reporting guidelines like PRISMA-ScR for transparency and rigor. Key sections include:

  • Title: Clearly indicate a scoping review.
  • Abstract: Structured summary (Background, Objectives, Methods, Results, Conclusion).
  • Introduction: Provide background context, state objectives, and explain the rationale for the scoping review.
  • Methods: Detail eligibility criteria, information sources, search strategy, selection process, data charting process, and synthesis methods. Clearly describe the specific AI tools used, prompts (if feasible), and the extent of human oversight/validation at each stage.
  • Results: Report the selection process (ideally with a PRISMA flow diagram), characteristics of included sources, and synthesize the findings, often organized thematically.
  • Discussion: Summarize key findings, discuss limitations of the review process and the evidence, interpret results in the context of existing literature/theory, identify knowledge gaps, and suggest implications for practice, policy, or future research (especially TIM relevance).
  • Conclusion: Briefly summarize main findings and their implications.
  • Funding & Conflicts of Interest: Disclose relevant information.
  • References: List all cited sources accurately.
  • Appendices: May include search strategies, data extraction forms, etc.

Refer to the official PRISMA-ScR checklist and explanation document for detailed guidance.
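The counts reported in a PRISMA flow diagram must reconcile arithmetically from identification through inclusion. A sketch of that bookkeeping with made-up numbers (the sources and figures are illustrative only):

```python
# PRISMA-ScR flow-diagram bookkeeping with illustrative numbers.
records_identified = {"Scopus": 412, "Web of Science": 367, "grey literature": 58}
total_identified = sum(records_identified.values())            # 837

duplicates_removed = 149
screened = total_identified - duplicates_removed               # 688

excluded_at_title_abstract = 590
full_text_assessed = screened - excluded_at_title_abstract     # 98

full_text_excluded = 61
included = full_text_assessed - full_text_excluded             # 37

print(f"Identified {total_identified}, screened {screened}, included {included}")
```

Checking these subtractions programmatically (or in a spreadsheet) before drawing the diagram catches the count mismatches that reviewers commonly flag.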

6. AI-Assisted Writing Tools

Leverage AI tools strategically, always maintaining critical oversight:

  • Drafting Assistance: ChatGPT, Gemini, Perplexity, Jenni can generate initial text for sections. Requires extensive manual refinement, fact-checking, and citation verification.
  • Readability, Grammar & Style: Grammarly and similar tools help improve clarity and correctness.
  • Plagiarism Checking: Use institutional tools (e.g., Turnitin) or features within tools like Grammarly to check for unintentional plagiarism, especially after using AI for paraphrasing.
  • Citation Management: While not strictly AI writing tools, reference managers (Zotero, Mendeley, EndNote) are essential for organizing sources and ensuring accurate citations.

Always fact-check any claims or summaries generated by AI against your evidence base. Align all AI-generated content with the actual findings and interpretations derived from your analysis.
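One mechanical check that catches many hallucinated citations is reconciling in-text citations against the reference list. A toy sketch: the regex assumes simple single-author "(Author, Year)" citations and will miss other formats, so it supplements rather than replaces manual verification.

```python
import re

# Hypothetical draft text and reference list; the reconciliation is the point.
draft = "AI screening halves workload (Doe, 2024), though accuracy varies (Lee, 2023)."
references = ["Doe (2024). AI-assisted screening...", "Smith (2022). Scoping methods..."]

cited = set(re.findall(r"\(([A-Z][a-z]+), (\d{4})\)", draft))
listed = {(m.group(1), m.group(2))
          for r in references
          for m in [re.match(r"([A-Z][a-z]+) \((\d{4})\)", r)] if m}

missing = cited - listed  # cited in text but absent from the reference list
print(sorted(missing))    # candidate hallucinated or forgotten citations
```

Here `missing` would contain `('Lee', '2023')`: a citation the AI may have fabricated, or a reference the author forgot to add. Either way, it needs a human check against the source.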

7. Revision and Quality Control

Implement rigorous quality control through multiple rounds of human revision:

  • Conduct thorough proofreading for grammatical errors, typos, and awkward phrasing.
  • Ensure clarity, logical flow, and consistency throughout the manuscript.
  • Cross-check reported findings, themes, and interpretations against the original extracted data and articles.
  • Verify that the methodology section accurately and transparently describes the entire process, including AI usage and limitations.
  • Confirm adherence to all chosen reporting guidelines (e.g., PRISMA-ScR checklist).

8. Report and Document

Maintain meticulous documentation throughout the writing process:

  • Clearly document all methodological steps and decisions, aligning with PRISMA-ScR or JBI guidelines as appropriate.
  • Retain records of search strategies, screening decisions, and data extraction forms.
  • Be transparent about the specific AI tools used for each task (e.g., screening, data extraction, writing assistance), including version numbers if possible, and describe how human oversight was maintained. Consider adding a dedicated subsection or statement on AI tool usage in the methods.

9. Outputs for Step 7

Expected Deliverables:
  • A complete, structured manuscript draft adhering to PRISMA-ScR guidelines.
  • Refined and coherent content that integrates AI assistance (where used) with rigorous manual validation and writing.
  • A well-documented methodology section detailing all procedures, including transparent reporting of AI tool usage and human oversight.
  • A thoroughly revised, proofread, and formatted final version ready for submission or dissemination.
  • (Potentially) Derived TIM assets like opportunity analyses or competitive intelligence snippets based on templates.

Step 8: Incorporate Ethical Considerations

Goal: Ensure ethical integrity, transparency in data handling and AI usage, and adherence to guidelines.

Step 8 Summary: Actions, Tools, Outputs
Objective Recommended AI Tool(s) Key Actions Expected Outputs
Ensure ethical integrity & transparency Turnitin, Grammarly, ChatGPT (bias scan), Perplexity Detect plagiarism/bias, Document AI use/data handling, Check against guidelines AI usage statement, Plagiarism/bias report, AI audit trail, Ethical checklist completion

1. AI's Role vs. Human Oversight

AI Can:
  • Analyze text for potential bias patterns (requires careful interpretation).
  • Potentially flag transparency issues based on inputs.
  • Recommend relevant ethical guidelines or frameworks.
  • Scan literature for discussions on ethical considerations related to the topic.
You Must:
  • Evaluate AI-identified biases (human judgment is essential for context).
  • Ensure responsible AI use (review outputs for accuracy, integrity, and potential harm).
  • Verify AI-suggested frameworks (AI lacks genuine moral reasoning).
  • Check for integrity issues AI might miss (e.g., funding conflicts, undeclared biases).
  • Assess inclusivity and fairness implications.
  • Prepare the AI-use disclosure statement for the manuscript.

(Engage ethics boards or experts if needed. Disclose all AI contributions transparently and accurately.)

2. Checklist for Ethical AI Use

Use this checklist throughout the review process to ensure responsible AI integration:

3. Ethical Implications of AI in TIM

Consider the broader ethical implications when using AI, particularly within the TIM context:

Bias & Fairness
AI models trained on biased data can perpetuate or amplify inequalities in TIM outcomes (e.g., biased funding recommendations, skewed market analysis, unfair technology adoption impacts).
Mitigation: Conduct bias audits on data and AI outputs, intentionally include diverse data sources, maintain rigorous human oversight and critical evaluation of AI suggestions.
Transparency & Accountability
"Black box" AI algorithms can make it difficult to understand how conclusions are reached, hindering accountability, especially for high-stakes TIM decisions (investment, strategy).
Mitigation: Prefer explainable AI (XAI) methods where possible, meticulously document the AI process, clearly state the limitations of AI interpretations, ensure human decision-makers remain accountable.
Data Privacy & Security
Using AI tools, especially cloud-based ones, to process sensitive TIM data (e.g., proprietary research, market intelligence, user data) poses risks of data breaches, IP leakage, or misuse.
Mitigation: Adhere to strict cybersecurity protocols, use anonymization or pseudonymization techniques, ensure compliance with data protection regulations (e.g., GDPR), carefully vet AI tool privacy policies. Avoid uploading sensitive data to public AI platforms.
Impact on Employment & Expertise
Automation through AI may displace certain research or analysis roles. Over-reliance on AI can also potentially dull critical thinking and deep analytical skills among researchers and practitioners.
Mitigation: Focus on upskilling and reskilling, position AI as a collaborator to augment human expertise rather than replace it, foster a culture of critical engagement with AI outputs.

4. AI Transparency & Documentation

Ensure complete transparency regarding AI usage:

  • Explicitly disclose the use of AI tools in the methods section, specifying which tools were used for which tasks (e.g., "ChatGPT 4.0 was used to assist in drafting the initial summary of themes identified during manual synthesis").
  • Document AI's specific influence on the review process and outcomes.
  • Reiterate that all AI-generated content (text, summaries, data points) underwent human verification and refinement.
  • Include appropriate disclaimers or references for AI outputs where necessary, following institutional or publisher guidelines.

5. Prevent Plagiarism & Misinformation

Actively guard against incorporating plagiarized or inaccurate information from AI:

  • Utilize plagiarism detection software (e.g., Turnitin, Grammarly Premium) on any substantial text generated or heavily modified by AI.
  • Conduct rigorous manual review and fact-checking of all AI outputs against primary sources and your extracted data.
  • Implement bias assessment techniques (qualitative review, comparison across sources) to identify and address potential slant or misrepresentation in AI summaries.
  • Be particularly skeptical of citations provided by AI, as these are frequently inaccurate or fabricated (hallucinated).

6. Mitigate Bias

Proactively address potential biases in both the literature and the AI tools:

  • Make conscious efforts to include diverse research perspectives, considering geographic regions, author demographics (if possible), languages (using translation tools cautiously), and publication types (including grey literature).
  • Critically evaluate the potential biases inherent in the AI tools themselves (based on their training data and algorithms) and how these might influence outputs.
  • Acknowledge identified biases and their potential impact in the limitations section of the review.

7. Ethical Stakeholder Input

If the review involves input from human stakeholders (e.g., through interviews, surveys supplementing the literature):

  • Obtain necessary ethical approvals from institutional review boards (IRBs) or equivalent bodies.
  • Ensure informed consent is properly obtained.
  • Maintain strict confidentiality and data privacy for participants.

8. Compliance

Ensure adherence to relevant ethical codes and protocols:

  • Follow your institution's specific ethics protocols and AI usage policies.
  • Adhere to professional codes of conduct (e.g., ACM Code of Ethics for computing professionals).
  • Avoid using AI tools known to store or misuse personal, proprietary, or confidential data inappropriately. Check terms of service.
  • Consider registering your review protocol prospectively (e.g., on OSF, PROSPERO) to enhance transparency.
  • Maintain a full audit trail or log documenting AI contributions, prompts used, and verification steps (an "AI Usage Log").
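The "AI Usage Log" mentioned above can be a simple structured file appended to as you work. A minimal sketch; the field names are illustrative, not a standard, and should be adapted to your institution's disclosure requirements.

```python
from datetime import datetime, timezone

def log_ai_use(entries, tool, version, task, prompt, verification):
    """Append one AI-usage entry to the audit trail (fields are illustrative)."""
    entries.append({
        # A fixed timestamp is used here for illustration; use datetime.now() in practice.
        "timestamp": datetime(2025, 4, 28, tzinfo=timezone.utc).isoformat(),
        "tool": tool,
        "version": version,
        "task": task,
        "prompt": prompt,
        "verification": verification,
    })
    return entries

audit_trail = []
log_ai_use(audit_trail, "ChatGPT", "4.0", "draft theme summary",
           "Summarize the five themes in the attached extraction table.",
           "Manually compared against extraction table; two claims corrected.")
```

Recording the verification step alongside each prompt is what turns the log into evidence of human oversight, not just a list of AI calls.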

9. Data Privacy

Protect data privacy throughout the review process:

  • Comply with applicable data protection regulations (e.g., GDPR, CCPA).
  • Avoid inputting identifiable personal data or sensitive proprietary information into public AI tools unless their privacy and security measures are explicitly verified and deemed adequate.
  • Keep secure records of data handling procedures, especially if dealing with non-public information.

10. Outputs for Step 8

Expected Deliverables:
  • A clear and transparent AI usage statement incorporated into the methods section of the manuscript.
  • Internal documentation: Plagiarism checker reports and notes on bias assessment for AI-generated content.
  • A documented audit trail or "AI Usage Log" detailing AI tools, prompts, versions, dates, and verification actions.
  • Completed internal compliance checklist confirming adherence to institutional and ethical guidelines related to AI and data handling.

Step 9: Disseminate Findings

Goal: Effectively share review results with relevant academic, industry, and public audiences.

Step 9 Summary: Actions, Tools, Outputs
Objective Recommended AI Tool(s) Key Actions Expected Outputs
Communicate results to diverse audiences ChatGPT, Perplexity, Gemini, Canva Generate summaries/visuals, Recommend outlets, Tailor content for blogs/policy briefs Journal article/conference package, Policy brief/summary, Open-science data, Engagement materials (posts, visuals)

1. AI's Role vs. Human Oversight

AI Can:
  • Generate initial drafts of summaries for different audiences (abstracts, blog posts, social media).
  • Create preliminary visuals like infographics or slide deck outlines (e.g., using AI features in Canva).
  • Suggest potential journals, conferences, or other dissemination outlets based on keywords.
  • Help simplify complex findings for lay audiences.
  • Structure content for specific platforms (e.g., Twitter thread).
You Must:
  • Ensure accuracy and nuance in all AI-generated summaries and simplifications.
  • Verify the relevance, credibility, and suitability of AI-suggested dissemination outlets.
  • Maintain academic and ethical integrity (proper citation, disclosure of AI writing aid per guidelines).
  • Strategically tailor messaging and format for specific target audiences (researchers, industry, policy-makers, public).
  • Ensure communication is ethical, inclusive, and avoids perpetuating harmful stereotypes.
  • Translate complex findings into actionable, meaningful recommendations appropriate for the audience.

(Use AI outputs as a starting point for brainstorming and drafting, then refine heavily with human expertise and audience awareness.)

2. TIM-Specific Outputs: Research Vignettes & Pitches

Translate your review findings into formats directly usable in TIM contexts:

Research Vignette (for Industry/Investors)

A concise summary highlighting key findings, their relevance, potential impact, and applications for a non-academic audience.

Template Structure:
  • Compelling Title: Focus on the core insight or opportunity.
  • Background/Problem: Briefly state the context and challenge addressed.
  • Key Findings: Bullet points summarizing the most impactful results from the review.
  • Industry Impact/Opportunity: Explain the relevance and potential applications for businesses or investors.
  • Next Steps/Call to Action: Suggest potential follow-up actions or research.
  • Contact Information: Your details for follow-up.
Vignette Template Placeholder/Link
Pitch Presentation (for Ventures/Tech Solutions)

Structure insights derived from the review to support the case for a new venture, product, or technology solution.

Template Slide Outline:
  • Title Slide: Venture/Solution Name, Presenter Info.
  • Problem/Opportunity: Describe the market need or gap identified (informed by review).
  • Proposed Solution: Introduce your venture/product/technology.
  • Value Proposition: Explain the benefits (supported by review findings where applicable).
  • Market Landscape/Competition: Analyze competitors and positioning (informed by review).
  • Business Model: How will it generate revenue?
  • Technology/Roadmap: Key features, development plan.
  • Team: Introduce key personnel.
  • Financial Projections/Funding Needs: (If applicable).
  • Call to Action/Contact Info.
Pitch Deck Template Placeholder/Link

3. Traditional Academic Dissemination

Peer-Reviewed Journals
  • Target relevant journals in business, technology management, innovation studies, information science, or specific application domains.
  • Carefully follow author guidelines regarding scope, formatting, and ethical declarations (including AI use).
  • Consider open-access journals or options to increase visibility and impact.
Conference Presentations
  • Present findings at academic and practitioner conferences to engage with communities, gather feedback, and build networks.
  • Formats: Oral presentations, poster sessions, panel discussions, workshops.
  • Target conferences relevant to TIM, evidence synthesis (e.g., Cochrane Colloquium, Campbell Collaboration), policy, or the specific subject area (e.g., IEEE conferences for engineering topics).

4. Practitioner & Policy Outputs

Policy Briefs / Stakeholder Reports
  • Translate complex review findings into concise, easily digestible summaries tailored for decision-makers in government, non-profits, or industry.
  • Focus on key takeaways, actionable recommendations, and implications for practice or policy.
  • Utilize clear language, visuals (charts, infographics), and a professional layout.
  • AI tools can assist in drafting initial summaries, but manual refinement is crucial for tone, accuracy, and impact.

5. Digital & Open Science Platforms

Increase the visibility, accessibility, and impact of your review through online platforms:

Preprints & Repositories

Share early versions or accepted manuscripts via OSF Preprints, arXiv, ResearchGate, or institutional repositories.

Registries

Enhance transparency and visibility by linking your final publication to your protocol registration (e.g., PROSPERO, OSF Registries).

Data & Material Sharing

Publish supplementary materials (search strings, data extraction forms, datasets) on platforms like Zenodo, Figshare, OSF, or GitHub.

Websites & Interactive Summaries

Create dedicated webpages or use tools to develop interactive summaries or visualizations of your findings.

6. Public Engagement & Media Outreach

Broaden the reach and impact of your findings beyond academic and industry circles:

  • Write blog posts summarizing key findings for platforms like Medium, LinkedIn, or institutional blogs.
  • Share insights and links through social media (e.g., Twitter threads, LinkedIn updates, relevant Facebook groups).
  • Develop short videos, infographics, or podcasts discussing the research. (AI tools like Canva can assist with visuals).
  • Engage with journalists, science communicators, or university press offices to potentially generate media coverage.

7. Outputs for Step 9

Expected Deliverables:
  • Submitted (or published) peer-reviewed journal article manuscript.
  • Conference presentation materials (slides, poster PDF, abstract).
  • Policy brief, executive summary, or research vignette tailored for specific stakeholders.
  • Links to open-access datasets, code, or supplementary materials deposited in repositories (e.g., OSF, Zenodo).
  • Examples of public engagement materials (e.g., blog post URL, social media campaign summary, infographic file).

Checklist for Authors and Reviewers

Use this checklist to ensure rigor and quality in your AI-assisted TIM scoping review.

Stage Core Methodological Tasks (What You Must Do) Human Oversight (Who Reviews It) Status
Pre-Planning & Protocol Register protocol; specify AI plan. Assign roles. LR: uploads protocol. EE: confirms AI disclosure line.
Step 1 · Question & Scope Apply PCC/framework. Define scope, sources. LR: stores question/criteria. SR: spot-checks keywords.
Step 2 · Search Build strings; run searches; start PRISMA. LR: archives search log. SR: audits AI strings (10%).
Step 3 · Select Use Rayyan; apply criteria; dual screen. LR+SR: export decisions; resolve conflicts. EE: reviews exclusions.
Step 4 · Extract Use extraction form; capture characteristics, gaps. LR: verifies AI fields. SR: completes manual fields.
Step 5 · Analyze/Synthesize Thematic/narrative synthesis; create visuals. LR: checks AI codes vs data. SR: signs off figures.
Step 6 · Interpret Derive TIM actions (opportunities, intel, partners). LR: writes memo. EE: confirms logic vs evidence.
Step 7 · Write Draft manuscript (integrate AI text); follow PRISMA-ScR. LR: runs Turnitin/citation audit. SR: edits coherence. EE: verifies AI statement.
Step 8 · Ethics Mitigate bias; protect data; document AI use. EE: files reports & ethics form.
Step 9 · Disseminate Prep outputs (journal, vignettes, slides). LR: reviews all assets. SR: archives dataset/checklist.
LR: Lead Researcher SR: Supporting Researcher EE: External Expert
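Where LR and SR dual-screen studies (Step 3), inter-rater agreement is often reported before conflicts are resolved; Cohen's kappa is one common statistic. A toy sketch with made-up screening decisions (1 = include, 0 = exclude):

```python
# Cohen's kappa for dual-screening agreement (toy decisions).
lr = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]  # Lead Researcher
sr = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]  # Supporting Researcher

n = len(lr)
observed = sum(a == b for a, b in zip(lr, sr)) / n   # proportion of agreements
p_include = (sum(lr) / n) * (sum(sr) / n)            # chance agreement on "include"
p_exclude = (1 - sum(lr) / n) * (1 - sum(sr) / n)    # chance agreement on "exclude"
expected = p_include + p_exclude
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))
```

Low kappa signals that the inclusion criteria need sharpening before the EE reviews exclusions; disagreements themselves are then resolved by discussion, as the checklist specifies.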
Part 3

Keeping the Guide Current

Research methods and AI tools evolve rapidly. This section outlines the guide's strengths and how we keep it relevant for the TIM community.

3.1 Strengths of This Guide

Transforms Reviews into TIM Assets

Directly links research to venture pitches, market strategies, competitive intelligence, product development, tech landscapes, business plans, and adoption pathways.

Step-by-Step TIM Focus

Clear process with relevant examples tailored specifically for technology innovation management.

Integrates AI Effectively

Leverages AI (including deep synthesis) for efficiency and depth while maintaining research integrity.

Covers Academic & Grey Literature

Ensures rigorous and industry-relevant insights by incorporating both scholarly and practical sources.

Emphasizes Human Oversight

Balances AI speed with critical evaluation for maintaining research integrity and quality.

Promotes Ethical AI Use

Addresses transparency, bias, and reproducibility concerns in AI-assisted research.

3.2 Version Control and Update System

We use a structured system to keep this guide current and relevant:

Versioning

Each update is versioned and archived for reference.

Continuous Updates

Regular updates are released to incorporate new research methods, AI tools, and best practices.

Community Feedback

User feedback and suggestions are incorporated into updates to improve the guide's relevance and usefulness.

Literature Reviews

Regular literature reviews are conducted to ensure the guide remains up-to-date with the latest research on AI-assisted scoping reviews.

Version Numbering

Major (X.0) releases mark significant changes; minor (X.1) releases track incremental ones.

Update Frequency

Comprehensive review every 6 months; ad-hoc updates as needed for critical changes.

Change Documentation

Changelog maintained; "Last Updated" date visible on all sections.

Stakeholder Input

Feedback collected from users (faculty, students, clients); reviewed by editorial board.

Accessibility

Archived versions available; notifications for significant updates.
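The "major (X.0) / minor (X.1)" scheme above can be made concrete with a tiny helper. This is an illustrative sketch only (the function names are hypothetical, not part of the guide): it parses two version strings and classifies the change between them.

```python
def parse_version(v: str) -> tuple[int, int]:
    """Split an 'X.Y' version string into (major, minor) integers."""
    major, minor = v.split(".")
    return int(major), int(minor)

def bump_kind(old: str, new: str) -> str:
    """Classify the change from old to new as 'major', 'minor', or 'none'."""
    o, n = parse_version(old), parse_version(new)
    if n[0] > o[0]:
        return "major"
    if n[0] == o[0] and n[1] > o[1]:
        return "minor"
    return "none"

# Example: the guide's own history (draft 0.5 -> release 1.0).
print(bump_kind("0.5", "1.0"))  # major
print(bump_kind("1.0", "1.1"))  # minor
```

A changelog entry would then record the bump kind alongside the date and editor, matching the version table at the top of this guide.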

3.3 AI Assistance and Human Oversight Disclosure Statement

This guide was developed with AI assistance (ChatGPT, Perplexity, and potentially other tools) for drafting, structuring, summarizing, and generating examples (including illustrative "Deep Research" concepts).

Key Points:

  • AI assisted in initial content generation (~50%).
  • All AI content underwent rigorous human oversight: validation, verification (citations), bias checks, multiple review cycles.
  • All sections were manually reviewed, refined, or rewritten for depth, clarity, and rigor.
  • This use aligns with Carleton University's guidelines on responsible AI, transparency, and integrity.
  • Future updates will continue this balanced approach.

3.4 Ways You Can Contribute

Help us improve the guide! Consider these contribution areas:

Enhance Review Process

Suggest new methods and tools for more effective reviews

Improve Clarity

Identify errors, suggest better wording or explanations

Troubleshooting

Develop solutions for common challenges in reviews

Expand Examples

Add real-world scenarios and tool walkthroughs

Grey Literature

Refine methods for finding and evaluating non-academic sources

AI Ethics

Address AI limitations, verification needs, and ethical considerations

Get Involved:

Contact Tony Bailetti (tony.bailetti@carleton.ca)

Share Feedback: Carleton Feedback Form

3.5 Acknowledgements

This guide benefits from contributions by TIM faculty and students at Carleton University, integrating best practices and insights from applying AI-assisted scoping reviews.

3.6 Epilogue: Your Journey with AI-Powered Reviews

By following this guide, you can conduct efficient, rigorous scoping reviews using AI tools while maintaining ethical integrity. Remember: AI augments your expertise; it does not replace it.

Balancing automation with critical evaluation yields valuable insights for TIM research and decision-making. The process is dynamic and iterative. Embrace the tools, stay critical, and contribute to the evolving landscape of knowledge synthesis.

Appendix

Tool Resources

Tutorial links for recommended AI tools to enhance your scoping review process.

Category Tool Why You Will Use It Official Website Tutorial
Deep Search & Synthesis
Perplexity AI Deep web/academic querying & synthesis perplexity.ai Tutorial
ChatGPT Prompt crafting; deep-level synthesis report; file analysis openai.com/chatgpt Tutorial
Gemini Web-wide plan-based deep research gemini.google.com Tutorial
Grok Real-time web synthesis (especially X/Twitter) x.ai Tutorial
Literature Search/Analysis
Scopus Abstract & citation database search, journal rankings, author profiles scopus.com
Google Scholar Search scholarly literature, find full text, track citations scholar.google.com Tutorial
Elicit Generate keywords, draft evidence tables, extract data elicit.com Tutorial
SciSpace Extract PDFs into structured tables, analysis scispace.com Tutorial
Consensus Evidence ranking & consensus checks consensus.app Tutorial
OpenRead Semantic PDF reading & highlight extraction openread.academy Tutorial
Scite Assistant Reframe questions via citation reasoning, evidence mapping scite.ai Tutorial
Screening & Selection
Rayyan Collaborative title/abstract & full-text screening rayyan.ai Tutorial
Visualization
ResearchRabbit Interactive citation-network visualization researchrabbit.ai Tutorial
Reference Management
Zotero Core reference manager & de-duplication zotero.org Tutorial
Mendeley Alternate reference manager & PDF annotation mendeley.com Tutorial
EndNote Enterprise-scale reference management endnote.com Tutorial

All links were verified at time of publication. Please report any broken links to the guide administrator.
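The de-duplication task mentioned for reference managers in the table above can be sketched in a few lines. This is a hedged illustration, not Zotero's actual algorithm: it keys each record on its DOI when present, falling back to a normalized title, and keeps only the first occurrence. The record fields are assumptions for the example.

```python
def dedupe(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each record, keyed by DOI or title."""
    seen, unique = set(), []
    for rec in records:
        # Prefer the DOI as a stable key; fall back to a normalized title.
        key = rec.get("doi") or rec["title"].casefold().strip()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Example: the third record shares a DOI with the first, so it is dropped;
# the second has no DOI and is kept under its title key.
records = [
    {"title": "Scoping Reviews with AI", "doi": "10.1000/x1"},
    {"title": "Scoping reviews with AI ", "doi": None},
    {"title": "Scoping Reviews with AI", "doi": "10.1000/x1"},
]
print(len(dedupe(records)))  # 2
```

Note the limitation this exposes: a DOI-keyed copy and a title-keyed copy of the same paper survive as two records, which is why human review of automated de-duplication (as the guide's human-oversight principle requires) remains important.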