

How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources: it analyzes, synthesizes, and critically evaluates them to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions

What is the purpose of a literature review?

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Examples of literature reviews

Writing literature reviews can be quite challenging! A good starting point is to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (a theoretical literature review about the development of economic migration theory from the 1950s to today)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (a methodological literature review about interdisciplinary knowledge acquisition and production)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (a thematic literature review about the effects of technology on language acquisition)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (a chronological literature review about how the concept of listening skills has changed over time)

You can also check out our templates with literature review examples and sample outlines.

Step 1 – Search for relevant literature

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

For example, for a research question about social media and body image, your keyword list might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use boolean operators to help narrow down your search.
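Boolean operators work by OR-ing the synonyms within each concept and AND-ing the concepts together. As a rough sketch (the keyword groups and the `build_query` helper are illustrative, not part of any database’s syntax), such a search string can even be assembled programmatically:

```python
# Sketch: turn grouped keywords into one boolean search string.
# Synonyms within a group are OR-ed; the groups are AND-ed together.
# The keyword groups below are illustrative examples.

def build_query(groups):
    """Quote multi-word terms, OR within groups, AND across groups."""
    clauses = []
    for terms in groups:
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

groups = [
    ["social media", "Instagram", "TikTok"],
    ["body image", "self-esteem"],
    ["adolescents", "teenagers"],
]

print(build_query(groups))
# ("social media" OR Instagram OR TikTok) AND ("body image" OR self-esteem) AND (adolescents OR teenagers)
```

Most databases accept a string of this general shape, though quoting, field tags, and truncation syntax vary, so check each database’s help pages before pasting a query in.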

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

Step 2 – Evaluate and select sources

You likely won’t be able to read absolutely everything that has been written on your topic, so you will need to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and be sure to read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.


Step 3 – Identify themes, debates, and gaps

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

Step 4 – Outline your literature review’s structure

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Step 5 – Write your literature review

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.

Free lecture slides

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.


Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions

What is a literature review?

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

Why conduct a literature review?

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

Where does the literature review go in a dissertation?

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

What is an annotated bibliography?

An annotated bibliography is a list of source references that includes a short description (called an annotation) for each source. It is often assigned as part of the research process for a paper.

Cite this Scribbr article


McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved August 5, 2024, from https://www.scribbr.com/dissertation/literature-review/


  • Open access
  • Published: 20 April 2021

Getting into a “Flow” state: a systematic review of flow experience in neurological diseases

  • Beatrice Ottiger (ORCID: orcid.org/0000-0002-0242-2632)
  • Erwin Van Wegen
  • Katja Keller
  • Tobias Nef
  • Thomas Nyffeler
  • Gert Kwakkel
  • Tim Vanbellingen

Journal of NeuroEngineering and Rehabilitation, volume 18, Article number 65 (2021)


Abstract

Flow is a subjective psychological state that people report when they are fully involved in an activity, to the point of forgetting time and their surroundings except for the activity itself. Being in flow during physical/cognitive rehabilitation may have a considerable impact on functional outcome, especially when patients with neurological diseases engage in exercises using robotics, virtual/augmented reality, or serious games on tablets/computers. When developing new therapy games, measuring flow experience can indicate whether the game motivates one to train. The purpose of this study was to identify and systematically review current literature on flow experience assessed in patients with stroke, traumatic brain injury, multiple sclerosis, and Parkinson’s disease. Additionally, we critically appraised, compared, and summarized the measurement properties of self-reported flow questionnaires used in the neurorehabilitation setting.

Methods

A systematic review was conducted following PRISMA and COSMIN guidelines. MEDLINE Ovid, EMBASE Ovid, CINAHL EBSCO, and SCOPUS were searched. Inclusion criteria were (1) peer-reviewed studies that (2) focused on the investigation of flow experience in (3) patients with neurological diseases (i.e., stroke, traumatic brain injury, multiple sclerosis, and/or Parkinson’s disease). A qualitative data synthesis was performed to present the measurement properties of the flow questionnaires used.

Results

Ten studies out of 911 records met the inclusion criteria. Seven studies measured flow in the context of serious games in patients with stroke, traumatic brain injury, multiple sclerosis, and Parkinson’s disease. Three studies assessed flow in activities other than gaming (a song-writing intervention and activities of daily living). Six different flow questionnaires were used, all of which were originally validated in healthy people. None of the studies presented psychometric data for their respective research population.

Conclusions

The present review indicates that flow experience is increasingly measured in the physical/cognitive rehabilitation of patients with neurological diseases. However, the psychometric properties of the flow questionnaires used have not been established in these populations. For exergame developers working in the field of physical/cognitive rehabilitation of patients with neurological diseases, a valid flow questionnaire can help to further optimize the content of a game so that optimal engagement can occur during gameplay. Whether flow experiences can ultimately have positive effects on physical/cognitive parameters needs further study.

Background

Flow experience is a subjective psychological state that people report when they are completely involved in something, to the point of forgetting time and their surroundings except for the activity itself [1, 2]. During flow, the subjective perception of time may change: time can pass faster or slower, and the environment is hardly or no longer perceived. Attention is fully invested in the task at hand, and the person functions at his or her fullest capacity. The flow state was first described by Csikszentmihalyi (1975) as the “optimal experience”. He began his research on flow experiences with the simple question of why people are often highly committed to activities without obvious external rewards. Csikszentmihalyi’s first studies involved interviews with people from different backgrounds, such as athletes, chess masters, rock climbers, dancers, composers of music, and many more [3]. Csikszentmihalyi and his colleagues developed the “Flow-theory” with general attributes of an optimal experience and its proximal conditions. The Flow-theory proposes nine key characteristics: challenge-skill balance (balance between the challenge of the activity and personal skills), action-awareness merging (involvement in the task; actions become automatic), clear goals (a clear idea of what needs to be accomplished), unambiguous feedback (clear and immediate feedback), concentration on the task at hand (being completely focused on the task), sense of control (a clear feeling of control), loss of self-consciousness (no concerns with appearance; focused only on the activity), transformation of time (altered perception of time, either speeding up or slowing down), and autotelic experience (the activity is intrinsically rewarding) [2, 4]. Many researchers have tried to adapt the Flow-theory [5] and have explored predictors and consequences of flow, but its definition and key characteristics, as briefly described above, have remained largely the same. In fact, a recent paper about flow clearly advocates Csikszentmihalyi’s Flow-theory as the only valid and default conceptualization so far [5].

Because flow experience is associated with elements such as motivation, peak performance, peak experience, and enjoyment, the Flow-theory was further explored in various research fields, such as sports, educational science, work, and software engineering for gaming [6, 7, 8, 9]. Positive associations were found between athletes’ flow experience and their performance measures, indicating that positive psychological flow states are related to increased levels of performance. In addition, athletes’ performance could be significantly predicted from their level of flow experience during competition [10].

Attempts to systematically measure flow experience started in the 1990s. Self-reported flow questionnaires were used to measure flow during specific activities, such as computer interactions among students and accountants [11], and among athletes practicing various sports such as basketball, athletics, hiking, jogging, and other types of sports [4]. Over the past 30 years, different flow questionnaires have been developed [12, 13]. They focused either on dispositional or core flow experience (the tendency to experience flow in general) [14] or on state flow experience (flow experience in a specific activity) [4]. This led to some disagreement in the literature about how flow should actually be measured, as well as about the context and task in which a flow questionnaire should be applied [5].

Interestingly, over the last decade, several computer- or tablet-based serious games and virtual/augmented reality therapeutic training applications have been developed that integrate many of the key flow characteristics mentioned above. Furthermore, various studies evaluated the player’s flow experience with questionnaires when applying these newer technologies [15, 16, 17]. Serious games are intentionally programmed so that the goals are presented very clearly (e.g., visually through icons) and the requirements of the exercises are adaptable to the level of player performance. The exercises should also be both exciting and attractive enough to maintain the player’s attention. In this way, the player obtains a certain automatic feeling of flow while having full control over his or her actions. These games are sometimes so well designed that one loses track of time. Serious games, robotics, and virtual/augmented reality have found their way into neurorehabilitation [18, 19, 20, 21], and the theory of flow experience has emerged in recent neurorehabilitation studies [22, 23]. Indeed, serious exergames may have an explicit educational and/or therapeutic purpose and are often designed in such a way that they may also improve cognitive or physical capabilities [22, 24]. Interestingly, exergame developers began to look at new games from the perspective of flow experience in order to adapt the game conditions to the players, and used flow questions to assess users’ engagement with the new therapy form [23, 25]. To assess flow experience during a therapeutic session with a patient, valid questionnaires are needed; these may guide a clinician in adapting the level of difficulty, attractiveness, and amount of feedback of an exercise, possibly further contributing to an optimal flow experience. Such optimization of the motor learning environment may enhance therapeutic efficacy during an individual training session.

However, to date, there is no consensus on how flow experience should be measured in neurologically impaired patients. Furthermore, no systematic overview of existing flow questionnaires and their psychometric properties has been published so far. Therefore, the first aim of the present study was to identify and systematically review current literature on flow experience assessed in patients with acquired neurological diseases such as stroke, traumatic brain injury (TBI), multiple sclerosis (MS), and Parkinson’s disease (PD). The second aim was to critically appraise, compare, and summarize the measurement properties of self-reported flow questionnaires used in a neurorehabilitation setting. Since flow experience has already been assessed in neurological rehabilitation and measurement tools exist, we expected these tools to be well validated.

Methods

This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [26]. The Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) guidelines were applied for the evaluation of the measurement properties of the flow questionnaires [27]. A flow questionnaire is a research instrument consisting of a series of questions for the purpose of gathering information from respondents about their flow experience when performing an activity.

Protocol and registration

The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on July 5, 2020 ( https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020187510 ) [28].

Electronic search strategy

Four databases (MEDLINE Ovid, EMBASE Ovid, CINAHL EBSCO, SCOPUS) were searched from their date of inception (1975) to June 2020. Text words and MeSH (Medical Subject Headings) terms for flow experience, flow questionnaire, flow theory, positive psychology, neurorehabilitation, neurological disease, stroke, traumatic brain injury, multiple sclerosis, and Parkinson’s disease were combined to identify intervention studies that used flow as an outcome parameter. The references of the included studies were screened for additional articles. The search strategy was created by one author (KK) and peer reviewed by another author (BO).

The PubMed search strategy was as follows: ((flow exp*) NOT (cereb* flow OR dyn* flow OR exp* flow OR blood flow OR venous flow)) AND (stroke OR Parkinson OR traumatic brain injury OR multiple sclerosis). The search string was adapted appropriately for each database (Additional file 1).

Eligibility criteria

According to PRISMA guidelines [ 26 ], the Population-Intervention-Comparison-Outcome-Study Design (PICOS) approach was applied to systematically define the eligibility criteria. Inclusion and exclusion criteria are presented in Table 1 .

Selection of studies

Two reviewers (BO, KK) independently screened all titles and abstracts against the eligibility criteria. The full-text papers of relevant studies were obtained if both reviewers agreed on inclusion; otherwise, a third reviewer (TV) made the final decision. The search results were imported into Mendeley Reference Manager ( https://www.mendeley.com ) to further check for duplicates. In addition, we obtained the original validation papers of each flow questionnaire. These validation papers were used to critically appraise the validity, reliability, and responsiveness of the flow questionnaires.

The electronic search strategy identified 911 records, of which 22 were retrieved in full text for further assessment. This resulted in the exclusion of another twelve studies (Fig. 1). Ten studies were included in the review.

Fig. 1: Flow diagram for study selection

Data extraction and assessment of methodological quality

The general characteristics of the included studies were extracted as follows: population (diagnosis, sample size, age, gender), study design, intervention (therapeutic activity in a rehabilitation setting), main outcome parameters, flow measurement, and key findings regarding flow experience. The results are presented in Table 2. The characteristics of the flow questionnaires used, such as the flow construct, mode of administration/instruction, subscales (items), and response options, were extracted and are listed in Table 3. Furthermore, we evaluated the measurement properties of the flow questionnaires by assessing content validity (including the relevance, comprehensiveness, and comprehensibility of the construct, population, and context of use in order to apply the flow questionnaires in a neurorehabilitation setting), construct validity (including structural validity, hypotheses testing, and cross-cultural validity), reliability (comprising internal consistency, measurement error, and test–retest reliability), and responsiveness (the ability of the flow questionnaires to detect change over time in flow experience), following the COSMIN guidelines [27]. We verified whether the content of the questionnaires was an adequate reflection of the flow construct. For this purpose, we recorded whether the target population was asked about the relevance, comprehensiveness, and comprehensibility of the flow questionnaire (content validity). Regarding construct validity, we examined whether the scores of a flow questionnaire were an adequate reflection of the dimensionality of the flow construct (structural validity). We also investigated whether the scores of the questionnaires were consistent with hypotheses based on the assumption that the questionnaires validly measure the flow construct (hypotheses testing).
Additionally, we investigated whether the performance of the items on a translated or culturally adapted questionnaire was an adequate reflection of the performance of the items of the original version (cross-cultural validity). The domain reliability refers to the degree to which the measurement is free from measurement error. For this reason, we reviewed the degree of interrelatedness among the items (internal consistency) and the proportion of the total variance in the measurements that was due to true differences between patients (reliability). The results and the rating criteria for the psychometric properties of the flow questionnaires are presented in Additional file 2. The Summary of Findings (SoF) per measurement property, its overall rating, and the grading of the quality of evidence are presented in Table 4. The COSMIN guidelines [27] were applied for the rating of the SoF.

Different flow questionnaires and their use in neurological diseases

The Flow State Scale (FSS) was used in patients with PD [43] and in patients with MS [44]. Baker et al. (2015) applied the Short Flow Scale (SFS) and the Core Flow Scale (CFS) [40] in patients with TBI. Van der Kuil et al. (2018) used a self-developed overall appreciation questionnaire in patients with stroke, TBI, and spinal cord injury; six items in this questionnaire were adapted from the FSS and three further items were added. The Flow State Scale for Occupational Tasks (FSSOT) questionnaire was used by Yoshida K. et al. (2014; 2018) in patients with TBI and by Yoshida I. et al. (2018) in patients with stroke and spinal cord injury. In contrast to these previous studies, which used known questionnaires, Shin and colleagues (2014) used six different flow questions [45] in patients with stroke, which were slightly adapted from another study done in TBI [46].

The different flow questionnaires were mainly used to get an overall impression of the psychological flow state of neurologically impaired patients when they were engaged in different training modes, such as upper limb or lower limb training in patients with stroke [45], balance training in patients with MS [44] and PD [43], and cognitive training in patients with TBI [47, 48] and stroke [42]. In seven of the ten studies, as presented in Table 2, serious games were used as the therapeutic intervention. The designs of the studies were either pilot and exploratory in nature, testing the usability of a new serious game [42, 43, 45, 47], or pilot randomized controlled trials (RCTs) evaluating the preliminary efficacy of new games [44, 46, 48].

Four usability studies measured flow in order to quantify the level of immersion into the gameplay [ 42 , 43 , 45 , 47 ]. Shin et al. (2014) developed a task-specific interactive, game-based virtual reality rehabilitation system (RehabMaster) for the rehabilitation of the upper extremities after a stroke. During the development phase 20 stroke patients completed a six-item questionnaire adopted by [ 11 ] to test if they were engaged and if the training was a positive experience, so that they were motivated to continue. For all statements, the participants gave lower scores for the negative questions (e.g., “Using RehabMaster was boring for me”) and higher scores for the positive questions (e.g., “RehabMaster was fun for me to use”) on a 5-point Likert Scale [ 45 ]. The participants indicated that the RehabMaster-based training and games maintained their attention, were enjoyable and without eliciting any negative feelings [ 45 ]. Galna et al. (2014) developed a computer game to rehabilitate dynamic postural control for patients with PD using the Microsoft Kinect. Also, during the pilot phase, flow experience was recorded from nine participants with PD by means of the FSS questionnaire. The FSS was rated on a 5-point Likert Scale. The flow subscales “concentration” showed the highest mean value across the participants (Mean 4.56), followed by high scores of the subscales “loss of self-consciousness” (Mean 4.14), clear goals (Mean 4.22) and enjoyment (Mean 4.03). Lower flow scores were found in the subscale “transience” (Mean 2.67) and action-awareness (Mean 3.11). Van der Kuil et al. (2018) designed a cognitive rehabilitation therapy for patients with acquired brain injuries in form of a serious game. The aim of the serious game was to aid patients in the development of compensatory navigation strategies by providing exercises in 3D virtual environments on their home computers. 
During the testing of the software application, questions about general appreciation were asked at the beginning and at the end of the experimental phase. Van der Kuil et al. (2018) constructed an “overall appreciation questionnaire” of nine items rated on a 5-point Likert scale; six items were adapted from the FSS and three were constructed in the context of a usability test. The highest scores were found for the “attention” (Mean 4.79) and “concentration” items (Mean 4.54), and the item “control” showed the lowest score (Mean 3.29). Yoshida K. et al. (2014) conducted an exploratory case study with two patients with attention-deficit disorder after TBI. Two types of video game tasks for attention training were created. The first type of video game balanced levels of skill and challenge and gave quick feedback about the score. In the second type, the level of task difficulty was constant and the participant received no information about the goal or any score feedback. Patient A performed the first type of video game for 14 days after receiving general occupational therapy for 11 days. Patient B performed the first type of video game for 15 days after performing the second type for 10 days. The FSSOT was administered to identify the patients’ flow state. The results for Patient A suggested that the first type of video game was more effective than general occupational therapy for improving attention deficits, and the results for Patient B suggested that the first type was more effective than the second type.

Five RCTs measured flow in intervention and control groups. Three RCTs used video games and directly compared levels of flow between intervention and control groups: Wii Fit™ vs. traditional balance training in patients with MS [ 44 ]; mobile game-based Neuromuscular Electrical Stimulation (NMES) vs. conventional NMES in patients with stroke [ 46 ]; and an attention-training game vs. a control game in patients with traumatic brain injuries (Yoshida K. et al., 2018). In Robinson et al. (2015), the intervention group that trained balance with Wii Fit™ showed significantly higher flow scores than the control group on the flow subscales clear goals (p = 0.05), concentration on the task (p = 0.03), unambiguous feedback (p = 0.04), action-awareness merging (p = 0.03) and transformation of time (p = 0.001) [ 44 ]. Likewise, hand–wrist and foot–ankle training with serious games produced significantly higher scores in attention (p < 0.05), curiosity (p < 0.05) and intrinsic interest (p < 0.05) compared to a control group not playing serious games [ 46 ]. Both of these RCTs focused on video games based on physical training, whereas the third RCT, by Yoshida K. et al. (2018), investigated flow during cognitive training. The intervention group played a serious game for attention training that adapted the challenge to the patient’s skill, gave clear goals and provided prompt feedback about the score. In the control group, the level of task difficulty was constant and participants received no information about the goal or score feedback. The study population in this RCT had sustained a traumatic brain injury at least 6 months earlier. The researchers reported that the FSSOT score was significantly higher in the intervention group than in the control group.
Both groups showed a positive association between the increase in the composite score of the attention tests [Trail Making Test (TMT), Symbol Digit Modalities Test (SDMT), Paced Auditory Serial Addition Test (PASAT)] and the FSSOT score. Although the correlation coefficients presented a large effect, the correlations were not significant (Flow: r = 0.456, p = 0.21; Control: r = 0.554, p = 0.9). The total of the Moss Attention Rating Scale (MARS) demonstrated no association with the FSSOT score, except for one subitem, which obtained a significant negative correlation (sustained/consistent attention, r = 0.51, p < 0.05). Two RCTs by Yoshida I. et al. (2018; 2019) did not use video game-based training but consciously adapted the challenge to the patients’ abilities during occupational therapy (OT) in patients with cerebral or spinal disease [ 49 ] and in older adults with various neurological diseases [ 50 ]. Attention was paid to an optimal challenge-skill balance when performing activities of daily living (ADLs) such as eating, laundry, cooking and shopping. In the intervention group, the participants and therapists quantified and shared task performance on a scale of challenges and skills and adjusted the requirements of the task accordingly, whereas in the control group the challenge-skill balance of the trained ADLs was not adjusted over the training sessions. In the 2018 paper there were 10 sessions, once a week, and training focused on just one activity, evaluated and selected after completing the Canadian Occupational Performance Measure (COPM) [ 51 ]. The COPM is a personalized, client-centred instrument designed to identify the occupational performance problems experienced by the client.
Using a semi-structured interview, the therapist initiates the COPM process by engaging the client in identifying daily occupations of importance that they want to do, need to do, or are expected to do but are unable to accomplish [ 51 ]. In the 2019 study, the participants selected not one but several ADLs as treatment goals, based on the outcome of the COPM. Treatments in each group comprised sessions lasting 40–60 min, conducted six times per week. In both RCTs, flow experience was measured pre- and post-treatment with the FSSOT. In the first RCT [ 50 ] there was a highly significant interaction effect for flow (p = 0.008, d = 0.82) in favour of the adjusted challenge-skill OT as compared with the control group. This interaction was not confirmed in their follow-up study (p > 0.05, d = 0.31) [ 49 ].

Similar to Yoshida I. (2018, 2019), Baker et al. (2015) did not use video game-based training either, but explored whether songwriting interventions for patients with TBI and spinal cord injuries in the early phase of neurorehabilitation would support a change in self-concept and well-being [ 52 ]. Using a non-randomized repeated-measures design, they found that flow scores were very high after the intervention. However, these scores did not correlate significantly with self-concept on the Head Injury Semantic Differential Scale (HISDS) (State Flow Scale r = − 0.10, p > 0.05; Core Flow Scale r = 0.02, p > 0.05), nor with seven different well-being measures evaluating sense of flourishing, life satisfaction, coping, affect, depression, and anxiety (State Flow Scale r between − 0.40 and 0.43, p > 0.05; Core Flow Scale r between − 0.24 and 0.32, p > 0.05).

Psychometric properties of flow questionnaires

The Summary of Findings (SoF) per measurement property, its overall rating and the grading of the quality of evidence are presented in Table 4 . The COSMIN guidelines [ 27 ] were applied for the rating of the SoF, as follows: [Overall Rating: sufficient (+), insufficient (−), undetermined (?); Quality of Evidence: high (h), moderate (m), low (l), very low (lw)]. If a measurement property was not analysed or not reported, the rating box remains empty. The rating criteria for good measurement properties and for the quality of evidence are presented in Additional file 2 .

Content validity

Content validity, including relevance, comprehensiveness and comprehensibility, was assessed for the FSS and the FSSOT. Jackson et al. conducted two qualitative studies with elite athletes [ 58 , 59 ] prior to the development of the FSS. The SFS and CFS were also developed by the Jackson group, with the intention of creating short versions of the FSS and DFS, respectively. Yoshida K. et al. (2013) had the FSSOT reviewed by experts on flow theory during the development phase. Both Jackson et al. (1996) and Yoshida K. et al. (2013) conducted pilot testing before the validation procedure.

Structural validity

Structural validity, by means of confirmatory factor analysis and internal consistency, was determined for all flow questionnaires. All studies presented good internal consistency (Cronbach’s alpha above 0.70). Confirmatory factor analysis was performed for all flow questionnaires. Taking the strict COSMIN guidelines [ 27 ] into account, the CFS questionnaire fulfilled the parameters requested by the COSMIN guidelines (CFI or TLI > 0.95 OR RMSEA < 0.06 OR SRMR < 0.08), while the SFS, FSS and FSSOT had parameters closely approaching these cut-offs, indicating high quality of evidence. The questionnaire by Webster et al. (1993) showed considerably lower scores, pointing to moderate quality of evidence.

Cross-cultural validity

The FSS was cross-culturally validated in Greek [ 55 , 56 ] and in Spanish [ 57 ]. All versions followed standard forward and backward translation procedures. Stavrou and Zervas (2004) tested a second FSS-Greek version, since the first one, by Doganis et al. (2002), indicated only a moderate fit to the data, with internal consistency (Cronbach’s alpha) below 0.70 for some of the FSS subscales (action-awareness merging = 0.34, concentration on task at hand = 0.64, transformation of time = 0.67). The FSS-Greek version by Stavrou and Zervas (2004) presented internal consistency ranging from a Cronbach’s alpha of 0.75 to 0.92 (mean = 0.82) and a close fit to the cut-off parameters requested by the COSMIN guidelines. The Spanish version of the FSS presented good internal consistency (Cronbach’s alpha above 0.70), and its structural validity was tested with a confirmatory factor analysis demonstrating a close fit to the cut-off parameters [ 57 ].

Construct validity

Construct validity, by means of convergent validity, was assessed for the FSSOT total scores, which showed significant negative correlations with the total score of the State-Trait Anxiety Inventory (STAI) (r = − 0.537, p < 0.01) [ 41 ]. Jackson et al. (1998) examined psychological correlates of state flow in a study separate from the original validation paper [ 4 ]. Significant associations were found between the FSS total and perceived athletic ability (PSA) (r = 0.33, p < 0.01), total anxiety (A-SUM) (r = − 0.34, p < 0.01) and intrinsic motivation to experience stimulation (IMSTIM) (r = 0.25, p < 0.01). A series of external validity analyses was conducted for the SFS and CFS by Martin et al. (2008), for the subdomains “work”, “sport” and “music” in the SFS and “general school”, “mathematics” and “extracurricular” in the CFS, against the Motivation and Engagement Scale (MES), which includes the following key correlates: participation (SFS: mean r 0.74–0.90; CFS: mean r 0.25–0.56), enjoyment (SFS: mean r 0.73–0.89; CFS: mean r 0.13–0.71), buoyancy (SFS: mean r 0.68–0.81; CFS: mean r 0.15–0.42), aspirations (SFS: mean r 0.71–0.81; CFS: mean r 0.12–0.68), adaptive cognitions (SFS: mean r 0.72–0.82; CFS: mean r 0.23–0.74), adaptive behaviours (SFS: mean r 0.59–0.70; CFS: mean r 0.18–0.83), impeding/maladaptive cognitions (SFS: mean r − 0.37 to − 0.59; CFS: mean r − 0.10 to − 0.23), and maladaptive behaviours (SFS: mean r − 0.47 to − 0.70; CFS: mean r − 0.15 to − 0.79). The SFS presented higher correlations with the MES than the CFS. The significance of these correlations was not reported.
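The convergent-validity coefficients reported here are plain Pearson product-moment correlations. A minimal, self-contained sketch follows; the scores are hypothetical, and the negative sign simply mirrors the reported flow–anxiety direction:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical flow totals vs. anxiety (STAI-like) totals for 5 patients:
flow = [4.2, 3.1, 4.8, 2.9, 3.6]
anxiety = [35, 52, 30, 55, 46]
r = pearson_r(flow, anxiety)  # strongly negative: anxiety runs opposite to flow
```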

Reliability

None of the identified studies investigated reliability (test–retest), measurement error, criterion validity or responsiveness of the flow questionnaires. As far as we know, none of the flow questionnaires have been tested for their psychometric properties in neurologically impaired people.

Interpretability and feasibility of the included flow questionnaires

Floor and ceiling effects, completion time, instrument costs and contact information for the included flow measures are listed in Table 5 .
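Floor and ceiling effects are conventionally flagged when more than 15% of respondents obtain the lowest or highest possible total score. A minimal sketch of that check (all data and names hypothetical):

```python
def floor_ceiling(totals, min_score, max_score, threshold=0.15):
    """Flag floor/ceiling effects: present when the share of respondents
    at the minimum or maximum possible total exceeds the threshold (15%)."""
    n = len(totals)
    floor_pct = sum(t == min_score for t in totals) / n
    ceiling_pct = sum(t == max_score for t in totals) / n
    return {
        "floor": floor_pct > threshold,
        "ceiling": ceiling_pct > threshold,
        "floor_pct": floor_pct,
        "ceiling_pct": ceiling_pct,
    }

# Hypothetical totals on a 9-item, 5-point questionnaire (range 9-45):
result = floor_ceiling([45, 45, 44, 38, 45, 41, 29, 45], min_score=9, max_score=45)
# Half of the sample sits at the maximum, so a ceiling effect is flagged.
```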

Discussion

The aim of the present study was to identify and systematically review the current literature on flow experience assessed in patients with neurological diseases such as stroke, TBI, MS and PD. In addition, we critically appraised, compared and summarized the measurement properties of self-reported flow questionnaires used in a neurorehabilitation setting.

Flow experience in patients with neurological disorders has so far been measured in only a few studies, some of them pilot in nature (usability studies), others RCTs, and mostly related to serious gaming [ 42 , 43 , 44 , 45 , 47 , 48 ]. One aim of such interventions is to achieve an optimal flow state in the patient, possibly creating an optimal learning environment for improving physical and/or cognitive functions (for example, balance or attention). Flow questionnaires are one way to capture this flow state, since the patient is asked about his or her experience immediately after the intervention. In this way, the clinician gets an overall impression of whether the patient was in an optimal psychological state of flow. Our systematic review showed that six flow questionnaires have been used so far.

However, the psychometric properties of these questionnaires have so far been established only in athletes and other healthy populations, not in neurologically impaired patients. The latter population often suffers from cognitive problems (disturbed vigilance, working-memory deficits, language-comprehension difficulties) which may impact the assessment of flow.

The FSS and FSSOT appear to be good candidate questionnaires, based on their good psychometric properties in healthy subjects. The FSSOT requires less administration time than the FSS and is therefore probably more feasible for neurologically impaired patients, taking mild cognitive deficits into account. Besides proper validation, reliability measures such as test–retest reliability and measurement error will have to be established as well, because these measures give an overall impression of the stability of item responses. A final aspect will be to evaluate the internal responsiveness (the ability to measure change over time) and external responsiveness (the extent to which changes in a measure relate to corresponding changes in a reference measure) of these flow questionnaires. Only when these psychometric properties are well defined can the outcomes of flow questionnaires be properly interpreted in usability studies or RCTs.

The investigation of flow experience in neurological patients started at about the same time as the development of serious games for rehabilitation therapy. The integration of motivational strategies in the form of “gamification” is one of the benefits of these new therapy options [ 19 , 60 ], and the expectation is that such therapy programs will strengthen compliance with repetitive, high-dose functional training programs [ 19 , 60 ]. The game developer’s aim is to bring the patient into a flow state that leads to an optimal gaming experience [ 61 ]: gamification of the therapeutic exercises is expected to foster engagement while giving the therapist the possibility to control and customize the complexity levels of the rehabilitation training. Seven of the ten included studies measured flow experience in the context of serious games in patients with stroke, PD, MS and/or TBI [ 42 , 43 , 44 , 45 , 46 , 47 , 48 ]. Flow experience was mainly assessed in usability studies of newly developed serious game therapy programs for rehabilitation purposes [ 42 , 43 , 45 , 47 ]. Our review showed that total flow mean scores between 3.76 and 4.33 points on a 5-point Likert scale were achieved in all studies in which serious games were used as physical-therapeutic exercises [ 42 , 43 , 44 , 45 , 46 ], whereas in control groups without serious games mean flow scores reached only 3.65–3.76 [ 44 , 46 ]. This suggests that therapeutic interventions with a game-like character stimulate concentration and enjoyment. This assumption was substantiated by the higher flow experience in game therapy versus conventional therapy shown in two intervention studies investigating balance training with Wii Fit™ [ 44 ] and hand–wrist and foot–ankle training with serious games [ 46 ] (Table 1 ). An advantage of rehabilitation therapy with a game character is that the goals and rules of the game task are clearly defined.
In addition, players receive immediate feedback on whether the task was performed correctly, a key element of motor learning theory [ 57 ]. This, in turn, allows movements to be deliberately adjusted in line with performance. If these components are appropriate, they also have a positive effect on concentration. In the principles of motor learning, feedback, the ability to concentrate on a task, and the motivation to perform an exercise are essential for learning new motor skills [ 62 , 63 ]. Therefore, we assume that positive flow experiences during physical exercises support motor learning. From this perspective, it makes sense to measure flow experience in the development and testing phase of new therapy games. In this way it is possible to determine which adjustments should be made, e.g., defining the goal or the rules of the application more precisely.

Whether flow experiences ultimately had a positive effect on physical outcome parameters was not investigated in these studies. Three studies from Japan explored, in TBI patients and older adults with various neurological diseases, whether flow experience had an effect on attention [ 48 ] and health-related quality of life [ 49 , 50 ]. In a small RCT (n = 20), Yoshida K. et al. (2018) created two types of attention-demanding serious game exercises, the flow task and the control task. The control task maintained a constant level of task difficulty regardless of the patient’s skill and gave no goal or feedback about the score. Both tasks had identical content, except that the flow task was designed to induce flow by increasing task difficulty according to the patient’s skill and giving clear goals and quick feedback about the score. Yoshida and colleagues (2018) referred to the flow theory of Nakamura and Csikszentmihalyi (2009), suggesting that three key characteristics of flow theory (challenge-skill balance, clear goals and feedback) are essential to generate flow experience and that these characteristics are externally controllable. They found significantly (p-value not reported in the paper) higher total flow values in the intervention group (flow task) than in the control group (control task) [ 48 ], suggesting that the way a serious game is designed with regard to its task difficulty can positively affect the flow state of a patient. Both groups showed a positive but non-significant association between the increase in the composite score of cognitive attention tests (TMT, SDMT, PASAT) and the FSSOT total score (Flow: r = 0.456, p = 0.21; Control: r = 0.554, p = 0.9) [ 48 ]. The lack of a significant correlation between attention and flow test scores may be explained by the pilot nature and small sample size of this RCT.
Regardless, the fact that the psychological state of flow was amenable to task difficulty gives a first indication that the state of flow may facilitate training and is worth investigating in further studies.

The outcomes of the two larger RCTs, both conducted by Yoshida I. (2018, 2019), differed regarding the effect of the training on flow. While their first RCT found significant effects on flow in favour of the experimental OT, this was not the case in their follow-up RCT. The reason for this discrepancy may be twofold. Firstly, their first RCT focused on one activity rather than on multiple ADLs, as in their second RCT. When, in a rehabilitation setting, the focus is on improving the skills of one activity at a time rather than several at once, it may be easier for participants to experience flow, for achieving performance competence is a process that takes time, practice, and thorough skill development until the optimal performance of the skill (referred to as mastery) is characterized by an obvious ease and grace [ 2 ]. According to flow theory, an optimal balance between challenge and skill during training is crucial to attain this state [ 36 , 49 ], because anxiety is experienced when challenge exceeds ability, and boredom is experienced when ability exceeds challenge. Thus, the better the challenge is matched to the ability, and the more expertise in performing increases, the easier it is to experience flow, as shown in other studies [ 6 , 7 , 64 ]. The second reason may lie in the much higher baseline flow levels of the patients in the second RCT compared with those in the first RCT, leaving almost no room for further improvement. Irrespective of the discrepancy between the two RCTs, the fact that patients could improve their flow by means of adjusted challenge-skill OT training focusing on one specific ADL task is promising.
Future studies could, for example, explore the effects of improved flow on upper-limb skills through challenge-skill ADL training in different contexts, so that the patient reaches high levels of flow.

Six different flow questionnaires were applied in these studies, leaving open the question of which one should be taken forward for validation in neurologically impaired patients. Based on their good psychometric properties in healthy subjects, both the FSS and the FSSOT seem to be good candidates. However, the flow questions in the FSS are strongly related to concepts from the field of sport, and its administration time is rather long (36 items). Its feasibility might therefore be questionable, especially considering the busy schedules of clinicians working in neurorehabilitation facilities. Shorter versions of the FSS were subsequently developed, the SFS and CFS [ 40 ]. Still, the authors recommend combining these measures when evaluating flow, which may be impractical. Furthermore, their flow questions are still very much tied to the context of sport psychology and less to neurorehabilitation purposes. This might also explain why, for example, Van der Kuil et al. (2018), for their study in patients with acquired brain injury, used six items of the FSS and adapted their content to make them more comprehensible and applicable for this patient group.

With regard to the FSSOT, its 14-item length seems more feasible than the longer FSS. Furthermore, having already been used in two RCTs to assess flow experience after challenge-skill-based ADL training [ 49 , 50 ] and in one RCT to assess flow experience during attention training in patients with neurological impairments [ 48 ], this questionnaire seems to be the best candidate and is worth validating properly in these patient groups. Depending on the context, such as upper-limb virtual reality or robot-assisted training, the questions of the FSSOT can be further adapted, also in the light of different cultural backgrounds.

A possible limitation of this review is that we could not present a quality assessment of study design, since both exploratory, non-randomized and randomized trials were included. Another limitation is that we included studies of patients with various neurological disorders, which affects the homogeneity of the overall study population; hence, one has to be careful in comparing the results of these studies directly. Finally, publication bias may be present, as well as a language bias, given that we considered only flow questionnaires described in predefined databases and restricted our search to English-language publications.

Conclusions

To sum up, the present review indicates that flow experience is increasingly measured in the physical/cognitive rehabilitation setting in patients with neurological diseases such as stroke, TBI, MS and PD. Flow experience was mainly measured immediately after a therapeutic intervention that aimed to improve physical or cognitive functions with serious exergaming. In seven out of the ten studies, in which new games for therapy were developed, patients’ flow experience was measured to find out to what extent they were engaged by the new games [ 42 , 43 , 44 , 45 , 46 , 47 , 48 ]. The other three studies assessed flow during occupational therapy when practicing ADLs [ 49 , 50 ] and during music therapy [ 52 ]. Six different flow questionnaires were applied in these studies, none of which has been specifically validated in patients with neurological diseases. Evidence on the psychometric properties of the flow measures used is therefore lacking and will have to be generated in future studies. For exergame developers working in the field of physical/cognitive rehabilitation for patients with neurological diseases, a valid flow questionnaire can help to further optimize the content of the games so that optimal engagement can occur during gameplay.

Availability of data and materials

All data generated or analysed during this study are included in the published article.

Abbreviations

ADL: Activity of daily living
A-SUM: Total anxiety
CFI: Comparative fit index
CFS: Core Flow Scale
COPM: Canadian Occupational Performance Measure
COSMIN: Consensus-based Standards for the Selection of Health Measurement Instruments
FMA: Fugl-Meyer Assessment
FSS: Flow State Scale
FSSOT: Flow State Scale for Occupational Tasks
GRADE: Grading of Recommendations Assessment, Development, and Evaluation
HISDS: Head Injury Semantic Differential Scale
IMSTIM: Intrinsic motivation to experience stimulation
MBI: Modified Barthel Index
MES: Motivation and Engagement Scale
MeSH: Medical Subject Headings
MS: Multiple sclerosis
NMES: Neuromuscular electrical stimulation
NR: Not reported
OT: Occupational therapy
PASAT: Paced Auditory Serial Addition Test
PD: Parkinson’s disease
PICOS: Participants, Intervention, Comparison, Outcome, Study design framework
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement
PROM: Patient- (or participant-) reported outcome measure
PROSPERO: International Prospective Register of Systematic Reviews
PSA: Perceived athletic ability
RAVLT: Rey’s Auditory Verbal Learning Test
RCT: Randomized controlled trial
RMSEA: Root mean square error of approximation
SD: Standard deviation
SDMT: Symbol Digit Modalities Test
SF-36: Short-Form Health Survey for general health
SFS: Short Flow Scale
SoF: Summary of findings
SRMR: Standardized root mean square residual
STAI: State-Trait Anxiety Inventory
TBI: Traumatic brain injury
TLI: Tucker-Lewis Index
TMT: Trail Making Test
UE: Upper extremity
UTAUT: Unified Theory of Acceptance and Use of Technology
VR: Virtual reality

1. Csikszentmihalyi M. Beyond boredom and anxiety. San Francisco: Jossey-Bass; 1975.
2. Csikszentmihalyi M. Flow and the foundations of positive psychology. Dordrecht: Springer; 2014.
3. Nakamura J, Csikszentmihalyi M. The concept of flow. In: Oxford handbook of positive psychology. 2009. p. 195–206.
4. Jackson SA, Marsh HW. Development and validation of a scale to measure optimal experience: the Flow State Scale. J Sport Exerc Psychol. 1996;18(1):17–35.
5. Abuhamdeh S. Investigating the “flow” experience: key conceptual and operational issues. Front Psychol. 2020;11:1–13.
6. Jackson SA, Thomas PR, Marsh HW, Smethurst CJ. Relationships between flow, self-concept, psychological skills, and performance. J Appl Sport Psychol. 2001;13(2):129–53.
7. Engeser S, Rheinberg F. Flow, performance and moderators of challenge-skill balance. Motiv Emot. 2008;32(3):158–72.
8. Koehn SMT. The relationship between performance and flow state in tennis competition. J Sport Med Phys Fit. 2012;52:1–11.
9. Perttula A, Kiili K, Lindstedt A, Tuomi P. Flow experience in game based learning—a systematic literature review. Int J Serious Games. 2017;4(1):57–72.
10. Stavrou NA, Jackson SA, Zervas Y, Karteroliotis K. Flow experience and athletes’ performance with reference to the orthogonal model of flow. Sport Psychol. 2007;21(4):438–57.
11. Webster J, Klebe Trevino L, Ryan L. The dimensionality and correlates of flow in human–computer interactions. Comput Human Behav. 1993;9(1):411–26.
12. Delle Fave A, Massimini F, Bassi M. Instruments and methods in flow research. In: Psychological selection and optimal experience across cultures. Advancements in positive psychology 2. 2011. p. 59–87.
13. Swann CF. Flow in sport. In: Flow experience: empirical research and applications. 2016. p. 51–64.
14. Marsh HW, Jackson SA. Flow experience in sport: construct validation of multidimensional, hierarchical state and trait responses. Struct Equ Model. 1999;6(4):343–71.
15. Kiili K. Evaluations of an experiential gaming model. Hum Technol Interdiscip J Humans ICT Environ. 2006;2(2):187–201.
16. Procci K, Singer AR, Levy KR, Bowers C. Measuring the flow experience of gamers: an evaluation of the DFS-2. Comput Human Behav. 2012;28(6):2306–12. https://doi.org/10.1016/j.chb.2012.06.039.
17. Zhang J, Fang X, Chan SS. Development of an instrument for studying flow in computer game play. Int J Hum Comput Interact. 2013;29(7):456–70.
18. Laver K, Lange B, George S, Deutsch J, Saposnik G, Crotty M. Virtual reality for stroke rehabilitation. Cochrane Database Syst Rev. 2017;(11).
19. O’Neil O, Fernandez MM, Herzog J, Beorchia M, Gower V, Gramatica F, et al. Virtual reality for neurorehabilitation: insights from 3 European clinics. PM R. 2018;10(9):S198–206.
20. van Beek JJ, van Wegen EE, Bohlhalter S, Vanbellingen T. Exergaming-based dexterity training in persons with Parkinson disease: a pilot feasibility study. J Neurol Phys Ther. 2019;43(3):168–74.
21. Nef T, Chesham A, Schütz N, Botros AA, Vanbellingen T, Burgunder J, et al. Development and evaluation of maze-like puzzle games to assess cognitive and motor function in aging and neurodegenerative diseases. 2020;12(April).
22. Ma M, Zheng H. Virtual reality and serious games in healthcare. In: Brahnam S, Jain L, editors. Advanced computational intelligence paradigms in healthcare. Berlin, Heidelberg: Springer; 2011. p. 169–92.
23. Swanson LR, Whittinghill DM. Intrinsic or extrinsic? Using videogames to motivate stroke survivors: a systematic review. Games Health J. 2015;4(3):253–8.
24. Barbosa H, Castro AV, Carrapatoso E. Serious games and rehabilitation for elderly adults. GSJ. 2018;6(1):275–83.
25. Baur K, Schättin A, De Bruin ED, Riener R, Duarte JE, Wolf P. Trends in robot-assisted and virtual reality-assisted neuromuscular therapy: a systematic review of health-related multiplayer games. J Neuroeng Rehabil. 2018;15(1).
26. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7).
27. Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57. https://doi.org/10.1007/s11136-018-1798-3.
28. Ottiger B, Van Wegen E, Sigrist K, Nef T, Nyffeler T, Kwakkel G, et al. Getting into a “Flow” state: a systematic review of flow experience in neurological disease. PROSPERO. 2020;CRD42020187510. https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020187510.
29. Foletto AA, d’Ornellas MC, Prado ALC. Serious games for Parkinson’s disease fine motor skills rehabilitation using natural interfaces. Stud Health Technol Inform. 2017;245:74–8.
30. Shirzad N, Van Der Loos HFM. Evaluating the user experience of exercising reaching motions with a robot that predicts desired movement difficulty. J Mot Behav. 2016;48(1):31–46.
31. Belchior P, Marsiske M, Sisco S, Yam A, Mann W. Older adults’ engagement with a video game training program. Act Adapt Aging. 2012;36(4):269–79.
32. Barry G, van Schaik P, MacSween A, Dixon J, Martin D. Exergaming (XBOX Kinect™) versus traditional gym-based exercise for postural control, flow and technology acceptance in healthy adults: a randomised controlled trial. BMC Sports Sci Med Rehabil. 2016;8(1):1–11. https://doi.org/10.1186/s13102-016-0050-0.
33. Pedroli E, Greci L, Colombo D, Serino S, Cipresso P, Arlati S, Mondellini M, Boilini L, Giussani V, Goulene K, Agostoni M, Sacco M, Stramba-Badiale M, Riva G, Gaggioli A. Characteristics, usability, and users experience of a system combining cognitive and physical therapy in a virtual environment: Positive Bike. Sensors. 2018;18(7):2343.
34. de Sampaio Barros MF, Araújo-Moreira FM, Trevelin LC, Radel R. Flow experience and the mobilization of attentional resources. Cogn Affect Behav Neurosci. 2018;18(4):810–23.
35. Kawabata M. Facilitating flow experience in physical education settings. Psychol Sport Exerc. 2018;38:28–38.
36. Yoshida I, Hirao K, Kobayashi R. Effect of adjusting the challenge-skill balance for occupational therapy: study protocol for a randomised controlled trial. BMJ Open. 2018;8(12):1–6.
37. Thomas S, Fazakarley L, Thomas PW, Collyer S, Brenton S, Perring S, et al. Mii-vitaliSe: a pilot randomised controlled trial of a home gaming system (Nintendo Wii) to increase activity levels, vitality and well-being in people with multiple sclerosis. BMJ Open. 2017;7(9):1–16.
38. Esfahlani SS, Thompson T, Parsa AD, Brown I, Cirstea S. ReHabgame: a non-immersive virtual reality rehabilitation system with applications in neuroscience. Heliyon. 2018. https://doi.org/10.1016/j.heliyon.2018.e00526.
39. Lin CS, Jeng MY, Yeh TM. The elderly perceived meanings and values of virtual reality leisure activities: a means-end chain approach. Int J Environ Res Public Health. 2018;15(4).
40. Martin AJ, Jackson SA. Brief approaches to assessing task absorption and enhanced subjective experience: examining “short” and “core” flow in diverse performance domains. Motiv Emot. 2008;32(3):141–57.

Yoshida K, Asakawa K, Yamauchi T, Sakuraba S, Sawamura D, Murakami Y, et al. The flow state scale for occupational tasks: development, reliability, and validity. Hong Kong J Occup Ther [Internet]. 2013;23(2):54–61. https://doi.org/10.1016/j.hkjot.2013.09.002 .

van der Kuil MNA, Visser-Meily JMA, Evers AWM, van der Ham IJM. A usability study of a serious game in cognitive rehabilitation: a compensatory navigation training in acquired brain injury patients. Front Psychol. 2018;9(JUN):1–12.

Galna B, Jackson D, Schofield G, McNaney R, Webster M, Barry G, et al. Retraining function in people with Parkinson’s disease using the Microsoft kinect: game design and pilot testing. J Neuroeng Rehabil. 2014;11(1):1–12.

Robinson J, Dixon J, Macsween A, van Schaik P, Martin D. The effects of exergaming on balance, gait, technology acceptance and flow experience in people with multiple sclerosis: a randomized controlled trial. BMC Sports Sci Med Rehabil. 2015;7(1):1–12.

Shin JH, Ryu H, Jang SH. A task-specific interactive game-based virtual reality rehabilitation system for patients with stroke: a usability test and two clinical experiments. J Neuroeng Rehabil. 2014;11(1):1–10.

Ku J, Lim T, Han Y, Kang YJ. Mobile game induces active engagement on neuromuscular electrical stimulation training in patients with stroke. Cyberpsychol, Behav Soc Netw. 2018;21(8):504–10.

Yoshida K, Sawamura D, Ogawa K, Ikoma K, Asakawa K, Yamauchi T, et al. Flow experience during attentional training improves cognitive functions in patients with traumatic brain injury: an exploratory case study. Hong Kong J Occup Ther. 2014;24(2):81–7.

Yoshida K, Ogawa K, Mototani T, Inagaki Y, Sawamura D, Ikoma K, et al. Flow experience enhances the effectiveness of attentional training: a pilot randomized controlled trial of patients with attention deficits after traumatic brain injury. NeuroRehabilitation [Internet]. 2018;43(2):183–93. http://hdl.handle.net/2115/71654 .

Yoshida I, Hirao K, Kobayashi R. The effect on subjective quality of life of occupational therapy based on adjusting the challenge–skill balance: a randomized controlled trial. Clin Rehabil. 2019;33(11):1732–46.

Article   PubMed   PubMed Central   Google Scholar  

Yoshida I, Hirao K, Nonaka T. Adjusting challenge-skill balance to improve quality of life in older adults: a randomized controlled trial. Am J Occup Ther. 2018;72(1):3–10.

Law M, Baptiste S, Mccoll M, Opzoomer A, Polatajko H, Pollock N. The Canadian occupational performance measure: an outcome measure for occupational therapy. Can J Occup Ther. 1990;57(2):82–7.

Article   CAS   PubMed   Google Scholar  

Baker FA, Rickard N, Tamplin J, Roddy C. Flow and meaningfulness as mechanisms of change in self-concept and well-being following a songwriting intervention for people in the early phase of neurorehabilitation. Front Hum Neurosci. 2015;9(May):1–10.

Jackson SA, Ford SK, Kimiecik JC, Marsh HW. Psychological correlates of flow in sport. J Sport Exerc Psychol. 1998;20(4):358–78.

Vlachopoulos SP, Karageorghis CI, Terry PC. Hierarchical confirmatory factor analysis of the Flow State Scale in exercise. J Sport Sci. 2000;18:815–23.

Article   CAS   Google Scholar  

Doganis G, Iosifidou P, Vlachopoulos S. Factor structure and internal consistency of the Greek version of the Flow State Scale. Percept Mot Skills. 2000;91:1231–40.

Stavrou NA, Zervas Y. Confirmatory factor analysis of the Flow State Scale in sports. Int J Sport Exerc Psychol. 2004;2(2):161–81.

García Calvo T, Jiménez Castro R, Santos-Rosa Ruano FJ, Reina Vaíllo R, Cervelló GE. Psychometric properties of the Spanish version of the Flow State Scale. Span J Psychol. 2008;11(2):660–9.

Jackson SA. Athletes in flow: a qualitative investigation of flow states in elite figure skaters. J Appl Sport Psychol. 1992;4(2):161–80.

Jackson SA. Factors influencing the occurrence of flow state in elite athletes. J Appl Sport Psychol. 1995;7(2):138–66.

van der Kooij K, van Dijsseldonk R, van Veen M, Steenbrink F, de Weerd C, Overvliet KE. Gamification as a sustainable source of enjoyment during balance and gait exercises. Front Psychol. 2019;10(MAR):1–12.

Kiili K. Content creation challenges and flow experience in educational games: the IT-Emperor case. Internet High Educ. 2005;8(3):183–98.

Cirstea CM, Ptito A, Levin MF. Feedback and cognition in arm motor skill reacquisition after stroke. Stroke. 2006;37(5):1237–42.

Kitago T, Krakauer JW. Motor learning principles for neurorehabilitation. Handb Clin Neurol. 2013;110:93–103.

Stavrou NAM, Psychountaki M, Georgiadis E, Karteroliotis K, Zervas Y. Flow theory—goal orientation theory: positive experience is related to athlete’s goal orientation. Front Psychol. 2015;6(OCT):1–12.

Download references

Acknowledgements

Not applicable.

Funding

The present study was funded by the Jacques and Gloria Gossweiler Foundation.

Author information

Authors and Affiliations

Neurocenter, Luzerner Kantonsspital, Spitalstrasse 31, 6000, Luzern 16, Switzerland

Beatrice Ottiger, Katja Keller, Thomas Nyffeler & Tim Vanbellingen

Department of Rehabilitation Medicine, Amsterdam Movement Sciences, Amsterdam UMC, VU University Medical Center, Amsterdam, The Netherlands

Erwin Van Wegen & Gert Kwakkel

Amsterdam Neuroscience, Vrije Universiteit, Amsterdam, The Netherlands

ARTORG Center for Biomedical Engineering Research, Gerontechnology and Rehabilitation Group, University Bern, 3008, Bern, Switzerland

Tobias Nef, Thomas Nyffeler & Tim Vanbellingen

Department of Physical Therapy and Human Movement Sciences, Northwestern University, Evanston, IL, USA

Gert Kwakkel


Contributions

Study objective: BO, EVW, TN, TNyf, GK, TV. Literature search: BO, KK, TV. Data extraction: BO, KK, TV. Methodological quality assessment: BO, KK, EVW, TV. Critical review and approval of manuscript: BO, EVW, KK, TN, TNyf, GK, TV. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Tim Vanbellingen .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Search strategy.

Additional file 2.

Methodology quality and results of flow questionnaires per measurement properties and the rating criteria for good measurement properties.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Ottiger, B., Van Wegen, E., Keller, K. et al. Getting into a “Flow” state: a systematic review of flow experience in neurological diseases. J NeuroEngineering Rehabil 18, 65 (2021). https://doi.org/10.1186/s12984-021-00864-w

Download citation

Received : 27 August 2020

Accepted : 14 April 2021

Published : 20 April 2021

DOI : https://doi.org/10.1186/s12984-021-00864-w


Keywords

  • Systematic review
  • Flow experience
  • Neurological diseases

Journal of NeuroEngineering and Rehabilitation

ISSN: 1743-0003



Systematic Reviews: Step 8: Write the Review

Created by health science librarians.



About Step 8: Write the Review



In Step 8, you will write an article or a paper about your systematic review.  It will likely have five sections: introduction, methods, results, discussion, and conclusion.  You will: 

  • Review the reporting standards you will use, such as PRISMA. 
  • Gather your completed data tables and PRISMA chart. 
  • Write the Introduction to the topic and your study, Methods of your research, Results of your research, and Discussion of your results.
  • Write an Abstract describing your study and a Conclusion summarizing your paper. 
  • Cite the studies included in your systematic review and any other articles you may have used in your paper. 
  • If you wish to publish your work, choose a target journal for your article.

The PRISMA Checklist will help you report the details of your systematic review. Your paper will also include a PRISMA chart that is an image of your research process. 


Reporting your review with PRISMA

To write your review, you will need the data from your PRISMA flow diagram .  Review the PRISMA checklist to see which items you should report in your methods section.

Managing your review with Covidence

When you screen in Covidence, it will record the numbers you need for your PRISMA flow diagram from duplicate removal through inclusion of studies.  You may need to add additional information, such as the number of references from each database, citations you find through grey literature or other searching methods, or the number of studies found in your previous work if you are updating a systematic review.

How a librarian can help with Step 8

A librarian can advise you on the process of organizing and writing up your systematic review, including: 

  • Applying the PRISMA reporting templates and the level of detail to include for each element
  • How to report a systematic review search strategy and your review methodology in the completed review
  • How to use prior published reviews to guide you in organizing your manuscript 

Reporting standards & guidelines

Be sure to reference reporting standards when writing your review. This helps ensure that you communicate essential components of your methods, results, and conclusions. There are a number of tools that can be used to ensure compliance with reporting guidelines. A few review-writing resources are listed below.

  • Cochrane Handbook - Chapter 15: Interpreting results and drawing conclusions
  • JBI Manual for Evidence Synthesis - Chapter 1: systematic reviews
  • PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) The aim of the PRISMA Statement is to help authors improve the reporting of systematic reviews and meta-analyses.

Tools for writing your review

  • RevMan (Cochrane Training)
  • Methods Wizard (Systematic Review Accelerator) The Methods Wizard is part of the Systematic Review Accelerator created by Bond University and the Institute for Evidence-Based Healthcare.
  • UNC HSL Systematic Review Manuscript Template Systematic review manuscript template(.doc) adapted from the PRISMA 2020 checklist. This document provides authors with template for writing about their systematic review. Each table contains a PRISMA checklist item that should be written about in that section, the matching PRISMA Item number, and a box where authors can indicate if an item has been completed. Once text has been added, delete any remaining instructions and the PRISMA checklist tables from the end of each section.
  • The PRISMA 2020 statement: an updated guideline for reporting systematic reviews The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies.
  • PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews This document is intended to enhance the use, understanding and dissemination of the PRISMA 2020 Statement. Through examples and explanations, the meaning and rationale for each checklist item are presented.

The PRISMA checklist

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) is a 27-item checklist used to improve transparency in systematic reviews. These items cover all aspects of the manuscript, including title, abstract, introduction, methods, results, discussion, and funding. The PRISMA checklist can be downloaded in PDF or Word files.

  • PRISMA 2020 Checklists Download the 2020 PRISMA Checklists in Word or PDF formats or download the expanded checklist (PDF).

The PRISMA flow diagram

The PRISMA Flow Diagram visually depicts the flow of studies through each phase of the review process. The PRISMA Flow Diagram can be downloaded in Word files.

  • PRISMA 2020 Flow Diagrams The flow diagram depicts the flow of information through the different phases of a systematic review. It maps out the number of records identified, included and excluded, and the reasons for exclusions. Different templates are available depending on the type of review (new or updated) and sources used to identify studies.

Documenting grey literature and/or hand searches

If you have also searched additional sources, such as professional organization websites, cited or citing references, etc., document your grey literature search using the flow diagram template version 1 PRISMA 2020 flow diagram for new systematic reviews which included searches of databases, registers and other sources or the version 2 PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases, registers and other sources . 

Complete the boxes documenting your database searches,  Identification of studies via databases and registers, according to the PRISMA flow diagram instructions.  Complete the boxes documenting your grey literature and/or hand searches on the right side of the template, Identification of studies via other methods, using the steps below.

Need help completing the PRISMA flow diagram?

There are different PRISMA flow diagram templates for new and updated reviews, as well as different templates for reviews with and without grey literature searches. Be sure you download the correct template to match your review methods, then follow the steps below for each portion of the diagram you have available.

View the step-by-step explanation of the PRISMA flow diagram

Step 1: Preparation Download the flow diagram template version 1 PRISMA 2020 flow diagram for new systematic reviews which included searches of databases and registers only or the version 2 PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases and registers only . 

PRISMA Diagram: Step by Step

Step 2: Run the search for each database individually, including ALL your search terms, any MeSH or other subject headings, truncation (like hemipleg*), and/or wildcards (like sul?ur). Apply all your limits (such as years of search, English language only, and so on). Once all search terms have been combined and you have applied all relevant limits, you should have a final number of records or articles for each database. Enter this information in the top left box of the PRISMA flow chart. You should add the total number of combined results from all databases (including duplicates) after the equal sign in the Databases box. Many researchers also add notations in the box for the number of results from each database search, for example, PubMed (n=335), Embase (n=600), and so on. If you search trial registers, you should enter that number after the equal sign in the Registers box.

NOTE: Some citation managers automatically remove duplicates with each file you import. Be sure to capture the number of articles from your database searches before any duplicates are removed.

Step 3: To avoid reviewing duplicate articles, you need to remove any articles that appear more than once in your results. You may want to export the entire list of articles from each database to a citation manager such as EndNote, Sciwheel, Zotero, or Mendeley (including both citation and abstract in your file) and remove the duplicates there. If you are using Covidence for your review, you should also add the duplicate articles identified in Covidence to the citation manager number. Enter the number of records removed as duplicates in the second box on your PRISMA template. If you are using automation tools to help evaluate the relevance of citations in your results, you would also enter that number here.

If you are using Covidence to screen your articles, you can copy the numbers from the PRISMA diagram in your Covidence review into the boxes mentioned below. Covidence does not include the number of results from each database, so you will need to keep track of that number yourself.

Step 4: Add the number of articles that you will screen. This should be the number of records identified minus the number from the duplicates removed box.

Step 5: You will need to screen the titles and abstracts for articles which are relevant to your research question. Any articles that appear to help you provide an answer to your research question should be included. Record the number of articles excluded through title/abstract screening in the box to the right titled "Records excluded." You can optionally add exclusion reasons at this level, but they are not required until full-text screening.

Step 6: This is the number of articles you obtain in preparation for full-text screening. Subtract the number of excluded records (Step 5) from the total number screened (Step 4) and this will be your number sought for retrieval.

Step 7: List the number of articles for which you are unable to find the full text. Remember to use Find@UNC and request articles to see if we can order them from other libraries before automatically excluding them.

Step 8: This should be the number of reports sought for retrieval (Step 6) minus the number of reports not retrieved (Step 7). Review the full text for these articles to assess their eligibility for inclusion in your systematic review.

Step 9: After reviewing all articles in the full-text screening stage for eligibility, enter the total number of articles you exclude in the box titled "Reports excluded," and then list your reasons for excluding the articles as well as the number of records excluded for each reason. Examples include wrong setting, wrong patient population, wrong intervention, wrong dosage, etc. You should only count an excluded article once in your list even if it meets multiple exclusion criteria.

Step 10: The final step is to subtract the number of records excluded during the review of full texts (Step 9) from the total number of full texts reviewed (Step 8). Enter this number in the box labeled "Studies included in review," combining numbers with your grey literature search results in this box if needed.

You have now completed your PRISMA flow diagram, unless you have also performed searches in non-database sources or are performing a search update. If so, complete those portions of the template as well.
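The counting in the steps above is simple subtraction at each stage. The sketch below mirrors that arithmetic; all of the tallies and database names are invented for illustration and are not from the guide.

```python
# Illustrative sketch: deriving PRISMA 2020 flow-diagram counts from raw tallies.
# Every number here is a made-up example; only the arithmetic follows the steps above.

db_results = {"PubMed": 335, "Embase": 600, "CINAHL": 120}  # per-database totals (Step 2)
records_identified = sum(db_results.values())               # combined total, duplicates included

duplicates_removed = 210                                    # from your citation manager (Step 3)
records_screened = records_identified - duplicates_removed  # records screened (Step 4)

records_excluded = 700                                      # title/abstract exclusions (Step 5)
reports_sought = records_screened - records_excluded        # reports sought for retrieval (Step 6)

reports_not_retrieved = 5                                   # full text unavailable (Step 7)
reports_assessed = reports_sought - reports_not_retrieved   # assessed for eligibility (Step 8)

reports_excluded = 110                                      # full-text exclusions, with reasons (Step 9)
studies_included = reports_assessed - reports_excluded      # studies included in review (Step 10)

print(records_screened, reports_sought, reports_assessed, studies_included)
# → 845 145 140 30
```

Keeping a small script or spreadsheet like this alongside your screening log makes it easy to confirm that every box in the diagram reconciles with the one above it.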

View the step-by-step explanation of the grey literature & hand searching portion of the PRISMA flow diagram

Step 1: Preparation Download the flow diagram template version 1 PRISMA 2020 flow diagram for new systematic reviews which included searches of databases, registers and other sources or the version 2 PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases, registers and other sources . 

PRISMA grey literature step-by-step
If you have identified articles through other sources than databases (such as manual searches through reference lists of articles you have found, or search engines like Google Scholar), enter the total number of records from each source type in the box on the top right of the flow diagram.

Step 2: This should be the total number of reports you obtain from each grey literature source.

Step 3: List the number of documents for which you are unable to find the full text. Remember to use Find@UNC and request items to see if we can order them from other libraries before automatically excluding them.

Step 4: This should be the number of grey literature reports sought for retrieval (Step 2) minus the number of reports not retrieved (Step 3). Review the full text for these items to assess their eligibility for inclusion in your systematic review.

Step 5: After reviewing all items in the full-text screening stage for eligibility, enter the total number of articles you exclude in the box titled "Reports Excluded," and then list your reasons for excluding the item as well as the number of items excluded for each reason. Examples include wrong setting, wrong patient population, wrong intervention, wrong dosage, etc. You should only count an excluded item once in your list even if it meets multiple exclusion criteria.

Step 6: The final step is to subtract the number of excluded articles or records during the eligibility review of full texts from the total number of articles reviewed for eligibility. Enter this number in the box labeled "Studies included in review," combining numbers with your database search results in this box if needed. You have now completed your PRISMA flow diagram, which you can now include in the results section of your article or assignment.

View the step-by-step explanation of review update portion of the PRISMA flow diagram

Step 1: Preparation Download the flow diagram template version 2 PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases and registers only or the version 2 PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases, registers and other sources . 

PRISMA review update step-by-step

In the Previous
Studies column on the left side of your PRISMA flow diagram review
update template, indicate the number of studies included in the previous
version of your systematic review and the number of reports of studies
included in the previous version of your review.

 

At the bottom of the column,
Identification of studies via databases and registers, there will be a box
to indicate the number of new studies included in the review and the
number of reports of new included studies.  This box should contain the
number of any new items from your review update. 

There will also be a box for the total number of studies included in your
review update and the number of reports of total included studies.  This
box should contain the sum of studies and reports from your previous
systematic review and the studies and reports from your new review
update.

For more information about updating your systematic review, see the box Updating Your Review? on the Step 3: Conduct Literature Searches page of the guide.

Sections of a Scientific Manuscript

Scientific articles often follow the IMRaD format: Introduction, Methods, Results, and Discussion.  You will also need a title and an abstract to summarize your research.

You can read more about scientific writing through the library guides below.

  • Structure of Scholarly Articles & Peer Review: explains the standard parts of a medical research article, compares scholarly journals, professional trade journals, and magazines, and explains peer review and how to find peer-reviewed articles and journals
  • Writing in the Health Sciences (For Students and Instructors)
  • Citing & Writing Tools & Guides Includes links to guides for popular citation managers such as EndNote, Sciwheel, Zotero; copyright basics; APA & AMA Style guides; Plagiarism & Citing Sources; Citing & Writing: How to Write Scientific Papers

Sections of a Systematic Review Manuscript

Systematic reviews follow the same structure as original research articles, but you will need to report on your search instead of on details like the participants or sampling. Sections of your manuscript are shown as bold headings in the PRISMA checklist.

Title: Describe your manuscript and state whether it is a systematic review, meta-analysis, or both.

Abstract: Structure the abstract and include (as applicable): background, objectives, data sources, study eligibility criteria, participants, interventions, quality assessment and synthesis methods, results, limitations, conclusions, implications of key findings, and systematic review registration number.

Introduction: Describe the rationale for the review and provide a statement of questions being addressed.

Methods: Include details regarding the protocol, eligibility criteria, databases searched, full search strategy of at least one database (often reported in an appendix), and the study selection process. Describe how data were extracted and analyzed. If a librarian is part of your research team, that person may be best suited to write this section.

Results: Report the numbers of articles screened at each stage using a PRISMA diagram. Include information about included study characteristics, risk of bias (quality assessment) within studies, and results across studies.

Discussion: Summarize main findings, including the strength of evidence and limitations of the review. Provide a general interpretation of the results and implications for future research.

Funding: Describe any sources of funding for the systematic review.

Appendix: Include the entire search strategy for at least one database (include search strategies for all databases searched for more transparency).

Refer to the PRISMA checklist for more information.

Plain language summaries for systematic reviews

Consider including a Plain Language Summary (PLS) when you publish your systematic review. Like an abstract, a PLS gives an overview of your study, but is specifically written and formatted to be easy for non-experts to understand. 

Tips for writing a PLS:

  • Use clear headings e.g. "why did we do this study?"; "what did we do?"; "what did we find?"
  • Use active voice e.g. "we searched for articles in 5 databases" instead of "5 databases were searched"
  • Consider need-to-know vs. nice-to-know: what is most important for readers to understand about your study? Be sure to provide the most important points without misrepresenting your study or misleading the reader. 
  • Keep it short: Many journals recommend keeping your plain language summary less than 250 words. 
  • Check journal guidelines: Your journal may have specific guidelines about the format of your plain language summary and when you can publish it. Look at journal guidelines before submitting your article. 

Learn more about Plain Language Summaries: 

  • Rosenberg, A., Baróniková, S., & Feighery, L. (2021). Open Pharma recommendations for plain language summaries of peer-reviewed medical journal publications. Current Medical Research and Opinion, 37(11), 2015–2016.  https://doi.org/10.1080/03007995.2021.1971185
  • Lobban, D., Gardner, J., & Matheis, R. (2021). Plain language summaries of publications of company-sponsored medical research: what key questions do we need to address? Current Medical Research and Opinion, 1–12. https://doi.org/10.1080/03007995.2021.1997221
  • Cochrane Community. (2022, March 21). Updated template and guidance for writing Plain Language Summaries in Cochrane Reviews now available. https://community.cochrane.org/news/updated-template-and-guidance-writing-plain-language-summaries-cochrane-reviews-now-available
  • You can also look at our Health Literacy LibGuide:  https://guides.lib.unc.edu/healthliteracy 

Writing the review: webinars

How to Approach Writing a Background Section

What Makes a Good Discussion Section

Writing Up Risk of Bias

Developing Your Implications for Research Section

  • Last Updated: Jul 15, 2024 4:55 PM
  • URL: https://guides.lib.unc.edu/systematic-reviews


Research Process :: Step by Step


Organize the literature review into sections that present themes or identify trends, including relevant theory. You are not trying to list all the material published, but to synthesize and evaluate it according to the guiding concept of your thesis or research question.  

What is a literature review?

A literature review is an account of what has been published on a topic by accredited scholars and researchers. Occasionally you will be asked to write one as a separate assignment, but more often it is part of the introduction to an essay, research report, or thesis. In writing the literature review, your purpose is to convey to your reader what knowledge and ideas have been established on a topic, and what their strengths and weaknesses are. As a piece of writing, the literature review must be defined by a guiding concept (e.g., your research objective, the problem or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the material available, or a set of summaries.

A literature review must do these things:

  • be organized around and related directly to the thesis or research question you are developing
  • synthesize results into a summary of what is and is not known
  • identify areas of controversy in the literature
  • formulate questions that need further research

Ask yourself questions like these:

  • What is the specific thesis, problem, or research question that my literature review helps to define?
  • What type of literature review am I conducting? Am I looking at issues of theory? methodology? policy? quantitative research (e.g., on the effectiveness of a new procedure)? qualitative research (e.g., studies of loneliness among migrant workers)?
  • What is the scope of my literature review? What types of publications am I using (e.g., journals, books, government documents, popular media)? What discipline am I working in (e.g., nursing, psychology, sociology, medicine)?
  • How good was my information seeking? Has my search been wide enough to ensure I've found all the relevant material? Has it been narrow enough to exclude irrelevant material? Is the number of sources I've used appropriate for the length of my paper?
  • Have I critically analyzed the literature I use? Do I follow through a set of concepts and questions, comparing items to each other in the ways they deal with them? Instead of just listing and summarizing items, do I assess them, discussing strengths and weaknesses?
  • Have I cited and discussed studies contrary to my perspective?
  • Will the reader find my literature review relevant, appropriate, and useful?

Ask yourself questions like these about each book or article you include:

  • Has the author formulated a problem/issue?
  • Is it clearly defined? Is its significance (scope, severity, relevance) clearly established?
  • Could the problem have been approached more effectively from another perspective?
  • What is the author's research orientation (e.g., interpretive, critical science, combination)?
  • What is the author's theoretical framework (e.g., psychological, developmental, feminist)?
  • What is the relationship between the theoretical and research perspectives?
  • Has the author evaluated the literature relevant to the problem/issue? Does the author include literature taking positions she or he does not agree with?
  • In a research study, how good are the basic components of the study design (e.g., population, intervention, outcome)? How accurate and valid are the measurements? Is the analysis of the data accurate and relevant to the research question? Are the conclusions validly based upon the data and analysis?
  • In material written for a popular readership, does the author use appeals to emotion, one-sided examples, or rhetorically-charged language and tone? Is there an objective basis to the reasoning, or is the author merely "proving" what he or she already believes?
  • How does the author structure the argument? Can you "deconstruct" the flow of the argument to see whether or where it breaks down logically (e.g., in establishing cause-effect relationships)?
  • In what ways does this book or article contribute to our understanding of the problem under study, and in what ways is it useful for practice? What are the strengths and limitations?
  • How does this book or article relate to the specific thesis or question I am developing?

Text written by Dena Taylor, Health Sciences Writing Centre, University of Toronto

http://www.writing.utoronto.ca/advice/specific-types-of-writing/literature-review

  • Last Updated: Jun 13, 2024 4:27 PM
  • URL: https://libguides.uta.edu/researchprocess

University of Texas Arlington Libraries 702 Planetarium Place · Arlington, TX 76019 · 817-272-3000


Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Writing a Literature Review


Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis is
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)
Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically Evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If you draw your sources from different disciplines or fields that use a variety of research methods, you can compare the results and conclusions that emerge from different approaches, for example:
  • Qualitative versus quantitative research
  • Empirical versus theoretical scholarship
  • Divide the research by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


Literature Review: The What, Why and How-to Guide — Introduction


What are Literature Reviews?

So, what is a literature review? "A literature review is an account of what has been published on a topic by accredited scholars and researchers. In writing the literature review, your purpose is to convey to your reader what knowledge and ideas have been established on a topic, and what their strengths and weaknesses are. As a piece of writing, the literature review must be defined by a guiding concept (e.g., your research objective, the problem or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the material available, or a set of summaries." Taylor, D.  The literature review: A few tips on conducting it . University of Toronto Health Sciences Writing Centre.

Goals of Literature Reviews

What are the goals of creating a Literature Review?  A literature review could be written to accomplish different aims:

  • To develop a theory or evaluate an existing theory
  • To summarize the historical or existing state of a research topic
  • To identify a problem in a field of research

Baumeister, R. F., & Leary, M. R. (1997). Writing narrative literature reviews .  Review of General Psychology , 1 (3), 311-320.

What kinds of sources require a Literature Review?

  • A research paper assigned in a course
  • A thesis or dissertation
  • A grant proposal
  • An article intended for publication in a journal

All these instances require you to collect what has been written about your research topic so that you can demonstrate how your own research sheds new light on the topic.

Types of Literature Reviews

What kinds of literature reviews are written?

Narrative review: The purpose of this type of review is to describe the current state of the research on a specific topic and to offer a critical analysis of the literature reviewed. Studies are grouped by research/theoretical categories, and themes and trends, strengths and weaknesses, and gaps are identified. The review ends with a conclusion section that summarizes the findings regarding the state of the research on the topic, identifies the gaps and, if applicable, explains how the author's research will address those gaps and expand knowledge on the topic reviewed.

  • Example : Predictors and Outcomes of U.S. Quality Maternity Leave: A Review and Conceptual Framework:  10.1177/08948453211037398  

Systematic review : "The authors of a systematic review use a specific procedure to search the research literature, select the studies to include in their review, and critically evaluate the studies they find." (p. 139). Nelson, L. K. (2013). Research in Communication Sciences and Disorders . Plural Publishing.

  • Example : The effect of leave policies on increasing fertility: a systematic review:  10.1057/s41599-022-01270-w

Meta-analysis : "Meta-analysis is a method of reviewing research findings in a quantitative fashion by transforming the data from individual studies into what is called an effect size and then pooling and analyzing this information. The basic goal in meta-analysis is to explain why different outcomes have occurred in different studies." (p. 197). Roberts, M. C., & Ilardi, S. S. (2003). Handbook of Research Methods in Clinical Psychology . Blackwell Publishing.

  • Example : Employment Instability and Fertility in Europe: A Meta-Analysis:  10.1215/00703370-9164737

Meta-synthesis : "Qualitative meta-synthesis is a type of qualitative study that uses as data the findings from other qualitative studies linked by the same or related topic." (p.312). Zimmer, L. (2006). Qualitative meta-synthesis: A question of dialoguing with texts .  Journal of Advanced Nursing , 53 (3), 311-318.

  • Example : Women’s perspectives on career successes and barriers: A qualitative meta-synthesis:  10.1177/05390184221113735

Literature Reviews in the Health Sciences

  • UConn Health subject guide on systematic reviews Explanation of the different review types used in health sciences literature as well as tools to help you find the right review type
  • Last Updated: Sep 21, 2022 2:16 PM
  • URL: https://guides.lib.uconn.edu/literaturereview


How to Write a Literature Review: Six Steps to Get You from Start to Finish


Tanya Golash-Boza, Associate Professor of Sociology, University of California

February 03, 2022

Writing a literature review is often the most daunting part of writing an article, book, thesis, or dissertation. “The literature” seems (and often is) massive. I have found it helpful to be as systematic as possible when completing this gargantuan task.

Sonja Foss and William Walters* describe an efficient and effective way of writing a literature review. Their system provides an excellent guide for getting through the massive amounts of literature for any purpose: in a dissertation, an M.A. thesis, or preparing a research article for publication in any field of study. Below is a summary of the steps they outline as well as a step-by-step method for writing a literature review.

How to Write a Literature Review

Step One: Decide on your areas of research:

Before you begin to search for articles or books, decide beforehand what areas you are going to research. Make sure that you only get articles and books in those areas, even if you come across fascinating books in other areas. A literature review I am currently working on, for example, explores barriers to higher education for undocumented students.

Step Two: Search for the literature:

Conduct a comprehensive bibliographic search of books and articles in your area. Read the abstracts online and download and/or print those articles that pertain to your area of research. Find books in the library that are relevant and check them out. Set a specific time frame for how long you will search. It should not take more than two or three dedicated sessions.

Step Three: Find relevant excerpts in your books and articles:

Skim the contents of each book and article and look specifically for these five things:

1. Claims, conclusions, and findings about the constructs you are investigating

2. Definitions of terms

3. Calls for follow-up studies relevant to your project

4. Gaps you notice in the literature

5. Disagreement about the constructs you are investigating

When you find any of these five things, type the relevant excerpt directly into a Word document. Don’t summarize, as summarizing takes longer than simply typing the excerpt. Make sure to note the name of the author and the page number following each excerpt. Do this for each article and book that you have in your stack of literature. When you are done, print out your excerpts.

Step Four: Code the literature:

Get out a pair of scissors and cut each excerpt out. Now, sort the pieces of paper into similar topics. Figure out what the main themes are. Place each excerpt into a themed pile. Make sure each note goes into a pile. If there are excerpts that you can’t figure out where they belong, separate those and go over them again at the end to see if you need new categories. When you finish, place each stack of notes into an envelope labeled with the name of the theme.

Step Five: Create Your Conceptual Schema:

Type, in large font, the name of each of your coded themes. Print this out, and cut the titles into individual slips of paper. Take the slips of paper to a table or large workspace and figure out the best way to organize them. Are there ideas that go together or that are in dialogue with each other? Are there ideas that contradict each other? Move around the slips of paper until you come up with a way of organizing the codes that makes sense. Write the conceptual schema down before you forget or someone cleans up your slips of paper.

Step Six: Begin to Write Your Literature Review:

Choose any section of your conceptual schema to begin with. You can begin anywhere, because you already know the order. Find the envelope with the excerpts in them and lay them on the table in front of you. Figure out a mini-conceptual schema based on that theme by grouping together those excerpts that say the same thing. Use that mini-conceptual schema to write up your literature review based on the excerpts that you have in front of you. Don’t forget to include the citations as you write, so as not to lose track of who said what. Repeat this for each section of your literature review.

Once you complete these six steps, you will have a complete draft of your literature review. The great thing about this process is that it breaks down into manageable steps something that seems enormous: writing a literature review.

I think that Foss and Walters's system for writing the literature review is ideal for a dissertation, because a Ph.D. candidate has already read widely in his or her field through graduate seminars and comprehensive exams.

It may be more challenging for M.A. students, unless you are already familiar with the literature. It is always hard to figure out how much you need to read for deep meaning, and how much you just need to know what others have said. That balance will depend on how much you already know.

For people writing literature reviews for articles or books, this system also could work, especially when you are writing in a field with which you are already familiar. The mere fact of having a system can make the literature review seem much less daunting, so I recommend this system for anyone who feels overwhelmed by the prospect of writing a literature review.

*Destination Dissertation: A Traveler's Guide to a Done Dissertation


Writing in the Health and Social Sciences


Systematic Literature Reviews: Steps & Resources

Literature Review & Systematic Review Steps




These steps for conducting a systematic literature review are listed below.

Also see subpages for more information about:

  • The different types of literature reviews, including systematic reviews and other evidence synthesis methods
  • Tools & Tutorials
  1. Develop a Focused Question
  2. Scope the Literature (Initial Search)
  3. Refine & Expand the Search
  4. Limit the Results
  5. Download Citations
  6. Abstract & Analyze
  7. Create Flow Diagram
  8. Synthesize & Report Results

1. Develop a Focused Question

Consider the PICO Format: Population/Problem, Intervention, Comparison, Outcome

Focus on defining the Population or Problem and Intervention (don't narrow by Comparison or Outcome just yet!)

"What are the effects of the Pilates method for patients with low back pain?"

Tools & Additional Resources:

  • PICO Question Help
  • Stillwell, S. B., Fineout-Overholt, E., Melnyk, B. M., & Williamson, K. M. (2010). Evidence-based practice, step by step: Asking the clinical question. AJN, American Journal of Nursing, 110(3), 58–61. https://doi.org/10.1097/01.NAJ.0000368959.11129.79

2. Scope the Literature

A "scoping search" investigates the breadth and/or depth of the initial question or may identify a gap in the literature. 

Eligible studies may be located by searching in:

  • Background sources (books, point-of-care tools)
  • Article databases
  • Trial registries
  • Grey literature
  • Cited references
  • Reference lists

When searching, if possible, translate terms to controlled vocabulary of the database. Use text word searching when necessary.

Use Boolean operators to connect search terms:

  • Combine separate concepts with AND  (resulting in a narrower search)
  • Connect synonyms with OR  (resulting in an expanded search)

Search: pilates AND ("low back pain" OR backache)

Video Tutorials - Translating PICO Questions into Search Queries

  • Translate Your PICO Into a Search in PubMed (YouTube, Carrie Price, 5:11) 
  • Translate Your PICO Into a Search in CINAHL (YouTube, Carrie Price, 4:56)

3. Refine & Expand Your Search

Expand your search strategy with synonymous search terms harvested from:

  • database thesauri
  • reference lists
  • relevant studies

Example: 

(pilates OR exercise movement techniques) AND ("low back pain" OR backache* OR sciatica OR lumbago OR spondylosis)
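Building a strategy like the one above is mechanical enough to script. Below is a minimal, illustrative sketch (the `boolean_query` helper is hypothetical, not part of any database's API) that joins each concept's synonyms with OR, joins the concepts with AND, and quotes multi-word phrases; exact syntax for quoting and truncation (`*`) varies by database.

```python
def boolean_query(*concepts):
    """Join synonym lists with OR and distinct concepts with AND."""
    def group(terms):
        # Quote multi-word phrases so the database treats them as units.
        quoted = ['"{}"'.format(t) if " " in t else t for t in terms]
        return "(" + " OR ".join(quoted) + ")"
    return " AND ".join(group(c) for c in concepts)

query = boolean_query(
    ["pilates", "exercise movement techniques"],
    ["low back pain", "backache*", "sciatica", "lumbago", "spondylosis"],
)
print(query)
# → (pilates OR "exercise movement techniques") AND ("low back pain" OR backache* OR sciatica OR lumbago OR spondylosis)
```

Keeping the synonym lists as data makes it easy to regenerate and save a reproducible strategy for each database you search.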

As you develop a final, reproducible strategy for each database, save your strategies in:

  • a personal database account (e.g., MyNCBI for PubMed)
  • a search-strategy tracker document (log in with your NYU credentials, then open and "Make a Copy" of the linked template to create your own tracker for your literature search strategies)

4. Limit Your Results

Use database filters to limit your results based on your defined inclusion/exclusion criteria.  In addition to relying on the databases' categorical filters, you may also need to manually screen results.  

  • Limit to article type, e.g., "randomized controlled trial" OR multicenter study
  • Limit by publication years, age groups, language, etc.

NOTE: Many databases allow you to filter to "Full Text Only". This filter is not recommended. It excludes articles if their full text is not available in that particular database (CINAHL, PubMed, etc.), but if the article is relevant, it is important that you are able to read its title and abstract, regardless of 'full text' status. The full text is likely to be accessible through another source (a different database, or Interlibrary Loan).

  • Filters in PubMed
  • CINAHL Advanced Searching Tutorial

5. Download Citations

Selected citations and/or entire sets of search results can be downloaded from the database into a citation management tool. If you are conducting a systematic review that will require reporting according to PRISMA standards, a citation manager can help you keep track of the number of articles that came from each database, as well as the number of duplicate records.

In Zotero, you can create a Collection for the combined results set, and sub-collections for the results from each database you search. You can then use Zotero's "Duplicate Items" function to find and merge duplicate records.

File structure of a Zotero library, showing a combined pooled set, and sub folders representing results from individual databases.

  • Citation Managers - General Guide

6. Abstract and Analyze

  • Migrate citations to a data collection/extraction tool
  • Screen titles/abstracts for inclusion/exclusion
  • Screen and appraise full text for relevance and methodological quality
  • Resolve disagreements by consensus

Covidence is a web-based tool that enables you to work with a team to screen titles/abstracts and full text for inclusion in your review, as well as extract data from the included studies.

Screenshot of the Covidence interface, showing Title and abstract screening phase.

  • Covidence Support
  • Critical Appraisal Tools
  • Data Extraction Tools

7. Create Flow Diagram

The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram is a visual representation of the flow of records through the different phases of a systematic review. It depicts the number of records identified, included, and excluded. It is best used in conjunction with the PRISMA checklist.

Example PRISMA diagram showing number of records identified, duplicates removed, and records excluded.

Example from: Stotz, S. A., McNealy, K., Begay, R. L., DeSanto, K., Manson, S. M., & Moore, K. R. (2021). Multi-level diabetes prevention and treatment interventions for Native people in the USA and Canada: A scoping review. Current Diabetes Reports, 21 (11), 46. https://doi.org/10.1007/s11892-021-01414-3

  • PRISMA Flow Diagram Generator (ShinyApp.io, Haddaway et al.)
  • PRISMA Diagram Templates  (Word and PDF)
  • Make a copy of the file to fill out the template
  • Image can be downloaded as PDF, PNG, JPG, or SVG
  • Covidence generates a PRISMA diagram that is automatically updated as records move through the review phases
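The counts in a four-phase flow diagram are linked by simple arithmetic: each phase's total is the previous total minus that phase's exclusions. A sketch with hypothetical numbers:

```python
# Hypothetical sketch: the arithmetic behind a four-phase PRISMA flow diagram.
identified = 480               # records retrieved from all database searches combined
duplicates = 130               # duplicate records removed before screening
title_abstract_excluded = 290  # excluded at title/abstract screening
full_text_excluded = 40        # excluded at full-text assessment

screened = identified - duplicates
full_text_assessed = screened - title_abstract_excluded
included = full_text_assessed - full_text_excluded
print(screened, full_text_assessed, included)  # 350 60 20
```

Tracking these numbers per database from the start (step 5) is what makes the final diagram straightforward to fill in.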

8. Synthesize & Report Results

There are a number of reporting guidelines available to guide the synthesis and reporting of results in systematic literature reviews.

It is common to organize findings in a matrix, also known as a Table of Evidence (ToE).

Example of a review matrix, using Microsoft Excel, showing the results of a systematic literature review.

  • Reporting Guidelines for Systematic Reviews
  • Download a sample template of a health sciences review matrix  (GoogleSheets)
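Structurally, a Table of Evidence is a flat table with one row per included study. A minimal sketch using Python's csv module (the column names and studies are invented for illustration):

```python
# Hypothetical sketch: a minimal Table of Evidence written with the csv module.
import csv
import io

columns = ["Study", "Design", "N", "Intervention", "Key finding"]
rows = [
    ["Smith 2020", "RCT", 120, "Pilates, 8 wk", "Pain reduced vs control"],
    ["Lee 2021", "Cohort", 340, "General exercise", "No significant change"],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(columns)   # header row
writer.writerows(rows)     # one row per included study
print(buf.getvalue())
```

The same structure maps directly onto a spreadsheet, which is why Excel or Google Sheets is the usual home for a review matrix.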

Steps modified from: 

Cook, D. A., & West, C. P. (2012). Conducting systematic reviews in medical education: a stepwise approach.   Medical Education , 46 (10), 943–952.

  • Last Updated: Jul 9, 2024 8:26 AM
  • URL: https://guides.nyu.edu/healthwriting


The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration

  • Alessandro Liberati 1 2 ,
  • Douglas G Altman 3 ,
  • Jennifer Tetzlaff 4 ,
  • Cynthia Mulrow 5 ,
  • Peter C Gøtzsche 6 ,
  • John P A Ioannidis 7 ,
  • Mike Clarke 8 9 ,
  • P J Devereaux 10 ,
  • Jos Kleijnen 11 12 ,
  • David Moher 4 13
  • 1 Università di Modena e Reggio Emilia, Modena, Italy
  • 2 Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy
  • 3 Centre for Statistics in Medicine, University of Oxford, Oxford
  • 4 Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  • 5 Annals of Internal Medicine, Philadelphia, Pennsylvania, USA
  • 6 Nordic Cochrane Centre, Copenhagen, Denmark
  • 7 Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
  • 8 UK Cochrane Centre, Oxford
  • 9 School of Nursing and Midwifery, Trinity College, Dublin, Republic of Ireland
  • 10 Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
  • 11 Kleijnen Systematic Reviews, York
  • 12 School for Public Health and Primary Care (CAPHRI), University of Maastricht, Maastricht, Netherlands
  • 13 Department of Epidemiology and Community Medicine, Faculty of Medicine, Ottawa, Ontario, Canada
  • Correspondence to: alesslib{at}mailbase.it
  • Accepted 5 June 2009

Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users.

Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions.

The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website ( www.prisma-statement.org/ ) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

Introduction

Systematic reviews and meta-analyses are essential tools for summarising evidence accurately and reliably. They help clinicians keep up to date; provide evidence for policy makers to judge risks, benefits, and harms of healthcare behaviours and interventions; gather together and summarise related research for patients and their carers; provide a starting point for clinical practice guideline developers; provide summaries of previous research for funders wishing to support new research; 1 and help editors judge the merits of publishing reports of new studies. 2 Recent data suggest that at least 2500 new systematic reviews reported in English are indexed in Medline annually. 3

Unfortunately, there is considerable evidence that key information is often poorly reported in systematic reviews, thus diminishing their potential usefulness. 3 4 5 6 As is true for all research, systematic reviews should be reported fully and transparently to allow readers to assess the strengths and weaknesses of the investigation. 7 That rationale led to the development of the QUOROM (quality of reporting of meta-analysis) statement; those detailed reporting recommendations were published in 1999. 8 In this paper we describe the updating of that guidance. Our aim is to ensure clear presentation of what was planned, done, and found in a systematic review.

Terminology used to describe systematic reviews and meta-analyses has evolved over time and varies across different groups of researchers and authors (see box 1 at end of document). In this document we adopt the definitions used by the Cochrane Collaboration. 9 A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected to minimise bias, thus providing reliable findings from which conclusions can be drawn and decisions made. Meta-analysis is the use of statistical methods to summarise and combine the results of independent studies. Many systematic reviews contain meta-analyses, but not all.

The QUOROM statement and its evolution into PRISMA

The QUOROM statement, developed in 1996 and published in 1999, 8 was conceived as a reporting guidance for authors reporting a meta-analysis of randomised trials. Since then, much has happened. First, knowledge about the conduct and reporting of systematic reviews has expanded considerably. For example, the Cochrane Library’s Methodology Register (which includes reports of studies relevant to the methods for systematic reviews) now contains more than 11 000 entries (March 2009). Second, there have been many conceptual advances, such as “outcome-level” assessments of the risk of bias, 10 11 that apply to systematic reviews. Third, authors have increasingly used systematic reviews to summarise evidence other than that provided by randomised trials.

However, despite advances, the quality of the conduct and reporting of systematic reviews remains well short of ideal. 3 4 5 6 All of these issues prompted the need for an update and expansion of the QUOROM statement. Of note, recognising that the updated statement now addresses the above conceptual and methodological issues and may also have broader applicability than the original QUOROM statement, we changed the name of the reporting guidance to PRISMA (preferred reporting items for systematic reviews and meta-analyses).

Development of PRISMA

The PRISMA statement was developed by a group of 29 review authors, methodologists, clinicians, medical editors, and consumers. 12 They attended a three day meeting in 2005 and participated in extensive post-meeting electronic correspondence. A consensus process that was informed by evidence, whenever possible, was used to develop a 27-item checklist (table 1 ⇓ ) and a four-phase flow diagram (fig 1 ⇓ ) (also available as extra items on bmj.com for researchers to download and re-use). Items deemed essential for transparent reporting of a systematic review were included in the checklist. The flow diagram originally proposed by QUOROM was also modified to show numbers of identified records, excluded articles, and included studies. After 11 revisions the group approved the checklist, flow diagram, and this explanatory paper.

Fig 1 Flow of information through the different phases of a systematic review.


Table 1 Checklist of items to include when reporting a systematic review or meta-analysis


The PRISMA statement itself provides further details regarding its background and development. 12 This accompanying explanation and elaboration document explains the meaning and rationale for each checklist item. A few PRISMA Group participants volunteered to help draft specific items for this document, and four of these (DGA, AL, DM, and JT) met on several occasions to further refine the document, which was circulated and ultimately approved by the larger PRISMA Group.

Scope of PRISMA

PRISMA focuses on ways in which authors can ensure the transparent and complete reporting of systematic reviews and meta-analyses. It does not address directly or in a detailed manner the conduct of systematic reviews, for which other guides are available. 13 14 15 16

We developed the PRISMA statement and this explanatory document to help authors report a wide array of systematic reviews to assess the benefits and harms of a healthcare intervention. We consider most of the checklist items relevant when reporting systematic reviews of non-randomised studies assessing the benefits and harms of interventions. However, we recognise that authors who address questions relating to aetiology, diagnosis, or prognosis, for example, and who review epidemiological or diagnostic accuracy studies may need to modify or incorporate additional items for their systematic reviews.

How to use this paper

We modeled this explanation and elaboration document after those prepared for other reporting guidelines. 17 18 19 To maximise the benefit of this document, we encourage people to read it in conjunction with the PRISMA statement. 11

We present each checklist item and follow it with a published exemplar of good reporting for that item. (We edited some examples by removing citations or web addresses, or by spelling out abbreviations.) We then explain the pertinent issue, the rationale for including the item, and relevant evidence from the literature, whenever possible. No systematic search was carried out to identify exemplars and evidence. We also include seven boxes at the end of the document that provide a more comprehensive explanation of certain thematic aspects of the methodology and conduct of systematic reviews.

Although we focus on a minimal list of items to consider when reporting a systematic review, we indicate places where additional information is desirable to improve transparency of the review process. We present the items numerically from 1 to 27; however, authors need not address items in this particular order in their reports. Rather, what is important is that the information for each item is given somewhere within the report.

The PRISMA checklist

Title and abstract, item 1: title.

Identify the report as a systematic review, meta-analysis, or both.

Examples “Recurrence rates of video-assisted thoracoscopic versus open surgery in the prevention of recurrent pneumothoraces: a systematic review of randomised and non-randomised trials” 20

“Mortality in randomised trials of antioxidant supplements for primary and secondary prevention: systematic review and meta-analysis” 21

Explanation Authors should identify their report as a systematic review or meta-analysis. Terms such as “review” or “overview” do not describe for readers whether the review was systematic or whether a meta-analysis was performed. A recent survey found that 50% of 300 authors did not mention the terms “systematic review” or “meta-analysis” in the title or abstract of their systematic review. 3 Although sensitive search strategies have been developed to identify systematic reviews, 22 inclusion of the terms systematic review or meta-analysis in the title may improve indexing and identification.

We advise authors to use informative titles that make key information easily accessible to readers. Ideally, a title reflecting the PICOS approach (participants, interventions, comparators, outcomes, and study design) (see item 11 and box 2) may help readers as it provides key information about the scope of the review. Specifying the design(s) of the studies included, as shown in the examples, may also help some readers and those searching databases.

Some journals recommend “indicative titles” that indicate the topic matter of the review, while others require declarative titles that give the review’s main conclusion. Busy practitioners may prefer to see the conclusion of the review in the title, but declarative titles can oversimplify or exaggerate findings. Thus, many journals and methodologists prefer indicative titles as used in the examples above.

Item 2: Structured summary

Provide a structured summary including, as applicable, background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; funding for the systematic review; and systematic review registration number.

Example “ Context : The role and dose of oral vitamin D supplementation in nonvertebral fracture prevention have not been well established.

Objective : To estimate the effectiveness of vitamin D supplementation in preventing hip and nonvertebral fractures in older persons.

Data Sources : A systematic review of English and non-English articles using MEDLINE and the Cochrane Controlled Trials Register (1960-2005), and EMBASE (1991-2005). Additional studies were identified by contacting clinical experts and searching bibliographies and abstracts presented at the American Society for Bone and Mineral Research (1995-2004). Search terms included randomised controlled trial (RCT), controlled clinical trial, random allocation, double-blind method, cholecalciferol, ergocalciferol, 25-hydroxyvitamin D, fractures, humans, elderly, falls, and bone density.

Study Selection : Only double-blind RCTs of oral vitamin D supplementation (cholecalciferol, ergocalciferol) with or without calcium supplementation vs calcium supplementation or placebo in older persons (>60 years) that examined hip or nonvertebral fractures were included.

Data Extraction : Independent extraction of articles by 2 authors using predefined data fields, including study quality indicators.

Data Synthesis : All pooled analyses were based on random-effects models. Five RCTs for hip fracture (n=9294) and 7 RCTs for nonvertebral fracture risk (n=9820) met our inclusion criteria. All trials used cholecalciferol. Heterogeneity among studies for both hip and nonvertebral fracture prevention was observed, which disappeared after pooling RCTs with low-dose (400 IU/d) and higher-dose vitamin D (700-800 IU/d), separately. A vitamin D dose of 700 to 800 IU/d reduced the relative risk (RR) of hip fracture by 26% (3 RCTs with 5572 persons; pooled RR, 0.74; 95% confidence interval [CI], 0.61-0.88) and any nonvertebral fracture by 23% (5 RCTs with 6098 persons; pooled RR, 0.77; 95% CI, 0.68-0.87) vs calcium or placebo. No significant benefit was observed for RCTs with 400 IU/d vitamin D (2 RCTs with 3722 persons; pooled RR for hip fracture, 1.15; 95% CI, 0.88-1.50; and pooled RR for any nonvertebral fracture, 1.03; 95% CI, 0.86-1.24).

Conclusions : Oral vitamin D supplementation between 700 to 800 IU/d appears to reduce the risk of hip and any nonvertebral fractures in ambulatory or institutionalised elderly persons. An oral vitamin D dose of 400 IU/d is not sufficient for fracture prevention.” 23
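The percentage risk reductions quoted in this example follow directly from the pooled relative risks, since a relative risk of RR corresponds to a (1 − RR) × 100% reduction:

```python
# The 26% and 23% reductions in the vitamin D example are just 1 - RR:
rr_hip = 0.74           # pooled relative risk for hip fracture
rr_nonvertebral = 0.77  # pooled relative risk for any nonvertebral fracture

print(round((1 - rr_hip) * 100))           # 26
print(round((1 - rr_nonvertebral) * 100))  # 23
```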

Explanation Abstracts provide key information that enables readers to understand the scope, processes, and findings of a review and to decide whether to read the full report. The abstract may be all that is readily available to a reader, for example, in a bibliographic database. The abstract should present a balanced and realistic assessment of the review’s findings that mirrors, albeit briefly, the main text of the report.

We agree with others that the quality of reporting in abstracts presented at conferences and in journal publications needs improvement. 24 25 While we do not uniformly favour a specific format over another, we generally recommend structured abstracts. Structured abstracts provide readers with a series of headings pertaining to the purpose, conduct, findings, and conclusions of the systematic review being reported. 26 27 They give readers more complete information and facilitate finding information more easily than unstructured abstracts. 28 29 30 31 32

A highly structured abstract of a systematic review could include the following headings: Context (or Background ); Objective (or Purpose ); Data sources ; Study selection (or Eligibility criteria ); Study appraisal and Synthesis methods (or Data extraction and Data synthesis ); Results ; Limitations ; and Conclusions (or Implications ). Alternatively, a simpler structure could cover but collapse some of the above headings (such as label Study selection and Study appraisal as Review methods ) or omit some headings such as Background and Limitations .

In the highly structured abstract mentioned above, authors use the Background heading to set the context for readers and explain the importance of the review question. Under the Objectives heading, they ideally use elements of PICOS (see box 2) to state the primary objective of the review. Under a Data sources heading, they summarise sources that were searched, any language or publication type restrictions, and the start and end dates of searches. Study selection statements then ideally describe who selected studies using what inclusion criteria. Data extraction methods statements describe appraisal methods during data abstraction and the methods used to integrate or summarise the data. The Data synthesis section is where the main results of the review are reported. If the review includes meta-analyses, authors should provide numerical results with confidence intervals for the most important outcomes. Ideally, they should specify the amount of evidence in these analyses (numbers of studies and numbers of participants). Under a Limitations heading, authors might describe the most important weaknesses of included studies as well as limitations of the review process. Then authors should provide clear and balanced Conclusions that are closely linked to the objective and findings of the review. Additionally, it would be helpful if authors included some information about funding for the review. Finally, although protocol registration for systematic reviews is still not common practice, if authors have registered their review or received a registration number, we recommend providing the registration information at the end of the abstract.

Taking all the above considerations into account, there is an intrinsic tension between the goal of completeness in the abstract and the space limits often set by journal editors; this is recognised as a major challenge.

Item 3: Rationale

Describe the rationale for the review in the context of what is already known.

Example “Reversing the trend of increasing weight for height in children has proven difficult. It is widely accepted that increasing energy expenditure and reducing energy intake form the theoretical basis for management. Therefore, interventions aiming to increase physical activity and improve diet are the foundation of efforts to prevent and treat childhood obesity. Such lifestyle interventions have been supported by recent systematic reviews, as well as by the Canadian Paediatric Society, the Royal College of Paediatrics and Child Health, and the American Academy of Pediatrics. However, these interventions are fraught with poor adherence. Thus, school-based interventions are theoretically appealing because adherence with interventions can be improved. Consequently, many local governments have enacted or are considering policies that mandate increased physical activity in schools, although the effect of such interventions on body composition has not been assessed.” 33

Explanation Readers need to understand the rationale behind the study and what the systematic review may add to what is already known. Authors should tell readers whether their report is a new systematic review or an update of an existing one. If the review is an update, authors should state reasons for the update, including what has been added to the evidence base since the previous version of the review.

An ideal background or introduction that sets context for readers might include the following. First, authors might define the importance of the review question from different perspectives (such as public health, individual patient, or health policy). Second, authors might briefly mention the current state of knowledge and its limitations. As in the above example, information about the effects of several different interventions may be available that helps readers understand why potential relative benefits or harms of particular interventions need review. Third, authors might whet readers’ appetites by clearly stating what the review aims to add. They also could discuss the extent to which the limitations of the existing evidence base may be overcome by the review.

Item 4: Objectives

Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).

Example “To examine whether topical or intraluminal antibiotics reduce catheter-related bloodstream infection, we reviewed randomised, controlled trials that assessed the efficacy of these antibiotics for primary prophylaxis against catheter-related bloodstream infection and mortality compared with no antibiotic therapy in adults undergoing hemodialysis.” 34

Explanation The questions being addressed, and the rationale for them, are one of the most critical parts of a systematic review. They should be stated precisely and explicitly so that readers can understand quickly the review’s scope and the potential applicability of the review to their interests. 35 Framing questions so that they include the following five “PICOS” components may improve the explicitness of review questions: (1) the patient population or disease being addressed (P), (2) the interventions or exposure of interest (I), (3) the comparators (C), (4) the main outcome or endpoint of interest (O), and (5) the study designs chosen (S). For more detail regarding PICOS, see box 2.

Good review questions may be narrowly focused or broad, depending on the overall objectives of the review. Sometimes broad questions might increase the applicability of the results and facilitate detection of bias, exploratory analyses, and sensitivity analyses. 35 36 Whether narrowly focused or broad, precisely stated review objectives are critical as they help define other components of the review process such as the eligibility criteria (item 6) and the search for relevant literature (items 7 and 8).
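Because PICOS gives a review question a fixed five-part structure, it lends itself to a structured record. A hypothetical sketch, populated with the haemodialysis example above (the class and its fields are illustrative, not part of PRISMA):

```python
# Hypothetical sketch: representing a PICOS question as a structured record.
from dataclasses import dataclass

@dataclass
class Picos:
    population: str
    intervention: str
    comparator: str
    outcome: str
    study_design: str

    def question(self) -> str:
        """Render the five components as an explicit review question."""
        return (f"In {self.population}, does {self.intervention} compared with "
                f"{self.comparator} affect {self.outcome}? ({self.study_design})")

q = Picos(
    population="adults undergoing haemodialysis",
    intervention="topical or intraluminal antibiotics",
    comparator="no antibiotic therapy",
    outcome="catheter-related bloodstream infection",
    study_design="randomised controlled trials",
)
print(q.question())
```

Stating the question in this form makes it easy to check that eligibility criteria (item 6) and the search strategy (items 7 and 8) cover every component.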

Item 5: Protocol and registration

Indicate if a review protocol exists, if and where it can be accessed (such as a web address), and, if available, provide registration information including the registration number.

Example “Methods of the analysis and inclusion criteria were specified in advance and documented in a protocol.” 37

Explanation A protocol is important because it pre-specifies the objectives and methods of the systematic review. For instance, a protocol specifies outcomes of primary interest, how reviewers will extract information about those outcomes, and methods that reviewers might use to quantitatively summarise the outcome data (see item 13). Having a protocol can help restrict the likelihood of biased post hoc decisions in review methods, such as selective outcome reporting. Several sources provide guidance about elements to include in the protocol for a systematic review. 16 38 39 For meta-analyses of individual patient-level data, we advise authors to describe whether a protocol was explicitly designed and whether, when, and how participating collaborators endorsed it. 40 41

Authors may modify protocols during the research, and readers should not automatically consider such modifications inappropriate. For example, legitimate modifications may extend the period of searches to include older or newer studies, broaden eligibility criteria that proved too narrow, or add analyses if the primary analyses suggest that additional ones are warranted. Authors should, however, describe the modifications and explain their rationale.

Although worthwhile protocol amendments are common, one must consider the effects that protocol modifications may have on the results of a systematic review, especially if the primary outcome is changed. Bias from selective outcome reporting in randomised trials has been well documented. 42 43 An examination of 47 Cochrane reviews revealed indirect evidence for possible selective reporting bias for systematic reviews. Almost all (n=43) contained a major change, such as the addition or deletion of outcomes, between the protocol and the full publication. 44 Whether (or to what extent) the changes reflected bias, however, was not clear. For example, it has been rather common not to describe outcomes that were not presented in any of the included studies.

Registration of a systematic review, typically with a protocol and registration number, is not yet common, but some opportunities exist. 45 46 Registration may possibly reduce the risk of multiple reviews addressing the same question, 45 46 47 48 reduce publication bias, and provide greater transparency when updating systematic reviews. Of note, a survey of systematic reviews indexed in Medline in November 2004 found that reports of protocol use had increased to about 46% 3 from 8% noted in previous surveys. 49 The improvement was due mostly to Cochrane reviews, which, by requirement, have a published protocol. 3

Item 6: Eligibility criteria

Specify study characteristics (such as PICOS, length of follow-up) and report characteristics (such as years considered, language, publication status) used as criteria for eligibility, giving rationale.

Examples Types of studies: “Randomised clinical trials studying the administration of hepatitis B vaccine to CRF [chronic renal failure] patients, with or without dialysis. No language, publication date, or publication status restrictions were imposed…”

Types of participants: “Participants of any age with CRF or receiving dialysis (haemodialysis or peritoneal dialysis) were considered. CRF was defined as serum creatinine greater than 200 µmol/L for a period of more than six months or individuals receiving dialysis (haemodialysis or peritoneal dialysis)…Renal transplant patients were excluded from this review as these individuals are immunosuppressed and are receiving immunosuppressant agents to prevent rejection of their transplanted organs, and they have essentially normal renal function...”

Types of intervention: “Trials comparing the beneficial and harmful effects of hepatitis B vaccines with adjuvant or cytokine co-interventions [and] trials comparing the beneficial and harmful effects of immunoglobulin prophylaxis. This review was limited to studies looking at active immunisation. Hepatitis B vaccines (plasma or recombinant (yeast) derived) of all types, dose, and regimens versus placebo, control vaccine, or no vaccine…”

Types of outcome measures: “Primary outcome measures: Seroconversion, ie, proportion of patients with adequate anti-HBs response (>10 IU/L or Sample Ratio Units). Hepatitis B infections (as measured by hepatitis B core antigen (HBcAg) positivity or persistent HBsAg positivity), both acute and chronic. Acute (primary) HBV [hepatitis B virus] infections were defined as seroconversion to HBsAg positivity or development of IgM anti-HBc. Chronic HBV infections were defined as the persistence of HBsAg for more than six months or HBsAg positivity and liver biopsy compatible with a diagnosis or chronic hepatitis B. Secondary outcome measures: Adverse events of hepatitis B vaccinations…[and]…mortality.” 50

Explanation Knowledge of the eligibility criteria is essential in appraising the validity, applicability, and comprehensiveness of a review. Thus, authors should unambiguously specify eligibility criteria used in the review. Carefully defined eligibility criteria inform various steps of the review methodology. They influence the development of the search strategy and serve to ensure that studies are selected in a systematic and unbiased manner.

A study may be described in multiple reports, and one report may describe multiple studies. Therefore, we separate eligibility criteria into the following two components: study characteristics and report characteristics. Both need to be reported. Study eligibility criteria are likely to include the populations, interventions, comparators, outcomes, and study designs of interest (PICOS, see box 2), as well as other study-specific elements, such as specifying a minimum length of follow-up. Authors should state whether studies will be excluded because they do not include (or report) specific outcomes to help readers ascertain whether the systematic review may be biased as a consequence of selective reporting. 42 43

Report eligibility criteria are likely to include language of publication, publication status (such as inclusion of unpublished material and abstracts), and year of publication. Inclusion or not of non-English language literature, 51 52 53 54 55 unpublished data, or older data can influence the effect estimates in meta-analyses. 56 57 58 59 Caution may need to be exercised in including all identified studies due to potential differences in the risk of bias such as, for example, selective reporting in abstracts. 60 61 62
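Pre-specified study and report characteristics translate naturally into an explicit screening rule. A hypothetical sketch, loosely based on the chronic renal failure example above (the field names and sample studies are invented):

```python
# Hypothetical sketch: applying pre-specified eligibility criteria during screening.
def eligible(study):
    """Include only RCTs in CRF patients, excluding renal transplant recipients."""
    return (
        study["design"] == "RCT"
        and study["population"] == "CRF"
        and not study["renal_transplant"]
    )

candidates = [
    {"design": "RCT", "population": "CRF", "renal_transplant": False},
    {"design": "cohort", "population": "CRF", "renal_transplant": False},
    {"design": "RCT", "population": "CRF", "renal_transplant": True},
]
print(sum(eligible(s) for s in candidates))  # 1 study passes
```

Writing the criteria this explicitly before screening begins is what allows two reviewers to apply them consistently and to report them unambiguously.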

Item 7: Information sources

Describe all information sources in the search (such as databases with dates of coverage, contact with study authors to identify additional studies) and date last searched.

Example “Studies were identified by searching electronic databases, scanning reference lists of articles and consultation with experts in the field and drug companies…No limits were applied for language and foreign papers were translated. This search was applied to Medline (1966 - Present), CancerLit (1975 - Present), and adapted for Embase (1980 - Present), Science Citation Index Expanded (1981 - Present) and Pre-Medline electronic databases. Cochrane and DARE (Database of Abstracts of Reviews of Effectiveness) databases were reviewed…The last search was run on 19 June 2001. In addition, we handsearched contents pages of Journal of Clinical Oncology 2001, European Journal of Cancer 2001 and Bone 2001, together with abstracts printed in these journals 1999 - 2001. A limited update literature search was performed from 19 June 2001 to 31 December 2003.” 63

Explanation The National Library of Medicine’s Medline database is one of the most comprehensive sources of healthcare information in the world. Like any database, however, its coverage is not complete and varies according to the field. Retrieval from any single database, even by an experienced searcher, may be imperfect, which is why detailed reporting is important within the systematic review.

At a minimum, for each database searched, authors should report the database, platform, or provider (such as Ovid, Dialog, PubMed) and the start and end dates for the search of each database. This information lets readers assess the currency of the review, which is important because the publication time-lag outdates the results of some reviews. 64 This information should also make updating more efficient. 65 Authors should also report who developed and conducted the search. 66

In addition to searching databases, authors should report the use of supplementary approaches to identify studies, such as hand searching of journals, checking reference lists, searching trials registries or regulatory agency websites, 67 contacting manufacturers, or contacting authors. Authors should also report if they attempted to acquire any missing information (such as on study methods or results) from investigators or sponsors; it is useful to describe briefly who was contacted and what unpublished information was obtained.

Item 8: Search

Present the full electronic search strategy for at least one major database, including any limits used, such that it could be repeated.

Examples In text: “We used the following search terms to search all trials registers and databases: immunoglobulin*; IVIG; sepsis; septic shock; septicaemia; and septicemia…” 68

In appendix: “Search strategy: MEDLINE (OVID)

01. immunoglobulins/

02. immunoglobulin$.tw.

03. ivig.tw.

04. 1 or 2 or 3

05. sepsis/

06. sepsis.tw.

07. septic shock/

08. septic shock.tw.

09. septicemia/

10. septicaemia.tw.

11. septicemia.tw.

12. 5 or 6 or 7 or 8 or 9 or 10 or 11

13. 4 and 12

14. randomized controlled trials/

15. randomized-controlled-trial.pt.

16. controlled-clinical-trial.pt.

17. random allocation/

18. double-blind method/

19. single-blind method/

20. 14 or 15 or 16 or 17 or 18 or 19

21. exp clinical trials/

22. clinical-trial.pt.

23. (clin$ adj trial$).ti,ab.

24. ((singl$ or doubl$ or trebl$ or tripl$) adj (blind$)).ti,ab.

25. placebos/

26. placebo$.ti,ab.

27. random$.ti,ab.

28. 21 or 22 or 23 or 24 or 25 or 26 or 27

29. research design/

30. comparative study/

31. exp evaluation studies/

32. follow-up studies/

33. prospective studies/

34. (control$ or prospective$ or volunteer$).ti,ab.

35. 30 or 31 or 32 or 33 or 34

36. 20 or 28 or 29 or 35

37. 13 and 36” 68

Explanation The search strategy is an essential part of the report of any systematic review. Searches may be complicated and iterative, particularly when reviewers search unfamiliar databases or their review is addressing a broad or new topic. Perusing the search strategy allows interested readers to assess the comprehensiveness and completeness of the search, and to replicate it. Thus, we advise authors to report their full electronic search strategy for at least one major database. As an alternative to presenting search strategies for all databases, authors could indicate how the search took into account other databases searched, as index terms vary across databases. If different searches are used for different parts of a wider question (such as questions relating to benefits and questions relating to harms), we recommend authors provide at least one example of a strategy for each part of the objective. 69 We also encourage authors to state whether search strategies were peer reviewed as part of the systematic review process. 70

We realise that journal restrictions vary and that having the search strategy in the text of the report is not always feasible. We strongly encourage all journals, however, to find ways—such as a “web extra,” appendix, or electronic link to an archive—to make search strategies accessible to readers. We also advise all authors to archive their searches so that (1) others may access and review them (such as replicate them or understand why their review of a similar topic did not identify the same reports), and (2) future updates of their review are facilitated.

Several sources provide guidance on developing search strategies. 71 72 73 Most searches have constraints, such as relating to limited time or financial resources, inaccessible or inadequately indexed reports and databases, unavailability of experts with particular language or database searching skills, or review questions for which pertinent evidence is not easy to find. Authors should be straightforward in describing their search constraints. Apart from the keywords used to identify or exclude records, they should report any additional limitations relevant to the search, such as language and date restrictions (see also eligibility criteria, item 6). 51

Item 9: Study selection

State the process for selecting studies (that is, for screening, for determining eligibility, for inclusion in the systematic review, and, if applicable, for inclusion in the meta-analysis).

Example “Eligibility assessment…[was] performed independently in an unblinded standardized manner by 2 reviewers…Disagreements between reviewers were resolved by consensus.” 74

Explanation There is no standard process for selecting studies to include in a systematic review. Authors usually start with a large number of identified records from their search and sequentially exclude records according to eligibility criteria. We advise authors to report how they screened the retrieved records (typically a title and abstract), how often it was necessary to review the full text publication, and if any types of record (such as letters to the editor) were excluded. We also advise using the PRISMA flow diagram to summarise study selection processes (see item 17 and box 3).

Efforts to enhance objectivity and avoid mistakes in study selection are important. Thus authors should report whether each stage was carried out by one or several people, who these people were, and, whenever multiple independent investigators performed the selection, what the process was for resolving disagreements. The use of at least two investigators may reduce the possibility of rejecting relevant reports. 75 The benefit may be greatest for topics where selection or rejection of an article requires difficult judgments. 76 For these topics, authors should ideally tell readers the level of inter-rater agreement, how commonly arbitration about selection was required, and what efforts were made to resolve disagreements (such as by contact with the authors of the original studies).

Item 10: Data collection process

Describe the method of data extraction from reports (such as piloted forms, independently by two reviewers) and any processes for obtaining and confirming data from investigators.

Example “We developed a data extraction sheet (based on the Cochrane Consumers and Communication Review Group’s data extraction template), pilot-tested it on ten randomly-selected included studies, and refined it accordingly. One review author extracted the following data from included studies and the second author checked the extracted data…Disagreements were resolved by discussion between the two review authors; if no agreement could be reached, it was planned a third author would decide. We contacted five authors for further information. All responded and one provided numerical data that had only been presented graphically in the published paper.” 77

Explanation Reviewers extract information from each included study so that they can critique, present, and summarise evidence in a systematic review. They might also contact authors of included studies for information that has not been, or is unclearly, reported. In meta-analysis of individual patient data, this phase involves collection and scrutiny of detailed raw databases. The authors should describe these methods, including any steps taken to reduce bias and mistakes during data collection and data extraction. 78 (See box 3)

Some systematic reviewers use a data extraction form that could be reported as an appendix or “Web extra” to their report. These forms could show the reader what information reviewers sought (see item 11) and how they extracted it. Authors could tell readers if the form was piloted. Regardless, we advise authors to tell readers who extracted what data, whether any extractions were completed in duplicate, and, if so, whether duplicate abstraction was done independently and how disagreements were resolved.

Published reports of the included studies may not provide all the information required for the review. Reviewers should describe any actions they took to seek additional information from the original researchers (see item 7). The description might include how they attempted to contact researchers, what they asked for, and their success in obtaining the necessary information. Authors should also tell readers when individual patient data were sought from the original researchers 41 (see item 11) and indicate the studies for which such data were used in the analyses. The reviewers ideally should also state whether they confirmed the accuracy of the information included in their review with the original researchers, for example, by sending them a copy of the draft review. 79

Some studies are published more than once. Duplicate publications may be difficult to ascertain, and their inclusion may introduce bias. 80 81 We advise authors to describe any steps they used to avoid double counting and piece together data from multiple reports of the same study (such as juxtaposing author names, treatment comparisons, sample sizes, or outcomes). We also advise authors to indicate whether all reports on a study were considered, as inconsistencies may reveal important limitations. For example, a review of multiple publications of drug trials showed that reported study characteristics may differ from report to report, including the description of the design, number of patients analysed, chosen significance level, and outcomes. 82 Authors ideally should present any algorithm that they used to select data from overlapping reports and any efforts they used to solve logical inconsistencies across reports.

Item 11: Data items

List and define all variables for which data were sought (such as PICOS, funding sources) and any assumptions and simplifications made.

Examples “Information was extracted from each included trial on: (1) characteristics of trial participants (including age, stage and severity of disease, and method of diagnosis), and the trial’s inclusion and exclusion criteria; (2) type of intervention (including type, dose, duration and frequency of the NSAID [non-steroidal anti-inflammatory drug]; versus placebo or versus the type, dose, duration and frequency of another NSAID; or versus another pain management drug; or versus no treatment); (3) type of outcome measure (including the level of pain reduction, improvement in quality of life score (using a validated scale), effect on daily activities, absence from work or school, length of follow up, unintended effects of treatment, number of women requiring more invasive treatment).” 83

Explanation It is important for readers to know what information review authors sought, even if some of this information was not available. 84 If the review is limited to reporting only those variables that were obtained, rather than those that were deemed important but could not be obtained, bias might be introduced and the reader might be misled. It is therefore helpful if authors can refer readers to the protocol (see item 5) and archive their extraction forms (see item 10), including definitions of variables. The published systematic review should include a description of the processes used and, if relevant, specify how readers can access additional materials.

We encourage authors to report whether some variables were added after the review started. Such variables might include those found in the studies that the reviewers identified (such as important outcome measures that the reviewers initially overlooked). Authors should describe the reasons for adding any variables to those already pre-specified in the protocol so that readers can understand the review process.

We advise authors to report any assumptions they made about missing or unclear information and to explain those processes. For example, in studies of women aged 50 or older it is reasonable to assume that none were pregnant, even if this is not reported. Likewise, review authors might make assumptions about the route of administration of drugs assessed. However, special care should be taken in making assumptions about qualitative information. For example, the upper age limit for “children” can vary from 15 years to 21 years, “intense” physiotherapy might mean very different things to different researchers at different times and for different patients, and the volume of blood associated with “heavy” blood loss might vary widely depending on the setting.

Item 12: Risk of bias in individual studies

Describe methods used for assessing risk of bias in individual studies (including specification of whether this was done at the study or outcome level, or both), and how this information is to be used in any data synthesis.

Example “To ascertain the validity of eligible randomized trials, pairs of reviewers working independently and with adequate reliability determined the adequacy of randomization and concealment of allocation, blinding of patients, health care providers, data collectors, and outcome assessors; and extent of loss to follow-up (i.e. proportion of patients in whom the investigators were not able to ascertain outcomes).” 85

“To explore variability in study results (heterogeneity) we specified the following hypotheses before conducting the analysis. We hypothesised that effect size may differ according to the methodological quality of the studies.” 86

Explanation The likelihood that the treatment effect reported in a systematic review approximates the truth depends on the validity of the included studies, as certain methodological characteristics may be associated with effect sizes. 87 88 For example, trials without reported adequate allocation concealment exaggerate treatment effects on average compared with those with adequate concealment. 88 Therefore, it is important for authors to describe any methods that they used to gauge the risk of bias in the included studies and how that information was used. 89 Additionally, authors should provide a rationale if no assessment of risk of bias was undertaken. The most popular term to describe the issues relevant to this item is “quality,” but for the reasons elaborated in box 4 we prefer to call this item “assessment of risk of bias.”

Many methods exist to assess the overall risk of bias in included studies, including scales, checklists, and individual components. 90 91 As discussed in box 4, scales that numerically summarise multiple components into a single number are misleading and unhelpful. 92 93 Rather, authors should specify the methodological components that they assessed. Common markers of validity for randomised trials include the following: appropriate generation of random allocation sequence; 94 concealment of the allocation sequence; 93 blinding of participants, health care providers, data collectors, and outcome adjudicators; 95 96 97 98 proportion of patients lost to follow-up; 99 100 stopping of trials early for benefit; 101 and whether the analysis followed the intention-to-treat principle. 100 102 The ultimate decision regarding which methodological features to evaluate requires consideration of the strength of the empiric data, theoretical rationale, and the unique circumstances of the included studies.

Authors should report how they assessed risk of bias; whether it was done in a blind manner; whether assessments were completed by more than one person and, if so, whether they were completed independently. 103 104 Similarly, we encourage authors to report any calibration exercises among review team members that were done. Finally, authors need to report how their assessments of risk of bias are used subsequently in the data synthesis (see item 16). Despite the often difficult task of assessing the risk of bias in included studies, authors are sometimes silent on what they did with the resultant assessments. 89 If authors exclude studies from the review or any subsequent analyses on the basis of the risk of bias, they should tell readers which studies they excluded and explain the reasons for those exclusions (see item 6). Authors should also describe any planned sensitivity or subgroup analyses related to bias assessments (see item 16).

Item 13: Summary measures

State the principal summary measures (such as risk ratio, difference in means).

Examples “Relative risk of mortality reduction was the primary measure of treatment effect.” 105

“The meta-analyses were performed by computing relative risks (RRs) using random-effects model. Quantitative analyses were performed on an intention-to-treat basis and were confined to data derived from the period of follow-up. RR and 95% confidence intervals for each side effect (and all side effects) were calculated.” 106

“The primary outcome measure was the mean difference in log₁₀ HIV-1 viral load comparing zinc supplementation to placebo...” 107

Explanation When planning a systematic review, it is generally desirable that authors pre-specify the outcomes of primary interest (see item 5) as well as the intended summary effect measure for each outcome. The chosen summary effect measure may differ from that used in some of the included studies. If possible the choice of effect measures should be explained, though it is not always easy to judge in advance which measure is the most appropriate.

For binary outcomes, the most common summary measures are the risk ratio, odds ratio, and risk difference. 108 Relative effects are more consistent across studies than absolute effects, 109 110 although absolute differences are important when interpreting findings (see item 24).
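These three binary measures can be read straight off a 2×2 table of events by group. A minimal sketch, using hypothetical counts (not drawn from any review cited here):

```python
# Illustrative only: computing the three common binary summary measures
# from a hypothetical 2x2 table (events and total in each group).
def binary_summary_measures(events_trt, n_trt, events_ctl, n_ctl):
    """Return risk ratio, odds ratio, and risk difference."""
    risk_trt = events_trt / n_trt
    risk_ctl = events_ctl / n_ctl
    risk_ratio = risk_trt / risk_ctl
    odds_trt = events_trt / (n_trt - events_trt)
    odds_ctl = events_ctl / (n_ctl - events_ctl)
    odds_ratio = odds_trt / odds_ctl
    risk_difference = risk_trt - risk_ctl
    return risk_ratio, odds_ratio, risk_difference

# Hypothetical trial: 15/100 events with treatment, 30/100 with control
rr, or_, rd = binary_summary_measures(15, 100, 30, 100)
print(round(rr, 2), round(or_, 2), round(rd, 2))  # 0.5 0.41 -0.15
```

Note how the odds ratio (0.41) is further from 1 than the risk ratio (0.5) when events are common, one reason the chosen measure should be pre-specified.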

For continuous outcomes, the natural effect measure is the difference in means. 108 Its use is appropriate when outcome measurements in all studies are made on the same scale. The standardised difference in means is used when the studies do not yield directly comparable data. Usually this occurs when all studies assess the same outcome but measure it in a variety of ways (such as different scales to measure depression).
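The standardised difference in means divides the raw mean difference by a pooled standard deviation. A sketch with hypothetical depression-scale data (the pooled-SD formula is the standard one, not a method specific to any review cited here):

```python
import math

# Illustrative sketch: standardised mean difference when studies
# measure the same outcome on different scales.
def standardised_mean_difference(m1, sd1, n1, m2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical scores: treatment mean 12 (SD 4, n=50) vs control
# mean 15 (SD 5, n=50); lower scores mean less depression
smd = standardised_mean_difference(12.0, 4.0, 50, 15.0, 5.0, 50)
print(round(smd, 2))  # -0.66
```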

For time-to-event outcomes, the hazard ratio is the most common summary measure. Reviewers need the log hazard ratio and its standard error for a study to be included in a meta-analysis. 111 This information may not be given for all studies, but methods are available for estimating the desired quantities from other reported information. 111 Risk ratio and odds ratio (in relation to events occurring by a fixed time) are not equivalent to the hazard ratio, and median survival times are not a reliable basis for meta-analysis. 112 If authors have used these measures they should describe their methods in the report.

Item 14: Planned methods of analysis

Describe the methods of handling data and combining results of studies, if done, including measures of consistency (such as I²) for each meta-analysis.

Examples “We tested for heterogeneity with the Breslow-Day test, and used the method proposed by Higgins et al. to measure inconsistency (the percentage of total variation across studies due to heterogeneity) of effects across lipid-lowering interventions. The advantages of this measure of inconsistency (termed I²) are that it does not inherently depend on the number of studies and is accompanied by an uncertainty interval.” 113

“In very few instances, estimates of baseline mean or mean QOL [Quality of life] responses were obtained without corresponding estimates of variance (standard deviation [SD] or standard error). In these instances, an SD was imputed from the mean of the known SDs. In a number of cases, the response data available were the mean and variance in a pre study condition and after therapy. The within-patient variance in these cases could not be calculated directly and was approximated by assuming independence.” 114

Explanation The data extracted from the studies in the review may need some transformation (processing) before they are suitable for analysis or for presentation in an evidence table. Although such data handling may facilitate meta-analyses, it is sometimes needed even when meta-analyses are not done. For example, in trials with more than two intervention groups it may be necessary to combine results for two or more groups (such as groups receiving similar but non-identical interventions), or it may be desirable to include only a subset of the data to match the review’s inclusion criteria. When several different scales (such as for depression) are used across studies, the sign of some scores may need to be reversed to ensure that all scales are aligned (for example, so that low values represent good health on all scales). Standard deviations may have to be reconstructed from other statistics such as P values and t statistics, 115 116 or occasionally they may be imputed from the standard deviations observed in other studies. 117 Time-to-event data also usually need careful conversions to a consistent format. 111 Authors should report details of any such data processing.
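Reconstructing a standard deviation from a reported t statistic is one such transformation. A sketch under the assumption of an equal-variance two-sample t test, with hypothetical numbers (the formulae are standard, not taken from the reviews cited here):

```python
import math

# Illustrative only: recover a pooled standard deviation from a reported
# two-sample t statistic, assuming an equal-variance t test.
def sd_from_t(mean_diff, t_stat, n1, n2):
    se = mean_diff / t_stat                 # SE of the mean difference
    return se / math.sqrt(1 / n1 + 1 / n2)  # pooled SD

# Hypothetical report: mean difference 2.0, t = 2.5, 40 patients per arm
sd = sd_from_t(2.0, 2.5, 40, 40)
print(round(sd, 2))  # 3.58
```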

Statistical combination of data from two or more separate studies in a meta-analysis may be neither necessary nor desirable (see box 5 and item 21). Regardless of the decision to combine individual study results, authors should report how they planned to evaluate between-study variability (heterogeneity or inconsistency) (box 6). The consistency of results across trials may influence the decision of whether to combine trial results in a meta-analysis.

When meta-analysis is done, authors should specify the effect measure (such as relative risk or mean difference) (see item 13), the statistical method (such as inverse variance), and whether a fixed-effects or random-effects approach, or some other method (such as Bayesian) was used (see box 6). If possible, authors should explain the reasons for those choices.
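The mechanics of an inverse-variance pool, with Cochran's Q and the I² inconsistency measure, can be sketched as follows; the effect sizes and variances are hypothetical, and a real analysis would also report confidence intervals:

```python
import math

# Illustrative sketch: fixed-effect inverse-variance pooling with
# Cochran's Q and I^2. Inputs are hypothetical log risk ratios.
def inverse_variance_pool(effects, variances):
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, se_pooled, q, i_squared

effects = [-0.4, -0.2, -0.3]      # hypothetical log risk ratios
variances = [0.04, 0.05, 0.03]    # their variances
pooled, se, q, i2 = inverse_variance_pool(effects, variances)
print(round(pooled, 3), round(i2, 1))
```

With these made-up inputs Q falls below its degrees of freedom, so I² is truncated to 0%, the kind of detail (truncation at zero) worth stating when reporting consistency measures.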

Item 15: Risk of bias across studies

Specify any assessment of risk of bias that may affect the cumulative evidence (such as publication bias, selective reporting within studies).

Examples “For each trial we plotted the effect by the inverse of its standard error. The symmetry of such ‘funnel plots’ was assessed both visually, and formally with Egger’s test, to see if the effect decreased with increasing sample size.” 118

“We assessed the possibility of publication bias by evaluating a funnel plot of the trial mean differences for asymmetry, which can result from the non publication of small trials with negative results…Because graphical evaluation can be subjective, we also conducted an adjusted rank correlation test and a regression asymmetry test as formal statistical tests for publication bias...We acknowledge that other factors, such as differences in trial quality or true study heterogeneity, could produce asymmetry in funnel plots.” 119

Explanation Reviewers should explore the possibility that the available data are biased. They may examine results from the available studies for clues that suggest there may be missing studies (publication bias) or missing data from the included studies (selective reporting bias) (see box 7). Authors should report in detail any methods used to investigate possible bias across studies.
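Egger's regression test, mentioned in the examples above, regresses each study's standardised effect on its precision; an intercept far from zero suggests funnel-plot asymmetry. A minimal sketch with hypothetical inputs (the significance test on the intercept, a t test with n − 2 degrees of freedom, is omitted for brevity):

```python
# Illustrative sketch of Egger's regression intercept for funnel-plot
# asymmetry. Effects and standard errors are hypothetical.
def egger_intercept(effects, std_errors):
    y = [e / s for e, s in zip(effects, std_errors)]  # standardised effects
    x = [1 / s for s in std_errors]                   # precisions
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # ordinary least squares slope and intercept
    slope = (sum(xi * yi for xi, yi in zip(x, y)) - n * mean_x * mean_y) \
            / (sum(xi ** 2 for xi in x) - n * mean_x ** 2)
    return mean_y - slope * mean_x

# Hypothetical pattern: smaller studies (larger SE) show larger effects
effects = [0.5, 0.4, 0.35, 0.3]
std_errors = [0.30, 0.20, 0.15, 0.10]
intercept = egger_intercept(effects, std_errors)
print(round(intercept, 2))  # 1.0
```

As the caveat in the second example notes, a nonzero intercept can also arise from differences in trial quality or true heterogeneity, not only from publication bias.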

It is difficult to assess whether within-study selective reporting is present in a systematic review. If a protocol of an individual study is available, the outcomes in the protocol and the published report can be compared. Even in the absence of a protocol, outcomes listed in the methods section of the published report can be compared with those for which results are presented. 120 In only half of 196 trial reports describing comparisons of two drugs in arthritis were all the effect variables in the methods and results sections the same. 82 In other cases, knowledge of the clinical area may suggest that it is likely that the outcome was measured even if it was not reported. For example, in a particular disease, if one of two linked outcomes is reported but the other is not, then one should question whether the latter has been selectively omitted. 121 122

Only 36% (76 of 212) of therapeutic systematic reviews published in November 2004 reported that study publication bias was considered, and only a quarter of those intended to carry out a formal assessment for that bias. 3 Of 60 meta-analyses in 24 articles published in 2005 in which formal assessments were reported, most were based on fewer than 10 studies; most displayed statistically significant heterogeneity; and many reviewers misinterpreted the results of the tests employed. 123 A review of trials of antidepressants found that meta-analysis of only the published trials gave effect estimates 32% larger on average than when all trials sent to the drug agency were analysed. 67

Item 16: Additional analyses

Describe methods of additional analyses (such as sensitivity or subgroup analyses, meta-regression), if done, indicating which were pre-specified.

Example “Sensitivity analyses were pre-specified. The treatment effects were examined according to quality components (concealed treatment allocation, blinding of patients and caregivers, blinded outcome assessment), time to initiation of statins, and the type of statin. One post-hoc sensitivity analysis was conducted including unpublished data from a trial using cerivastatin.” 124

Explanation Authors may perform additional analyses to help understand whether the results of their review are robust, all of which should be reported. Such analyses include sensitivity analysis, subgroup analysis, and meta-regression. 125

Sensitivity analyses are used to explore the degree to which the main findings of a systematic review are affected by changes in its methods or in the data used from individual studies (such as study inclusion criteria, results of risk of bias assessment). Subgroup analyses address whether the summary effects vary in relation to specific (usually clinical) characteristics of the included studies or their participants. Meta-regression extends the idea of subgroup analysis to the examination of the quantitative influence of study characteristics on the effect size. 126 Meta-regression also allows authors to examine the contribution of different variables to the heterogeneity in study findings. Readers of systematic reviews should be aware that meta-regression has many limitations, including a danger of over-interpretation of findings. 127 128
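The idea behind meta-regression, relating effect size to a study-level covariate with inverse-variance weights, can be sketched as below. All numbers are hypothetical, and a real meta-regression would also account for residual between-study heterogeneity (for example, via random effects), which this sketch omits:

```python
# Illustrative sketch: weighted least-squares meta-regression of
# hypothetical study effect sizes on one study-level covariate.
def weighted_meta_regression(effects, variances, covariate):
    weights = [1 / v for v in variances]
    sw = sum(weights)
    x_bar = sum(w * x for w, x in zip(weights, covariate)) / sw
    y_bar = sum(w * y for w, y in zip(weights, effects)) / sw
    slope = (sum(w * (x - x_bar) * (y - y_bar)
                 for w, x, y in zip(weights, covariate, effects))
             / sum(w * (x - x_bar) ** 2
                   for w, x in zip(weights, covariate)))
    return y_bar - slope * x_bar, slope  # intercept, slope

effects = [0.2, 0.3, 0.5, 0.6]        # hypothetical log odds ratios
variances = [0.04, 0.02, 0.02, 0.04]  # their variances
mean_age = [40, 50, 60, 70]           # hypothetical study-level covariate
intercept, slope = weighted_meta_regression(effects, variances, mean_age)
```

Here the slope estimates how much the (hypothetical) log odds ratio changes per year of mean participant age, exactly the kind of quantitative influence of a study characteristic described above.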

Even with limited data, many additional analyses can be undertaken. The choice of which analysis to undertake will depend on the aims of the review. None of these analyses, however, is exempt from producing potentially misleading results. It is important to inform readers whether these analyses were performed, their rationale, and which were pre-specified.

Item 17: Study selection

Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.

Examples In text: “A total of 10 studies involving 13 trials were identified for inclusion in the review. The search of Medline, PsycInfo and Cinahl databases provided a total of 584 citations. After adjusting for duplicates 509 remained. Of these, 479 studies were discarded because after reviewing the abstracts it appeared that these papers clearly did not meet the criteria. Three additional studies…were discarded because full text of the study was not available or the paper could not be feasibly translated into English. The full text of the remaining 27 citations was examined in more detail. It appeared that 22 studies did not meet the inclusion criteria as described. Five studies…met the inclusion criteria and were included in the systematic review. An additional five studies...that met the criteria for inclusion were identified by checking the references of located, relevant papers and searching for studies that have cited these papers. No unpublished relevant studies were obtained.” 129

See flow diagram in fig 2.

Fig 2 Example flow diagram of study selection. DDW = Digestive Disease Week; UEGW = United European Gastroenterology Week. Adapted from Fuccio et al 130

Explanation Authors should report, ideally with a flow diagram, the total number of records identified from electronic bibliographic sources (including specialised database or registry searches), hand searches of various sources, reference lists, citation indices, and experts. It is useful if authors delineate for readers the number of selected articles that were identified from the different sources so that they can see, for example, whether most articles were identified through electronic bibliographic sources or from references or experts. Literature identified primarily from references or experts may be prone to citation or publication bias. 131 132

The flow diagram and text should describe clearly the process of report selection throughout the review. Authors should report unique records identified in searches, records excluded after preliminary screening (such as screening of titles and abstracts), reports retrieved for detailed evaluation, potentially eligible reports that were not retrievable, retrieved reports that did not meet inclusion criteria and the primary reasons for exclusion, and the studies included in the review. Indeed, the most appropriate layout may vary for different reviews.

Authors should also note the presence of duplicate or supplementary reports so that readers understand the number of individual studies compared with the number of reports that were included in the review. Authors should be consistent in their use of terms, such as whether they are reporting on counts of citations, records, publications, or studies. We believe that reporting the number of studies is the most important.

A flow diagram can be very useful; it should depict all the studies included based on fulfilling the eligibility criteria, and whether data have been combined for statistical analysis. A recent review of 87 systematic reviews found that about half included a QUOROM flow diagram. 133 The authors of this research recommended some important ways that reviewers can improve the use of a flow diagram when describing the flow of information throughout the review process, including a separate flow diagram for each important outcome reported. 133

Item 18: Study characteristics

For each study, present characteristics for which data were extracted (such as study size, PICOS, follow-up period) and provide the citation.

Examples In text: “ Characteristics of included studies

All four studies finally selected for the review were randomised controlled trials published in English. The duration of the intervention was 24 months for the RIO-North America and 12 months for the RIO-Diabetes, RIO-Lipids and RIO-Europe study. Although the last two described a period of 24 months during which they were conducted, only the first 12-months results are provided. All trials had a run-in, as a single blind period before the randomisation.

Participants

The included studies involved 6625 participants. The main inclusion criteria entailed adults (18 years or older), with a body mass index greater than 27 kg/m² and less than 5 kg variation in body weight within the three months before study entry.

Intervention

All trials were multicentric. The RIO-North America was conducted in the USA and Canada, RIO-Europe in Europe and the USA, RIO-Diabetes in the USA and 10 other different countries not specified, and RIO-Lipids in eight unspecified different countries.

The intervention received was placebo, 5 mg of rimonabant or 20 mg of rimonabant once daily in addition to a mild hypocaloric diet (600 kcal/day deficit).

In all studies the primary outcome assessed was weight change from baseline after one year of treatment and the RIO-North America study also evaluated the prevention of weight regain between the first and second year. All studies evaluated adverse effects, including those of any kind and serious events. Quality of life was measured in only one study, but the results were not described (RIO-Europe).

Secondary and additional outcomes

These included prevalence of metabolic syndrome after one year and change in cardiometabolic risk factors such as blood pressure, lipid profile, etc.

No study included mortality and costs as outcome.

The timing of outcome measures was variable and could include monthly investigations, evaluations every three months or a single final evaluation after one year.” 134

In table: See table 2.

Table 2 Example of summary of study characteristics: Summary of included studies evaluating the efficacy of antiemetic agents in acute gastroenteritis. Adapted from DeCamp et al 135

Explanation For readers to gauge the validity and applicability of a systematic review’s results, they need to know something about the included studies. Such information includes PICOS (box 2) and specific information relevant to the review question. For example, if the review is examining the long term effects of antidepressants for moderate depressive disorder, authors should report the follow-up periods of the included studies. For each included study, authors should provide a citation for the source of their information regardless of whether or not the study is published. This information makes it easier for interested readers to retrieve the relevant publications or documents.

Reporting study-level data also allows the comparison of the main characteristics of the studies included in the review. Authors should present enough detail to allow readers to make their own judgments about the relevance of included studies. Such information also makes it possible for readers to conduct their own subgroup analyses and interpret subgroups, based on study characteristics.

Authors should avoid, whenever possible, assuming information when it is missing from a study report (such as sample size, method of randomisation). Reviewers may contact the original investigators to try to obtain missing information or confirm the data extracted for the systematic review. If this information is not obtained, this should be noted in the report. If information is imputed, the reader should be told how this was done and for which items. Presenting study-level data makes it possible to clearly identify unpublished information obtained from the original researchers and make it available for the public record.

Typically, study-level characteristics are presented as a table as in the example (table 2). Such presentation ensures that all pertinent items are addressed and that missing or unclear information is clearly indicated. Although paper based journals do not generally allow for the quantity of information available in electronic journals or Cochrane reviews, this should not be accepted as an excuse for omission of important aspects of the methods or results of included studies, since these can, if necessary, be shown on a website.

Following the presentation and description of each included study, as discussed above, reviewers usually provide a narrative summary of the studies. Such a summary provides readers with an overview of the included studies. It may, for example, address the languages of the published papers, years of publication, and geographic origins of the included studies.

The PICOS framework is often helpful in reporting the narrative summary indicating, for example, the clinical characteristics and disease severity of the participants and the main features of the intervention and of the comparison group. For non-pharmacological interventions, it may be helpful to specify for each study the key elements of the intervention received by each group. Full details of the interventions in included studies were reported in only three of 25 systematic reviews relevant to general practice. 84

Item 19: Risk of bias within studies

Present data on risk of bias of each study and, if available, any outcome-level assessment (see item 12).

Example See table 3.

Table 3 Example of assessment of the risk of bias: Quality measures of the randomised controlled trials that failed to fulfil any one of six markers of validity. Adapted from Devereaux et al 96

Explanation We recommend that reviewers assess the risk of bias in the included studies using a standard approach with defined criteria (see item 12). They should report the results of any such assessments. 89

Reporting only summary data (such as “two of eight trials adequately concealed allocation”) is inadequate because it fails to inform readers which studies had the particular methodological shortcoming. A more informative approach is to explicitly report the methodological features evaluated for each study. The Cochrane Collaboration’s new tool for assessing the risk of bias also requests that authors substantiate these assessments with any relevant text from the original studies. 11 It is often easiest to provide these data in a tabular format, as in the example. However, a narrative summary describing the tabular data can also be helpful for readers.

Item 20: Results of individual studies

For all outcomes considered (benefits and harms), present, for each study, simple summary data for each intervention group and effect estimates and confidence intervals, ideally with a forest plot.

Examples See table 4 and fig 3.

Fig 3 Example of summary results: Overall failure (defined as failure of assigned regimen or relapse) with tetracycline-rifampicin versus tetracycline-streptomycin. Adapted from Skalsky et al 137

Table 4 Example of summary results: Heterotopic ossification in trials comparing radiotherapy to non-steroidal anti-inflammatory drugs after major hip procedures and fractures. Adapted from Pakos et al 136

Explanation Publication of summary data from individual studies allows the analyses to be reproduced and other analyses and graphical displays to be investigated. Others may wish to assess the impact of excluding particular studies or consider subgroup analyses not reported by the review authors. Displaying the results of each treatment group in included studies also enables inspection of individual study features. For example, if only odds ratios are provided, readers cannot assess the variation in event rates across the studies, making the odds ratio impossible to interpret. 138 Additionally, because data extraction errors in meta-analyses are common and can be large, 139 the presentation of the results from individual studies makes it easier to identify errors. For continuous outcomes, readers may wish to examine the consistency of standard deviations across studies, for example, to be reassured that standard deviation and standard error have not been confused. 138

For each study, the summary data for each intervention group are generally given for binary outcomes as frequencies with and without the event (or as proportions such as 12/45). It is not sufficient to report event rates per intervention group as percentages. The required summary data for continuous outcomes are the mean, standard deviation, and sample size for each group. In reviews that examine time-to-event data, the authors should report the log hazard ratio and its standard error (or confidence interval) for each included study. Sometimes, essential data are missing from the reports of the included studies and cannot be calculated from other data but may need to be imputed by the reviewers. For example, the standard deviation may be imputed using the typical standard deviations in the other trials 116 117 (see item 14). Whenever relevant, authors should indicate which results were not reported directly and had to be estimated from other information (see item 13). In addition, the inclusion of unpublished data should be noted.
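From group-level summary data of this kind, readers (or reviewers checking for extraction errors) can recompute the effect estimate and its confidence interval. A minimal sketch for a binary outcome, using the usual large-sample Wald interval on the log scale; all counts are invented for illustration:

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio with a 95% CI from the per-group event frequencies."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of log(RR), standard large-sample formula
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical trial: 12/120 events on treatment vs 24/118 on control
rr, lo, hi = risk_ratio(12, 120, 24, 118)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR 0.49 (95% CI 0.26 to 0.94)
```

Note that the computation needs the raw frequencies in each group; percentages alone, or an odds ratio without denominators, would not allow this kind of verification.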

For all included studies it is important to present the estimated effect with a confidence interval. This information may be incorporated in a table showing study characteristics or may be shown in a forest plot. 140 The key elements of the forest plot are the effect estimates and confidence intervals for each study shown graphically, but it is preferable also to include, for each study, the numerical group-specific summary data, the effect size and confidence interval, and the percentage weight (see second example, fig 3). For discussion of the results of meta-analysis, see item 21.

In principle, all the above information should be provided for every outcome considered in the review, including both benefits and harms. When there are too many outcomes for full information to be included, results for the most important outcomes should be included in the main report with other information provided as a web appendix. The choice of the information to present should be justified in light of what was originally stated in the protocol. Authors should explicitly mention if the planned main outcomes cannot be presented due to lack of information. There is some evidence that information on harms is only rarely reported in systematic reviews, even when it is available in the original studies. 141 Selective omission of harms results biases a systematic review and decreases its ability to contribute to informed decision making.

Item 21: Syntheses of results

Present the main results of the review. If meta-analyses are done, include for each, confidence intervals and measures of consistency.

Examples “Mortality data were available for all six trials, randomizing 311 patients and reporting data for 305 patients. There were no deaths reported in the three respiratory syncytial virus/severe bronchiolitis trials; thus our estimate is based on three trials randomizing 232 patients, 64 of whom died. In the pooled analysis, surfactant was associated with significantly lower mortality (relative risk=0.7, 95% confidence interval=0.4–0.97, P=0.04). There was no evidence of heterogeneity (I²=0%).” 142

“Because the study designs, participants, interventions, and reported outcome measures varied markedly, we focused on describing the studies, their results, their applicability, and their limitations and on qualitative synthesis rather than meta-analysis.” 143

“We detected significant heterogeneity within this comparison (I²=46.6%, χ²=13.11, df=7, P=0.07). Retrospective exploration of the heterogeneity identified one trial that seemed to differ from the others. It included only small ulcers (wound area less than 5 cm²). Exclusion of this trial removed the statistical heterogeneity and did not affect the finding of no evidence of a difference in healing rate between hydrocolloids and simple low adherent dressings (relative risk=0.98, [95% confidence interval] 0.85 to 1.12, I²=0%).” 144

Explanation Results of systematic reviews should be presented in an orderly manner. Initial narrative descriptions of the evidence covered in the review (see item 18) may tell readers important things about the study populations and the design and conduct of studies. These descriptions can facilitate the examination of patterns across studies. They may also provide important information about applicability of evidence, suggest the likely effects of any major biases, and allow consideration, in a systematic manner, of multiple explanations for possible differences of findings across studies.

If authors have conducted one or more meta-analyses, they should present the results as an estimated effect across studies with a confidence interval. It is often simplest to show each meta-analysis summary with the actual results of included studies in a forest plot (see item 20). 140 It should always be clear which of the included studies contributed to each meta-analysis. Authors should also provide, for each meta-analysis, a measure of the consistency of the results from the included studies such as I² (heterogeneity, see box 6); a confidence interval may also be given for this measure. 145 If no meta-analysis was performed, the qualitative inferences should be presented as systematically as possible with an explanation of why meta-analysis was not done, as in the second example above. 143 Readers may find a forest plot, without a summary estimate, helpful in such cases.
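The quantities involved (a pooled estimate with its confidence interval, Cochran's Q, and I²) follow directly from the study-level effects and standard errors. A minimal fixed-effect, inverse-variance sketch on invented log risk ratios:

```python
import math

# (log risk ratio, standard error) for three hypothetical trials
studies = [(-0.36, 0.20), (-0.51, 0.25), (-0.11, 0.30)]

weights = [1 / se**2 for _, se in studies]           # inverse-variance weights
total_w = sum(weights)
pooled = sum(w * y for w, (y, _) in zip(weights, studies)) / total_w
se_pooled = math.sqrt(1 / total_w)

# Cochran's Q and the I^2 inconsistency measure
q = sum(w * (y - pooled)**2 for w, (y, _) in zip(weights, studies))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
print(f"pooled RR {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), I2 = {i2:.0f}%")
```

With these made-up inputs the pooled relative risk is about 0.70 with I²=0%, mirroring the structure of the first example above; a random-effects model would additionally estimate between-study variance, which this sketch omits.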

Authors should in general report syntheses for all the outcome measures they set out to investigate (that is, those described in the protocol, see item 4) to allow readers to draw their own conclusions about the implications of the results. Readers should be made aware of any deviations from the planned analysis. Authors should tell readers if the planned meta-analysis was not thought appropriate or possible for some of the outcomes and the reasons for that decision.

It may not always be sensible to give meta-analysis results and forest plots for each outcome. If the review addresses a broad question, there may be a very large number of outcomes. Also, some outcomes may have been reported in only one or two studies, in which case forest plots are of little value and may be seriously biased.

Of 300 systematic reviews indexed in Medline in 2004, a little more than half (54%) included meta-analyses, of which the majority (91%) reported assessing for inconsistency in results.

Item 22: Risk of bias across studies

Present results of any assessment of risk of bias across studies (see item 15).

Example “Strong evidence of heterogeneity (I²=79%, P<0.001) was observed. To explore this heterogeneity, a funnel plot was drawn. The funnel plot [fig 4] shows evidence of considerable asymmetry.” 146

Fig 4 Example of a funnel plot showing evidence of considerable asymmetry. SE = standard error. Adapted from Appleton et al 146

“Specifically, four sertraline trials involving 486 participants and one citalopram trial involving 274 participants were reported as having failed to achieve a statistically significant drug effect, without reporting mean HRSD [Hamilton Rating Scale for Depression] scores. We were unable to find data from these trials on pharmaceutical company Web sites or through our search of the published literature. These omissions represent 38% of patients in sertraline trials and 23% of patients in citalopram trials. Analyses with and without inclusion of these trials found no differences in the patterns of results; similarly, the revealed patterns do not interact with drug type. The purpose of using the data obtained from the FDA was to avoid publication bias, by including unpublished as well as published trials. Inclusion of only those sertraline and citalopram trials for which means were reported to the FDA would constitute a form of reporting bias similar to publication bias and would lead to overestimation of drug–placebo differences for these drug types. Therefore, we present analyses only on data for medications for which complete clinical trials’ change was reported.” 147

Explanation Authors should present the results of any assessments of risk of bias across studies. If a funnel plot is reported, authors should specify the effect estimate and measure of precision used, presented typically on the x axis and y axis, respectively. Authors should describe if and how they have tested the statistical significance of any possible asymmetry (see item 15). Results of any investigations of selective reporting of outcomes within studies (as discussed in item 15) should also be reported. Also, we advise authors to tell readers if any pre-specified analyses for assessing risk of bias across studies were not completed and the reasons (such as too few included studies).
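One common statistical test of funnel plot asymmetry is Egger's regression of the standardised effect on precision; an intercept far from zero suggests asymmetry. A minimal sketch computing that intercept on invented study data (the data are constructed so that smaller studies show larger effects, the pattern a funnel plot would display as asymmetry):

```python
import math

# (effect on the log scale, standard error); all values invented.
studies = [(-0.80, 0.5), (-0.60, 0.4), (-0.50, 0.3), (-0.30, 0.2), (-0.25, 0.1)]

x = [1 / se for _, se in studies]        # precision
y = [eff / se for eff, se in studies]    # standardised effect

# Ordinary least-squares regression of standardised effect on precision
n = len(studies)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx              # Egger's test examines this intercept

print(f"Egger intercept = {intercept:.2f}")  # clearly non-zero: asymmetry suspected
```

A t test of this intercept against zero gives the usual significance test; the sketch shows only the quantity being tested, not the inference step.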

Item 23: Additional analyses

Give results of additional analyses, if done (such as sensitivity or subgroup analyses, meta-regression [see item 16]).

Example “...benefits of chondroitin were smaller in trials with adequate concealment of allocation compared with trials with unclear concealment (P for interaction =0.050), in trials with an intention-to-treat analysis compared with those that had excluded patients from the analysis (P for interaction =0.017), and in large compared with small trials (P for interaction =0.022).” 148

“Subgroup analyses according to antibody status, antiviral medications, organ transplanted, treatment duration, use of antilymphocyte therapy, time to outcome assessment, study quality and other aspects of study design did not demonstrate any differences in treatment effects. Multivariate meta-regression showed no significant difference in CMV [cytomegalovirus] disease after allowing for potential confounding or effect-modification by prophylactic drug used, organ transplanted or recipient serostatus in CMV positive recipients and CMV negative recipients of CMV positive donors.” 149

Explanation Authors should report any subgroup or sensitivity analyses and whether they were pre-specified (see items 5 and 16). For analyses comparing subgroups of studies (such as separating studies of low and high dose aspirin), the authors should report any tests for interactions, as well as estimates and confidence intervals from meta-analyses within each subgroup. Similarly, meta-regression results (see item 16) should not be limited to P values but should include effect sizes and confidence intervals, 150 as the first example reported above does in a table. The amount of data included in each additional analysis should be specified if different from that considered in the main analyses. This information is especially relevant for sensitivity analyses that exclude some studies; for example, those with high risk of bias.
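A simple form of the interaction test mentioned here is a z test on the difference between two subgroup estimates on the log scale. A sketch with invented numbers (the subgroup labels are hypothetical):

```python
import math

def interaction_test(est1, se1, est2, se2):
    """Two-sided z test that two independent subgroup estimates (log scale) differ."""
    diff = est1 - est2
    se_diff = math.sqrt(se1**2 + se2**2)
    z = diff / se_diff
    # Two-sided P value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return diff, z, p

# Hypothetical pooled log risk ratios: trials with adequate vs unclear concealment
diff, z, p = interaction_test(-0.40, 0.10, -0.10, 0.12)
print(f"difference {diff:.2f}, z = {z:.2f}, P for interaction = {p:.3f}")
```

Reporting the subgroup estimates with confidence intervals alongside such a P value, as the text recommends, lets readers judge both the size and the uncertainty of the difference.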

Importantly, all additional analyses conducted should be reported, not just those that were statistically significant. This information will help avoid selective outcome reporting bias within the review as has been demonstrated in reports of randomised controlled trials. 42 44 121 151 152 Results from exploratory subgroup or sensitivity analyses should be interpreted cautiously, bearing in mind the potential for multiple analyses to mislead.

Item 24: Summary of evidence

Summarise the main findings, including the strength of evidence for each main outcome; consider their relevance to key groups (such as healthcare providers, users, and policy makers).

Example “Overall, the evidence is not sufficiently robust to determine the comparative effectiveness of angioplasty (with or without stenting) and medical treatment alone. Only 2 randomized trials with long-term outcomes and a third randomized trial that allowed substantial crossover of treatment after 3 months directly compared angioplasty and medical treatment…the randomized trials did not evaluate enough patients or did not follow patients for a sufficient duration to allow definitive conclusions to be made about clinical outcomes, such as mortality and cardiovascular or kidney failure events.

Some acceptable evidence from comparison of medical treatment and angioplasty suggested no difference in long-term kidney function but possibly better blood pressure control after angioplasty, an effect that may be limited to patients with bilateral atherosclerotic renal artery stenosis. The evidence regarding other outcomes is weak. Because the reviewed studies did not explicitly address patients with rapid clinical deterioration who may need acute intervention, our conclusions do not apply to this important subset of patients.” 143

Explanation Authors should give a brief and balanced summary of the nature and findings of the review. Sometimes, outcomes for which little or no data were found should be noted due to potential relevance for policy decisions and future research. Applicability of the review’s findings—to different patients, settings, or target audiences, for example—should be mentioned. Although there is no standard way to assess applicability simultaneously to different audiences, some systems do exist. 153 Sometimes, authors formally rate or assess the overall body of evidence addressed in the review and can present the strength of their summary recommendations tied to their assessments of the quality of evidence (such as the GRADE system). 10

Authors need to keep in mind that statistical significance of the effects does not always suggest clinical or policy relevance. Likewise, a non-significant result does not demonstrate that a treatment is ineffective. Authors should ideally clarify trade-offs and how the values attached to the main outcomes would lead different people to make different decisions. In addition, adroit authors consider factors that are important in translating the evidence to different settings and that may modify the estimates of effects reported in the review. 153 Patients and healthcare providers may be primarily interested in which intervention is most likely to provide a benefit with acceptable harms, while policy makers and administrators may value data on organisational impact and resource utilisation.

Item 25: Limitations

Discuss limitations at study and outcome level (such as risk of bias), and at review level (such as incomplete retrieval of identified research, reporting bias).

Examples Outcome level: “The meta-analysis reported here combines data across studies in order to estimate treatment effects with more precision than is possible in a single study. The main limitation of this meta-analysis, as with any overview, is that the patient population, the antibiotic regimen and the outcome definitions are not the same across studies.” 154

Study and review level: “Our study has several limitations. The quality of the studies varied. Randomization was adequate in all trials; however, 7 of the articles did not explicitly state that analysis of data adhered to the intention-to-treat principle, which could lead to overestimation of treatment effect in these trials, and we could not assess the quality of 4 of the 5 trials reported as abstracts. Analyses did not identify an association between components of quality and re-bleeding risk, and the effect size in favour of combination therapy remained statistically significant when we excluded trials that were reported as abstracts.

Publication bias might account for some of the effect we observed. Smaller trials are, in general, analyzed with less methodological rigor than larger studies, and an asymmetrical funnel plot suggests that selective reporting may have led to an overestimation of effect sizes in small trials.” 155

Explanation A discussion of limitations should address the validity (that is, risk of bias) and reporting (informativeness) of the included studies, limitations of the review process, and generalisability (applicability) of the review. Readers may find it helpful if authors discuss whether studies were threatened by serious risks of bias, whether the estimates of the effect of the intervention are too imprecise, or if there were missing data for many participants or important outcomes.

Limitations of the review process might include limitations of the search (such as restricting to English-language publications), and any difficulties in the study selection, appraisal, and meta-analysis processes. For example, poor or incomplete reporting of study designs, patient populations, and interventions may hamper interpretation and synthesis of the included studies. 84 Applicability of the review may be affected if there are limited data for certain populations or subgroups where the intervention might perform differently or few studies assessing the most important outcomes of interest; or if there is a substantial amount of data relating to an outdated intervention or comparator or heavy reliance on imputation of missing values for summary estimates (item 14).

Item 26: Conclusions

Provide a general interpretation of the results in the context of other evidence, and implications for future research.

Example Implications for practice: “Between 1995 and 1997 five different meta-analyses of the effect of antibiotic prophylaxis on infection and mortality were published. All confirmed a significant reduction in infections, though the magnitude of the effect varied from one review to another. The estimated impact on overall mortality was less evident and has generated considerable controversy on the cost effectiveness of the treatment. Only one among the five available reviews, however, suggested that a weak association between respiratory tract infections and mortality exists and lack of sufficient statistical power may have accounted for the limited effect on mortality.”

Implications for research: “A logical next step for future trials would thus be the comparison of this protocol against a regimen of a systemic antibiotic agent only to see whether the topical component can be dropped. We have already identified six such trials but the total number of patients so far enrolled (n=1056) is too small for us to be confident that the two treatments are really equally effective. If the hypothesis is therefore considered worth testing more and larger randomised controlled trials are warranted. Trials of this kind, however, would not resolve the relevant issue of treatment induced resistance. To produce a satisfactory answer to this, studies with a different design would be necessary. Though a detailed discussion goes beyond the scope of this paper, studies in which the intensive care unit rather than the individual patient is the unit of randomisation and in which the occurrence of antibiotic resistance is monitored over a long period of time should be undertaken.” 156

Explanation Systematic reviewers sometimes draw conclusions that are too optimistic 157 or do not consider the harms equally as carefully as the benefits, although some evidence suggests these problems are decreasing. 158 If conclusions cannot be drawn because there are too few reliable studies, or too much uncertainty, this should be stated. Such a finding can be as important as finding consistent effects from several large studies.

Authors should try to relate the results of the review to other evidence, as this helps readers to better interpret the results. For example, there may be other systematic reviews about the same general topic that have used different methods or have addressed related but slightly different questions. 159 160 Similarly, there may be additional information relevant to decision makers, such as the cost-effectiveness of the intervention (such as health technology assessment). Authors may discuss the results of their review in the context of existing evidence regarding other interventions.

We advise authors also to make explicit recommendations for future research. In a sample of 2535 Cochrane reviews, 82% included recommendations for research with specific interventions, 30% suggested the appropriate type of participants, and 52% suggested outcome measures for future research. 161 There is no corresponding assessment about systematic reviews published in medical journals, but we believe that such recommendations are much less common in those reviews.

Clinical research should not be planned without a thorough knowledge of similar, existing research. 162 There is evidence that this still does not occur as it should and that authors of primary studies do not consider a systematic review when they design their studies. 163 We believe systematic reviews have great potential for guiding future clinical research.

Item 27: Funding

Describe sources of funding or other support (such as supply of data) for the systematic review, and the role of funders for the systematic review.

Examples “The evidence synthesis upon which this article was based was funded by the Centers for Disease Control and Prevention for the Agency for Healthcare Research and Quality and the U.S. Prevention Services Task Force.” 164

“Role of funding source: The funders played no role in study design, collection, analysis, interpretation of data, writing of the report, or in the decision to submit the paper for publication. They accept no responsibility for the contents.” 165

Explanation Authors of systematic reviews, like those of any other research study, should disclose any funding they received to carry out the review, or state if the review was not funded. Lexchin and colleagues 166 observed that outcomes of reports of randomised trials and meta-analyses of clinical trials funded by the pharmaceutical industry are more likely to favor the sponsor’s product compared with studies with other sources of funding. Similar results have been reported elsewhere. 167 168 Analogous data suggest that similar biases may affect the conclusions of systematic reviews. 169

Given the potential role of systematic reviews in decision making, we believe authors should be transparent about the funding and the role of funders, if any. Sometimes the funders will provide services, such as those of a librarian to complete the searches for relevant literature or access to commercial databases not available to the reviewers. Any level of funding or services provided to the systematic review team should be reported. Authors should also report whether the funder had any role in the conduct or report of the review. Beyond funding issues, authors should report any real or perceived conflicts of interest related to their role or the role of the funder in the reporting of the systematic review. 170

In a survey of 300 systematic reviews published in November 2004, funding sources were not reported in 41% of the reviews. 3 Only a minority of reviews (2%) reported being funded by for-profit sources, but the true proportion may be higher. 171

Additional considerations for systematic reviews of non-randomised intervention studies or for other types of systematic reviews

The PRISMA statement and this document have focused on systematic reviews of reports of randomised trials. Other study designs, including non-randomised studies, quasi-experimental studies, and interrupted time series, are included in some systematic reviews that evaluate the effects of healthcare interventions. 172 173 The methods of these reviews may differ to varying degrees from the typical intervention review, for example regarding the literature search, data abstraction, assessment of risk of bias, and analysis methods. As such, their reporting demands might also differ from what we have described here. A useful principle is for systematic review authors to ensure that their methods are reported with adequate clarity and transparency to enable readers to critically judge the available evidence and replicate or update the research.

In some systematic reviews, the authors will seek the raw data from the original researchers to calculate the summary statistics. These systematic reviews are called individual patient (or participant) data reviews. 40 41 Individual patient data meta-analyses may also be conducted with prospective accumulation of data rather than retrospective accumulation of existing data. Here too, extra information about the methods will need to be reported.

Other types of systematic reviews exist. Realist reviews aim to determine how complex programmes work in specific contexts and settings. 174 Meta-narrative reviews aim to explain complex bodies of evidence through mapping and comparing different overarching storylines. 175 Network meta-analyses, also known as multiple treatments meta-analyses, can be used to analyse data from comparisons of many different treatments. 176 177 They use both direct and indirect comparisons and can be used to compare interventions that have not been directly compared.

We believe that the issues we have highlighted in this paper are relevant to ensure transparency and understanding of the processes adopted and the limitations of the information presented in systematic reviews of different types. We hope that PRISMA can be the basis for more detailed guidance on systematic reviews of other types of research, including diagnostic accuracy and epidemiological studies.

We developed the PRISMA statement using an approach for developing reporting guidelines that has evolved over several years. 178 The overall aim of PRISMA is to help ensure the clarity and transparency of reporting of systematic reviews, and recent data indicate that this reporting guidance is much needed. 3 PRISMA is not intended to be a quality assessment tool and it should not be used as such.

This PRISMA explanation and elaboration document was developed to facilitate the understanding, uptake, and dissemination of the PRISMA statement and hopefully provide a pedagogical framework for those interested in conducting and reporting systematic reviews. It follows a format similar to that used in other explanatory documents. 17 18 19 Following the recommendations in the PRISMA checklist may increase the word count of a systematic review report. We believe, however, that the benefit of readers being able to critically appraise a clear, complete, and transparent systematic review report outweighs the possible slight increase in the length of the report.

While the aims of PRISMA are to reduce the risk of flawed reporting of systematic reviews and improve the clarity and transparency in how reviews are conducted, we have little data to state more definitively whether this “intervention” will achieve its intended goal. A previous effort to evaluate QUOROM was not successfully completed. 178 Publication of the QUOROM statement was delayed for two years while a research team attempted to evaluate its effectiveness by conducting a randomised controlled trial with the participation of eight major medical journals. Unfortunately that trial was not completed due to accrual problems (David Moher, personal communication). Other evaluation methods might be easier to conduct. At least one survey of 139 published systematic reviews in the critical care literature 179 suggests that their quality improved after the publication of QUOROM.

If the PRISMA statement is endorsed by and adhered to in journals, as other reporting guidelines have been, 17 18 19 180 there should be evidence of improved reporting of systematic reviews. For example, there have been several evaluations of whether the use of CONSORT improves reports of randomised controlled trials. A systematic review of these studies 181 indicates that use of CONSORT is associated with improved reporting of certain items, such as allocation concealment. We aim to evaluate the benefits (that is, improved reporting) and possible adverse effects (such as increased word length) of PRISMA and we encourage others to consider doing likewise.

Even though we did not carry out a systematic literature search to produce our checklist, and this is indeed a limitation of our effort, PRISMA was developed using an evidence based approach whenever possible. Checklist items were included if there was evidence that not reporting the item was associated with increased risk of bias, or where it was clear that information was necessary to appraise the reliability of a review. To keep PRISMA up to date and as evidence based as possible requires regular vigilance of the literature, which is growing rapidly. Currently the Cochrane Methodology Register has more than 11 000 records pertaining to the conduct and reporting of systematic reviews and other evaluations of health and social care. For some checklist items, such as reporting the abstract (item 2), we have used evidence from elsewhere in the belief that the issue applies equally well to reporting of systematic reviews. Yet for other items, evidence does not exist; for example, whether a training exercise improves the accuracy and reliability of data extraction. We hope PRISMA will act as a catalyst to help generate further evidence that can be considered when further revising the checklist in the future.

More than 10 years have passed between the development of the QUOROM statement and its update, the PRISMA statement. We aim to update PRISMA more frequently. We hope that the implementation of PRISMA will be better than it has been for QUOROM. There are at least two reasons to be optimistic. First, systematic reviews are increasingly used by healthcare providers to inform “best practice” patient care. Policy analysts and managers are using systematic reviews to inform healthcare decision making and to better target future research. Second, we anticipate benefits from the development of the EQUATOR Network, described below.

Developing any reporting guideline requires considerable effort, experience, and expertise. While reporting guidelines have been successful for some individual efforts, 17 18 19 there are likely others who want to develop reporting guidelines who possess little time, experience, or knowledge as to how to do so appropriately. The EQUATOR (enhancing the quality and transparency of health research) Network aims to help such individuals and groups by serving as a global resource for anybody interested in developing reporting guidelines, regardless of the focus. 7 180 182 The overall goal of EQUATOR is to improve the quality of reporting of all health science research through the development and translation of reporting guidelines. Beyond this aim, the network plans to develop a large web presence by developing and maintaining a resource centre of reporting tools, and other information for reporting research ( www.equator-network.org/ ).

We encourage healthcare journals and editorial groups, such as the World Association of Medical Editors and the International Committee of Medical Journal Editors, to endorse PRISMA in much the same way as they have endorsed other reporting guidelines, such as CONSORT. We also encourage editors of healthcare journals to support PRISMA by updating their “instructions to authors” and including the PRISMA web address, and by raising awareness through specific editorial actions.

Box 1: Terminology

The terminology used to describe systematic reviews and meta-analyses has evolved over time and varies between fields. Different terms have been used by different groups, such as educators and psychologists. The conduct of a systematic review comprises several explicit and reproducible steps, such as identifying all likely relevant records, selecting eligible studies, assessing the risk of bias, extracting data, qualitative synthesis of the included studies, and possibly meta-analyses.

Initially this entire process was termed a meta-analysis and was so defined in the QUOROM statement. 8 More recently, especially in healthcare research, there has been a trend towards preferring the term systematic review. If quantitative synthesis is performed, this last stage alone is referred to as a meta-analysis. The Cochrane Collaboration uses this terminology, 9 under which a meta-analysis, if performed, is a component of a systematic review. Regardless of the question addressed and the complexities involved, it is always possible to complete a systematic review of existing data, but not always possible or desirable to quantitatively synthesise results because of clinical, methodological, or statistical differences across the included studies. Conversely, with prospective accumulation of studies and datasets where the plan is eventually to combine them, the term “(prospective) meta-analysis” may make more sense than “systematic review.”

For retrospective efforts, one possibility is to use the term systematic review for the whole process up to the point when one decides whether to perform a quantitative synthesis. If a quantitative synthesis is performed, some researchers refer to this as a meta-analysis. This definition is similar to that found in the current edition of the Dictionary of Epidemiology . 183

While we recognise that the use of these terms is inconsistent and there is residual disagreement among the members of the panel working on PRISMA, we have adopted the definitions used by the Cochrane Collaboration. 9

Systematic review A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimising bias, thus providing reliable findings from which conclusions can be drawn and decisions made. 184 185 The key characteristics of a systematic review are (a) a clearly stated set of objectives with an explicit, reproducible methodology; (b) a systematic search that attempts to identify all studies that would meet the eligibility criteria; (c) an assessment of the validity of the findings of the included studies, such as through the assessment of risk of bias; and (d) systematic presentation and synthesis of the characteristics and findings of the included studies.

Meta-analysis Meta-analysis is the use of statistical techniques to integrate and summarise the results of included studies. Many systematic reviews contain meta-analyses, but not all. By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.

Box 2: Helping to develop the research question(s): the PICOS approach

Formulating relevant and precise questions that can be answered in a systematic review can be complex and time consuming. A structured approach for framing questions that uses five components may help facilitate the process. This approach is commonly known by the acronym “PICOS” where each letter refers to a component: the patient population or the disease being addressed (P), the interventions or exposure (I), the comparator group (C), the outcome or endpoint (O), and the study design chosen (S). 186 Issues relating to PICOS affect several PRISMA items (items 6, 8, 9, 10, 11, and 18).

P— Providing information about the population requires a precise definition of a group of participants (often patients), such as men over the age of 65 years, their defining characteristics of interest (often disease), and possibly the setting of care considered, such as an acute care hospital.

I— The interventions (exposures) under consideration in the systematic review need to be transparently reported. For example, if the reviewers answer a question regarding the association between a woman’s prenatal exposure to folic acid and subsequent offspring’s neural tube defects, reporting the dose, frequency, and duration of folic acid used in different studies is likely to be important for readers to interpret the review’s results and conclusions. Other interventions (exposures) might include diagnostic, preventive, or therapeutic treatments; arrangements of specific processes of care; lifestyle changes; psychosocial or educational interventions; or risk factors.

C— Clearly reporting the comparator (control) group intervention(s)—such as usual care, drug, or placebo—is essential for readers to fully understand the selection criteria of primary studies included in the systematic review, and might be a source of heterogeneity investigators have to deal with. Comparators are often poorly described. Clearly reporting what the intervention is compared with is important and may sometimes have implications for the inclusion of studies in a review—many reviews compare with “standard care,” which is otherwise undefined; this should be properly addressed by authors.

O— The outcomes of the intervention being assessed—such as mortality, morbidity, symptoms, or quality of life improvements—should be clearly specified as they are required to interpret the validity and generalisability of the systematic review’s results.

S— Finally, the type of study design(s) included in the review should be reported. Some reviews include only reports of randomised trials, whereas others have broader design criteria and include randomised trials and certain types of observational studies. Still other reviews, such as those specifically answering questions related to harms, may include a wide variety of designs ranging from cohort studies to case reports. Whatever study designs are included in the review, these should be reported.

Independently from how difficult it is to identify the components of the research question, the important point is that a structured approach is preferable, and this extends beyond systematic reviews of effectiveness. Ideally the PICOS criteria should be formulated a priori, in the systematic review’s protocol, although some revisions might be required because of the iterative nature of the review process. Authors are encouraged to report their PICOS criteria and whether any modifications were made during the review process. A useful example in this realm is the appendix of the “systematic reviews of water fluoridation” undertaken by the Centre for Reviews and Dissemination. 187
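Purely as an illustration of keeping the PICOS components explicit and reportable, the five components can be captured as a simple structured record. Nothing below is prescribed by PRISMA; the field names and the example question, loosely based on the folic acid example above, are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PICOS:
    """One review question broken into its five PICOS components.

    Illustrative sketch only; the field names and example values are
    assumptions, not part of the PRISMA statement.
    """
    population: str    # P: participants, condition, and setting
    intervention: str  # I: intervention or exposure under review
    comparator: str    # C: control or comparison intervention
    outcome: str       # O: endpoints used to judge the intervention
    study_design: str  # S: eligible study designs

# Hypothetical question, loosely based on the folic acid example above
question = PICOS(
    population="pregnant women, any care setting",
    intervention="prenatal folic acid supplementation",
    comparator="no supplementation or placebo",
    outcome="neural tube defects in offspring",
    study_design="randomised trials and cohort studies",
)
```

Recording the criteria in one place like this also makes it straightforward to report any modifications made during the iterative review process alongside the protocol.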

Box 3: Identification of study reports and data extraction

Comprehensive searches usually result in a large number of identified records, a much smaller number of studies included in the systematic review, and even fewer of these studies included in any meta-analyses. Reports of systematic reviews often provide little detail as to the methods used by the review team in this process. Readers are often left with what can be described as the “X-files” phenomenon, as it is unclear what occurs between the initial set of identified records and those finally included in the review.

Sometimes, review authors simply report the number of included studies; more often they report the initial number of identified records and the number of included studies. Rarely, although this is optimal for readers, do review authors report the number of identified records, the smaller number of potentially relevant studies, and the even smaller number of included studies, by outcome. Review authors also need to differentiate between the number of reports and studies. Often there will not be a 1:1 ratio of reports to studies and this information needs to be described in the systematic review report.

Ideally, the identification of study reports should be reported as text in combination with use of the PRISMA flow diagram. While we recommend use of the flow diagram, a small number of reviews might be particularly simple and can be sufficiently described with a few brief sentences of text. More generally, review authors will need to report the process used for each step: screening the identified records; examining the full text of potentially relevant studies (and reporting the number that could not be obtained); and applying eligibility criteria to select the included studies.

Such descriptions should also detail how potentially eligible records were promoted to the next stage of the review (such as full text screening) and to the final stage of this process, the included studies. Often review teams have three response options for excluding records or promoting them to the next stage of the winnowing process: “yes,” “no,” and “maybe.”

Similarly, some detail should be reported on who participated and how such processes were completed. For example, a single person may screen the identified records while a second person independently examines a small sample of them. The entire winnowing process is one of “good bookkeeping” whereby interested readers should be able to work backwards from the included studies to come up with the same numbers of identified records.

There is often a paucity of information describing the data extraction processes in reports of systematic reviews. Authors may simply report that “relevant” data were extracted from each included study with little information about the processes used for data extraction. It may be useful for readers to know whether a systematic review’s authors developed, a priori or not, a data extraction form, whether multiple forms were used, the number of questions, whether the form was pilot tested, and who completed the extraction. For example, it is important for readers to know whether one or more people extracted data, and if so, whether this was completed independently, whether “consensus” data were used in the analyses, and if the review team completed an informal training exercise or a more formal reliability exercise.

Box 4: Study quality and risk of bias

In this paper, and elsewhere, 11 we sought to use a new term for many readers, namely, risk of bias, for evaluating each included study in a systematic review. Previous papers 89 188 tended to use the term “quality.” When carrying out a systematic review we believe it is important to distinguish between quality and risk of bias and to focus on evaluating and reporting the latter. Quality is often the best the authors have been able to do. For example, authors may report the results of surgical trials in which blinding of the outcome assessors was not part of the trial’s conduct. Even though this may have been the best methodology the researchers were able to do, there are still theoretical grounds for believing that the study was susceptible to (risk of) bias.

Assessing the risk of bias should be part of the conduct and reporting of any systematic review. In all situations, we encourage systematic reviewers to think ahead carefully about what risks of bias (methodological and clinical) may have a bearing on the results of their systematic reviews.

For systematic reviewers, understanding the risk of bias on the results of studies is often difficult, because the report is only a surrogate of the actual conduct of the study. There is some suggestion 189 190 that the report may not be a reasonable facsimile of the study, although this view is not shared by all. 88 191 There are three main ways to assess risk of bias—individual components, checklists, and scales. There are a great many scales available, 192 although we caution against their use based on theoretical grounds 193 and emerging empirical evidence. 194 Checklists are less frequently used and potentially have the same problems as scales. We advocate using a component approach and one that is based on domains for which there is good empirical evidence and perhaps strong clinical grounds. The new Cochrane risk of bias tool 11 is one such component approach.

The Cochrane risk of bias tool consists of five items for which there is empirical evidence for their biasing influence on the estimates of an intervention’s effectiveness in randomised trials (sequence generation, allocation concealment, blinding, incomplete outcome data, and selective outcome reporting) and a catch-all item called “other sources of bias”. 11 There is also some consensus that these items can be applied for evaluation of studies across diverse clinical areas. 93 Other risk of bias items may be topic or even study specific—that is, they may stem from some peculiarity of the research topic or some special feature of the design of a specific study. These peculiarities need to be investigated on a case-by-case basis, based on clinical and methodological acumen, and there can be no general recipe. In all situations, systematic reviewers need to think ahead carefully about what aspects of study quality may have a bearing on the results.
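The domain-based bookkeeping described above can be sketched as follows. This is an informal illustration, not an implementation of the Cochrane tool: the three-level judgement labels, the function name, and the example trial are assumptions for this example.

```python
# The six items named in the text: five empirically supported domains
# plus the catch-all "other sources of bias" item.
DOMAINS = (
    "sequence generation",
    "allocation concealment",
    "blinding",
    "incomplete outcome data",
    "selective outcome reporting",
    "other sources of bias",
)

def count_high_risk(judgements):
    """Number of domains judged at high risk of bias for one study.

    judgements maps each domain to an assumed three-level label:
    "low", "high", or "unclear".
    """
    unknown = set(judgements) - set(DOMAINS)
    if unknown:
        raise ValueError(f"not a tool domain: {sorted(unknown)}")
    return sum(1 for d in DOMAINS if judgements.get(d) == "high")

# Hypothetical surgical trial in which outcome assessors were not blinded
trial = {
    "sequence generation": "low",
    "allocation concealment": "low",
    "blinding": "high",
    "incomplete outcome data": "low",
    "selective outcome reporting": "unclear",
    "other sources of bias": "low",
}
```

Keeping one such record per included study supports the transparent, component-level reporting recommended in items 12 and 19, rather than collapsing everything into a single quality score.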

Box 5: Whether to combine data

Deciding whether to combine data involves statistical, clinical, and methodological considerations. The statistical decisions are perhaps the most technical and evidence-based. These are more thoroughly discussed in box 6. The clinical and methodological decisions are generally based on discussions within the review team and may be more subjective.

Clinical considerations will be influenced by the question the review is attempting to address. Broad questions might provide more “license” to combine more disparate studies, such as whether “Ritalin is effective in increasing focused attention in people diagnosed with attention deficit hyperactivity disorder (ADHD).” Here authors might elect to combine reports of studies involving children and adults. If the clinical question is more focused, such as whether “Ritalin is effective in increasing classroom attention in previously undiagnosed ADHD children who have no comorbid conditions,” it is likely that different decisions regarding synthesis of studies are taken by authors. In any case authors should describe their clinical decisions in the systematic review report.

Deciding whether to combine data also has a methodological component. Reviewers may decide not to combine studies of low risk of bias with those of high risk of bias (see items 12 and 19). For example, for subjective outcomes, systematic review authors may not wish to combine assessments that were completed under blind conditions with those that were not.

For any particular question there may not be a “right” or “wrong” choice concerning synthesis, as such decisions are likely complex. However, as the choice may be subjective, authors should be transparent as to their key decisions and describe them for readers.

Box 6: Meta-analysis and assessment of consistency (heterogeneity)

Meta-analysis: statistical combination of the results of multiple studies.

If it is felt that studies should have their results combined statistically, other issues must be considered because there are many ways to conduct a meta-analysis. Different effect measures can be used for both binary and continuous outcomes (see item 13). Also, there are two commonly used statistical models for combining data in a meta-analysis. 195 The fixed-effect model assumes that there is a common treatment effect for all included studies; 196 it is assumed that the observed differences in results across studies reflect random variation. 196 The random-effects model assumes that there is no common treatment effect for all included studies but rather that the variation of the effects across studies follows a particular distribution. 197 In a random-effects model it is believed that the included studies represent a random sample from a larger population of studies addressing the question of interest. 198

There is no consensus about whether to use fixed- or random-effects models, and both are in wide use. The following differences have influenced some researchers regarding their choice between them. The random-effects model gives more weight to the results of smaller trials than does the fixed-effect analysis, which may be undesirable as small trials may be inferior and most prone to publication bias. The fixed-effect model considers only within-study variability, whereas the random-effects model considers both within- and between-study variability. This is why a fixed-effect analysis tends to give narrower confidence intervals (that is, provides greater precision) than a random-effects analysis. 110 196 199 In the absence of any between-study heterogeneity, the fixed- and random-effects estimates will coincide.

In addition, there are different methods for performing both types of meta-analysis. 200 Common fixed-effect approaches are Mantel-Haenszel and inverse variance, whereas random-effects analyses usually use the DerSimonian and Laird approach, although other methods exist, including Bayesian meta-analysis. 201
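A minimal numerical sketch of two of the approaches named above, inverse-variance fixed effect and DerSimonian-Laird random effects, may make the distinction concrete. The function names are mine, and this omits refinements found in established meta-analysis software:

```python
def fixed_effect(effects, variances):
    """Inverse-variance fixed-effect meta-analysis.

    Returns the pooled estimate and its variance; each study is
    weighted by the reciprocal of its within-study variance.
    """
    w = [1.0 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return pooled, 1.0 / sum(w)

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis with the DerSimonian-Laird tau^2.

    Returns the pooled estimate, its variance, and the estimated
    between-study variance tau^2 (floored at zero).
    """
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]  # weights absorb tau^2
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, 1.0 / sum(w_star), tau2
```

Because the random-effects weights include the between-study variance, the pooled variance is larger and the confidence interval wider, and smaller studies receive relatively more weight; when the studies are homogeneous, tau² is floored at zero and the two estimates coincide, as noted above.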

In the presence of demonstrable between-study heterogeneity (see below), some consider that the use of a fixed-effect analysis is counterintuitive because its main assumption is violated. Others argue that it is inappropriate to conduct any meta-analysis when there is unexplained variability across trial results. If the reviewers decide not to combine the data quantitatively, a danger is that eventually they may end up using quasi-quantitative rules of poor validity (such as vote counting of how many studies have nominally significant results) for interpreting the evidence. Statistical methods to combine data exist for almost any complex situation that may arise in a systematic review, but one has to be aware of their assumptions and limitations to avoid misapplying or misinterpreting these methods.

Assessment of consistency (heterogeneity)

We expect some variation (inconsistency) in the results of different studies due to chance alone. Variability in excess of that due to chance reflects true differences in the results of the trials, and is called “heterogeneity.” The conventional statistical approach to evaluating heterogeneity is a χ² test (Cochran’s Q), but it has low power when there are few studies and excessive power when there are many studies. 202 By contrast, the I² statistic quantifies the amount of variation in results across studies beyond that expected by chance and so is preferable to Q. 202 203 I² represents the percentage of the total variation in estimated effects across studies that is due to heterogeneity rather than to chance; some authors consider an I² value less than 25% as low. 202 However, I² also suffers from large uncertainty in the common situation where only a few studies are available, 204 and reporting the uncertainty in I² (such as 95% confidence interval) may be helpful. 145 When there are few studies, inferences about heterogeneity should be cautious.

When considerable heterogeneity is observed, it is advisable to consider possible reasons. 205 In particular, the heterogeneity may be due to differences between subgroups of studies (see item 16). Also, data extraction errors are a common cause of substantial heterogeneity in results with continuous outcomes. 139
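The two statistics just described can be computed directly from the study effects and their variances. A minimal sketch (the helper names are mine), using the standard definitions Q = sum of w_i(y_i - pooled)^2 and I² = max(0, (Q - (k - 1))/Q) x 100%:

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted sum of squared deviations of each study's
    result from the inverse-variance fixed-effect pooled estimate."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))

def i_squared(q, k):
    """I^2: percentage of total variation across k studies attributable
    to heterogeneity rather than chance.

    When Q falls below its expectation of k - 1 under homogeneity,
    the value is floored at 0%.
    """
    if q <= 0.0:
        return 0.0
    return max(0.0, (q - (k - 1)) / q) * 100.0
```

For three toy studies with effects 0.5, 0.1, 0.9 and variances 0.04, 0.05, 0.06, this gives Q of about 5.84 on 2 degrees of freedom and I² of about 66%, illustrating why a point estimate of I² from so few studies should be reported with its uncertainty.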

Box 7: Bias caused by selective publication of studies or results within studies

Systematic reviews aim to incorporate information from all relevant studies. The absence of information from some studies may pose a serious threat to the validity of a review. Data may be incomplete because some studies were not published, or because of incomplete or inadequate reporting within a published article. These problems are often summarised as “publication bias,” although the bias arises from non-publication of full studies and selective publication of results in relation to their findings. Non-publication of research findings dependent on the actual results is an important risk of bias to a systematic review and meta-analysis.

Missing studies

Several empirical investigations have shown that the findings from clinical trials are more likely to be published if the results are statistically significant (P<0.05) than if they are not. 125 206 207 For example, of 500 oncology trials with more than 200 participants for which preliminary results were presented at a conference of the American Society of Clinical Oncology, 81% with P<0.05 were published in full within five years compared with only 68% of those with P>0.05. 208

Also, among published studies, those with statistically significant results are published sooner than those with non-significant findings. 209 When some studies are missing for these reasons, the available results will be biased towards exaggerating the effect of an intervention.

Missing outcomes

In many systematic reviews only some of the eligible studies (often a minority) can be included in a meta-analysis for a specific outcome. For some studies, the outcome may not be measured or may be measured but not reported. The former will not lead to bias, but the latter could.

Evidence is accumulating that selective reporting bias is widespread and of considerable importance. 42 43 In addition, data for a given outcome may be analysed in multiple ways and the choice of presentation influenced by the results obtained. In a study of 102 randomised trials, comparison of published reports with trial protocols showed that a median of 38% of efficacy outcomes and 50% of safety outcomes per trial were not available for meta-analysis. Statistically significant outcomes had higher odds of being fully reported in publications when compared with non-significant outcomes for both efficacy (pooled odds ratio 2.4 (95% confidence interval 1.4 to 4.0)) and safety (4.7 (1.8 to 12)) data. Several other studies have had similar findings. 210 211

Detection of missing information

Missing studies may increasingly be identified from trials registries. Evidence of missing outcomes may come from comparison with the study protocol, if available, or by careful examination of published articles. 11 Study publication bias and selective outcome reporting are difficult to exclude or verify from the available results, especially when few studies are available.

If the available data are affected by either (or both) of the above biases, smaller studies would tend to show larger estimates of the effects of the intervention. Thus one possibility is to investigate the relation between effect size and sample size (or more specifically, precision of the effect estimate). Graphical methods, especially the funnel plot, 212 and analytic methods (such as Egger’s test) are often used, 213 214 215 although their interpretation can be problematic. 216 217 Strictly speaking, such analyses investigate “small study bias”; there may be many reasons why smaller studies have systematically different effect sizes than larger studies, of which reporting bias is just one. 218 Several alternative tests for bias have also been proposed, beyond the ones testing small study bias, 215 219 220 but none can be considered a gold standard. Although evidence that smaller studies had larger estimated effects than large ones may suggest the possibility that the available evidence is biased, misinterpretation of such data is common. 123
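One of the analytic methods mentioned above, Egger’s regression test, regresses the standardised effect (effect divided by its standard error) on precision (one divided by the standard error); an intercept far from zero suggests small study effects. A minimal ordinary least squares sketch follows (the function name is mine, and real analyses should use an established package and heed the interpretive caveats above):

```python
import math

def eggers_test(effects, standard_errors):
    """Egger's regression test for funnel plot asymmetry.

    Fits standardised effect (effect / SE) against precision (1 / SE)
    by ordinary least squares and returns the intercept together with
    its t statistic on n - 2 degrees of freedom.
    """
    y = [e / se for e, se in zip(effects, standard_errors)]
    x = [1.0 / se for se in standard_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual variance and the standard error of the intercept
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se_intercept = math.sqrt(s2 * (1.0 / n + mx * mx / sxx))
    return intercept, intercept / se_intercept
```

As the text cautions, a significant intercept demonstrates only small study bias, of which reporting bias is one possible explanation among many, and the test behaves poorly when few studies are available.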

Cite this as: BMJ 2009;339:b2700

The following people contributed to this paper: Doug Altman, Centre for Statistics in Medicine (Oxford, UK); Gerd Antes, University Hospital Freiburg (Freiburg, Germany); David Atkins, Health Services Research and Development Service, Veterans Health Administration (Washington DC, USA); Virginia Barbour, PLoS Medicine (Cambridge, UK); Nick Barrowman, Children’s Hospital of Eastern Ontario (Ottawa, Canada); Jesse A Berlin, Johnson & Johnson Pharmaceutical Research and Development (Titusville NJ, USA); Jocalyn Clark, PLoS Medicine (at the time of writing, BMJ, London); Mike Clarke, UK Cochrane Centre (Oxford, UK) and School of Nursing and Midwifery, Trinity College (Dublin, Ireland); Deborah Cook, Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University (Hamilton, Canada); Roberto D’Amico, Università di Modena e Reggio Emilia (Modena, Italy) and Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri (Milan, Italy); Jonathan J Deeks, University of Birmingham (Birmingham, UK); P J Devereaux, Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University (Hamilton, Canada); Kay Dickersin, Johns Hopkins Bloomberg School of Public Health (Baltimore MD, USA); Matthias Egger, Department of Social and Preventive Medicine, University of Bern (Bern, Switzerland); Edzard Ernst, Peninsula Medical School (Exeter, UK); Peter C Gøtzsche, Nordic Cochrane Centre (Copenhagen, Denmark); Jeremy Grimshaw, Ottawa Hospital Research Institute (Ottawa, Canada); Gordon Guyatt, Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University (Hamilton, Canada); Julian Higgins, MRC Biostatistics Unit (Cambridge, UK); John P A Ioannidis, University of Ioannina Campus (Ioannina, Greece); Jos Kleijnen, Kleijnen Systematic Reviews (York, UK) and School for Public Health and Primary Care (CAPHRI), University of Maastricht (Maastricht, Netherlands); Tom Lang, Tom Lang Communications and Training (Davis CA, USA); Alessandro Liberati, Università di Modena e Reggio Emilia (Modena, Italy) and Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri (Milan, Italy); Nicola Magrini, NHS Centre for the Evaluation of the Effectiveness of Health Care—CeVEAS (Modena, Italy); David McNamee, Lancet (London, UK); David Moher, Ottawa Methods Centre, Ottawa Hospital Research Institute (Ottawa, Canada); Lorenzo Moja, Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri (Milan, Italy); Maryann Napoli, Center for Medical Consumers (New York, USA); Cynthia Mulrow, Annals of Internal Medicine (Philadelphia PA, USA); Andy Oxman, Norwegian Health Services Research Centre (Oslo, Norway); Ba’ Pham, Toronto Health Economics and Technology Assessment Collaborative (Toronto, Canada) (at the time of the first meeting of the group, GlaxoSmithKline Canada, Mississauga, Canada); Drummond Rennie, University of California San Francisco (San Francisco CA, USA); Margaret Sampson, Children’s Hospital of Eastern Ontario (Ottawa, Canada); Kenneth F Schulz, Family Health International (Durham NC, USA); Paul G Shekelle, Southern California Evidence Based Practice Center (Santa Monica CA, USA); Jennifer Tetzlaff, Ottawa Methods Centre, Ottawa Hospital Research Institute (Ottawa, Canada); David Tovey, Cochrane Library, Cochrane Collaboration (Oxford, UK) (at the time of the first meeting of the group, BMJ, London); Peter Tugwell, Institute of Population Health, University of Ottawa (Ottawa, Canada).

Lorenzo Moja helped prepare the manuscript and its several updates, and assisted with the preparation of the reference list. AL is the guarantor of the manuscript.

Competing interests: None declared.

Provenance and peer review: Not commissioned; externally peer reviewed.

In order to encourage dissemination of the PRISMA statement, this article is freely accessible on bmj.com and will also be published in PLoS Medicine , Annals of Internal Medicine , Journal of Clinical Epidemiology , and Open Medicine . The authors jointly hold the copyright of this article. For details on further use, see the PRISMA website ( www.prisma-statement.org/ ).

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  • Canadian Institutes of Health Research (2006) Randomized controlled trials registration/application checklist (12/2006). Available: http://www.cihr-irsc.gc.ca/e/documents/rct_reg_e.pdf. Accessed 26 May 2009.
  • Young C, Horton R. Putting clinical trials into context. Lancet 2005;366:107-108.
  • Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med 2007;4:e78. doi:10.1371/journal.pmed.0040078.
  • Dixon E, Hameed M, Sutherland F, Cook DJ, Doig C. Evaluating meta-analyses in the general surgical literature: A critical appraisal. Ann Surg 2005;241:450-459.
  • Hemels ME, Vicente C, Sadri H, Masson MJ, Einarson TR. Quality assessment of meta-analyses of RCTs of pharmacotherapy in major depressive disorder. Curr Med Res Opin 2004;20:477-484.
  • Jin W, Yu R, Li W, Youping L, Ya L, et al. The reporting quality of meta-analyses improves: A random sampling study. J Clin Epidemiol 2008;61:770-775.
  • Moher D, Simera I, Schulz KF, Hoey J, Altman DG. Helping editors, peer reviewers and authors improve the clarity, completeness and transparency of reporting health research. BMC Med 2008;6:13.
  • Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999;354:1896-1900.
  • Green S, Higgins JPT, Alderson P, Clarke M, Mulrow CD, et al. Chapter 1: What is a systematic review? In: Higgins JPT, Green S, eds. Cochrane handbook for systematic reviews of interventions version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available: http://www.cochrane-handbook.org/. Accessed 26 May 2009.
  • Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, et al. GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924-926.
  • Higgins JPT, Altman DG. Chapter 8: Assessing risk of bias in included studies. In: Higgins JPT, Green S, eds. Cochrane handbook for systematic reviews of interventions version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available: http://www.cochrane-handbook.org/. Accessed 26 May 2009.
  • Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA Statement. PLoS Med 2009;6:e1000097. doi:10.1371/journal.pmed.1000097.
  • Atkins D, Fink K, Slutsky J. Better information for better health care: The Evidence-based Practice Center program and the Agency for Healthcare Research and Quality. Ann Intern Med 2005;142:1035-1041.
  • Helfand M, Balshem H. Principles for developing guidance: AHRQ and the effective health-care program. J Clin Epidemiol 2009; in press.
  • Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available: http://www.cochrane-handbook.org/. Accessed 26 May 2009.
  • Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: University of York, 2009. Available: http://www.york.ac.uk/inst/crd/systematic_reviews_book.htm. Accessed 26 May 2009.
  • Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, et al. The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Ann Intern Med 2001;134:663-694.
  • Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, et al. The STARD statement for reporting studies of diagnostic accuracy: Explanation and elaboration. Clin Chem 2003;49:7-18.
  • Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and elaboration. PLoS Med 2007;4:e297. doi:10.1371/journal.pmed.0040297.
  • Barker A, Maratos EC, Edmonds L, Lim E. Recurrence rates of video-assisted thoracoscopic versus open surgery in the prevention of recurrent pneumothoraces: A systematic review of randomised and non-randomised trials. Lancet 2007;370:329-335.
  • Bjelakovic G, Nikolova D, Gluud LL, Simonetti RG, Gluud C. Mortality in randomized trials of antioxidant supplements for primary and secondary prevention: Systematic review and meta-analysis. JAMA 2007;297:842-857.
  • Montori VM, Wilczynski NL, Morgan D, Haynes RB. Optimal search strategies for retrieving systematic reviews from Medline: Analytical survey. BMJ 2005;330:68.
  • Bischoff-Ferrari HA, Willett WC, Wong JB, Giovannucci E, Dietrich T, et al. Fracture prevention with vitamin D supplementation: A meta-analysis of randomized controlled trials. JAMA 2005;293:2257-2264.
  • Hopewell S, Clarke M, Moher D, Wager E, Middleton P, et al. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet 2008;371:281-283.
  • Hopewell S, Clarke M, Moher D, Wager E, Middleton P, et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: Explanation and elaboration. PLoS Med 2008;5:e20. doi:10.1371/journal.pmed.0050020.
  • Haynes RB, Mulrow CD, Huth EJ, Altman DG, Gardner MJ. More informative abstracts revisited. Ann Intern Med 1990;113:69-76.
  • Mulrow CD, Thacker SB, Pugh JA. A proposal for more informative abstracts of review articles. Ann Intern Med 1988;108:613-615.
  • Froom P, Froom J. Deficiencies in structured medical abstracts. J Clin Epidemiol 1993;46:591-594.
  • Hartley J. Clarifying the abstracts of systematic literature reviews. Bull Med Libr Assoc 2000;88:332-337.
  • Hartley J, Sydes M, Blurton A. Obtaining information accurately and quickly: Are structured abstracts more efficient? J Infor Sci 1996;22:349-356.
  • Pocock SJ, Hughes MD, Lee RJ. Statistical problems in the reporting of clinical trials. A survey of three medical journals. N Engl J Med 1987;317:426-432.
  • Taddio A, Pain T, Fassos FF, Boon H, Ilersich AL, et al. Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. CMAJ 1994;150:1611-1615.
  • Harris KC, Kuramoto LK, Schulzer M, Retallack JE. Effect of school-based physical activity interventions on body mass index in children: A meta-analysis. CMAJ 2009;180:719-726.
  • James MT, Conley J, Tonelli M, Manns BJ, MacRae J, et al. Meta-analysis: Antibiotics for prophylaxis against hemodialysis catheter-related infections. Ann Intern Med 2008;148:596-605.
  • Counsell C. Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med 1997;127:380-387.
  • Gotzsche PC. Why we need a broad perspective on meta-analysis. It may be crucially important for patients. BMJ 2000;321:585-586.
  • Grossman P, Niemann L, Schmidt S, Walach H. Mindfulness-based stress reduction and health benefits. A meta-analysis. J Psychosom Res 2004;57:35-43.
  • Brunton G, Green S, Higgins JPT, Kjeldstrøm M, Jackson N, et al. Chapter 2: Preparing a Cochrane review. In: Higgins JPT, Green S, eds. Cochrane handbook for systematic reviews of interventions version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available: http://www.cochrane-handbook.org/. Accessed 26 May 2009.
  • Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. Systematic reviews of trials and other studies. Health Technol Assess 1998;2:1-276.
  • Ioannidis JP, Rosenberg PS, Goedert JJ, O’Brien TR. Commentary: meta-analysis of individual participants’ data in genetic epidemiology. Am J Epidemiol 2002;156:204-210.
  • Stewart LA, Clarke MJ. Practical methodology of meta-analyses (overviews) using updated individual patient data. Cochrane Working Group. Stat Med 1995;14:2057-2079.
  • Chan AW, Hrobjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA 2004;291:2457-2465.
  • Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 2008;3:e3081. doi:10.1371/journal.pone.0003081.
  • Silagy CA, Middleton P, Hopewell S. Publishing protocols of systematic reviews: Comparing what was done to what was planned. JAMA 2002;287:2831-2834.
  • Centre for Reviews and Dissemination. Research projects. York: University of York, 2009. Available: http://www.crd.york.ac.uk/crdweb. Accessed 26 May 2009.
  • The Joanna Briggs Institute. Protocols & work in progress, 2009. Available: http://www.joannabriggs.edu.au/pubs/systematic_reviews_prot.php. Accessed 26 May 2009.
  • Bagshaw SM, McAlister FA, Manns BJ, Ghali WA. Acetylcysteine in the prevention of contrast-induced nephropathy: A case study of the pitfalls in the evolution of evidence. Arch Intern Med 2006;166:161-166.
  • Biondi-Zoccai GG, Lotrionte M, Abbate A, Testa L, Remigi E, et al. Compliance with QUOROM and quality of reporting of overlapping meta-analyses on the role of acetylcysteine in the prevention of contrast associated nephropathy: Case study. BMJ 2006;332:202-209.
  • Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analyses of randomized controlled trials. N Engl J Med 1987;316:450-455.
  • Schroth RJ, Hitchon CA, Uhanova J, Noreddin A, Taback SP, et al. Hepatitis B vaccination for patients with chronic renal failure. Cochrane Database Syst Rev 2004;(3):CD003775. doi:10.1002/14651858.CD003775.pub2.
  • Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, et al. Language bias in randomised controlled trials published in English and German. Lancet 1997;350:326-329.
  • Gregoire G, Derderian F, Le Lorier J. Selecting the language of the publications included in a meta-analysis: Is there a Tower of Babel bias? J Clin Epidemiol 1995;48:159-163.
  • Jüni P, Holenstein F, Sterne J, Bartlett C, Egger M. Direction and impact of language bias in meta-analyses of controlled trials: Empirical study. Int J Epidemiol 2002;31:115-123.
  • Moher D, Pham B, Klassen TP, Schulz KF, Berlin JA, et al. What contributions do languages other than English make on the results of meta-analyses? J Clin Epidemiol 2000;53:964-972.
  • Pan Z, Trikalinos TA, Kavvoura FK, Lau J, Ioannidis JP. Local literature bias in genetic epidemiology: an empirical evaluation of the Chinese literature. PLoS Med 2005;2:e334. doi:10.1371/journal.pmed.0020334.
  • Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev 2007;(2):MR000010. doi:10.1002/14651858.MR000010.pub3.
  • Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B. Evidence b(i)ased medicine—Selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 2003;326:1171-1173.
  • Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical assessment of effect of publication bias on meta-analyses. BMJ 2000;320:1574-1577.
  • Gotzsche PC. Believability of relative risks and odds ratios in abstracts: Cross sectional study. BMJ 2006;333:231-234.
  • Bhandari M, Devereaux PJ, Guyatt GH, Cook DJ, Swiontkowski MF, et al. An observational study of orthopaedic abstracts and subsequent full-text publications. J Bone Joint Surg Am 2002;84-A:615-621.
  • Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: Differences between data presented in conferences and journals. FASEB J 2005;19:673-680.
  • Toma M, McAlister FA, Bialy L, Adams D, Vandermeer B, et al. Transition from meeting abstract to full-length journal article for randomized controlled trials. JAMA 2006;295:1281-1287.
  • Saunders Y, Ross JR, Broadley KE, Edmonds PM, Patel S. Systematic review of bisphosphonates for hypercalcaemia of malignancy. Palliat Med 2004;18:418-431.
  • Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, et al. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med 2007;147:224-233.
  • Bergerhoff K, Ebrahim S, Paletta G. Do we need to consider ‘in process citations’ for search strategies? Ottawa, Ontario, Canada: 12th Cochrane Colloquium, 2-6 October 2004. Available: http://www.cochrane.org/colloquia/abstracts/ottawa/P-039.htm. Accessed 26 May 2009.
  • Zhang L, Sampson M, McGowan J. Reporting of the role of expert searcher in Cochrane reviews. Evid Based Libr Info Pract 2006;1:3-16.
  • Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-260.
  • Alejandria MM, Lansang MA, Dans LF, Mantaring JB. Intravenous immunoglobulin for treating sepsis and septic shock. Cochrane Database Syst Rev 2002;(1):CD001090. doi:10.1002/14651858.CD001090.
  • Golder S, McIntosh HM, Duffy S, Glanville J. Developing efficient search strategies to identify reports of adverse effects in MEDLINE and EMBASE. Health Info Libr J 2006;23:3-12.
  • Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, et al. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol 2009; e-pub 18 February 2009.
  • Flores-Mir C, Major MP, Major PW. Search and selection methodology of systematic reviews in orthodontics (2000-2004). Am J Orthod Dentofacial Orthop 2006;130:214-217.
  • Major MP, Major PW, Flores-Mir C. An evaluation of search and selection methods used in dental systematic reviews published in English. J Am Dent Assoc 2006;137:1252-1257.
  • Major MP, Major PW, Flores-Mir C. Benchmarking of reported search and selection methods of systematic reviews by dental speciality. Evid Based Dent 2007;8:66-70.
  • Shah MR, Hasselblad V, Stevenson LW, Binanay C, O’Connor CM, et al. Impact of the pulmonary artery catheter in critically ill patients: Meta-analysis of randomized clinical trials. JAMA 2005;294:1664-1670.
  • Edwards P, Clarke M, DiGuiseppi C, Pratap S, Roberts I, et al. Identification of randomized controlled trials in systematic reviews: Accuracy and reliability of screening records. Stat Med 2002;21:1635-1640.
  • Cooper HM, Ribble RG. Influences on the outcome of literature searches for integrative research reviews. Knowledge 1989;10:179-201.
  • Mistiaen P, Poot E. Telephone follow-up, initiated by a hospital-based health professional, for postdischarge problems in patients discharged from hospital to home. Cochrane Database Syst Rev 2006;(4):CD004510. doi:10.1002/14651858.CD004510.pub3.
  • Jones AP, Remmington T, Williamson PR, Ashby D, Smyth RL. High prevalence but low impact of data extraction and reporting errors were found in Cochrane systematic reviews. J Clin Epidemiol 2005;58:741-742.
  • Clarke M, Hopewell S, Juszczak E, Eisinga A, Kjeldstrom M. Compression stockings for preventing deep vein thrombosis in airline passengers. Cochrane Database Syst Rev 2006;(2):CD004002. doi:10.1002/14651858.CD004002.pub2.
  • Tramer MR, Reynolds DJ, Moore RA, McQuay HJ. Impact of covert duplicate publication on meta-analysis: A case study. BMJ 1997;315:635-640.
  • von Elm E, Poglia G, Walder B, Tramer MR. Different patterns of duplicate publication: An analysis of articles used in systematic reviews. JAMA 2004;291:974-980.
  • Gotzsche PC. Multiple publication of reports of drug trials. Eur J Clin Pharmacol 1989;36:429-432.
  • Allen C, Hopewell S, Prentice A. Non-steroidal anti-inflammatory drugs for pain in women with endometriosis. Cochrane Database Syst Rev 2005;(4):CD004753. doi:10.1002/14651858.CD004753.pub2.
  • Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ 2008;336:1472-1474.
  • Tracz MJ, Sideras K, Bolona ER, Haddad RM, Kennedy CC, et al. Testosterone use in men and its effects on bone health. A systematic review and meta-analysis of randomized placebo-controlled trials. J Clin Endocrinol Metab 2006;91:2011-2016.
  • Bucher HC, Hengstler P, Schindler C, Guyatt GH. Percutaneous transluminal coronary angioplasty versus medical treatment for non-acute coronary heart disease: Meta-analysis of randomised controlled trials. BMJ 2000;321:73-77.
  • Gluud LL. Bias in clinical intervention research. Am J Epidemiol 2006;163:493-501.
  • Pildal J, Hróbjartsson A, Jorgensen KJ, Hilden J, Altman DG, et al. Impact of allocation concealment on conclusions drawn from meta-analyses of randomized trials. Int J Epidemiol 2007;36:847-857.
  • Moja LP, Telaro E, D’Amico R, Moschetti I, Coe L, et al. Assessment of methodological quality of primary studies by systematic reviews: Results of the metaquality cross sectional study. BMJ 2005;330:1053.
  • Moher D, Jadad AR, Tugwell P. Assessing the quality of randomized controlled trials. Current issues and future directions. Int J Technol Assess Health Care 1996;12:195-208.
  • Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: A systematic review and annotated bibliography. Int J Epidemiol 2007;36:666-676.
  • Greenland S. Invited commentary: A critical look at some popular meta-analytic methods. Am J Epidemiol 1994;140:290-296.
  • Jüni P, Altman DG, Egger M. Systematic reviews in health care: Assessing the quality of controlled clinical trials. BMJ 2001;323:42-46.
  • Kunz R, Oxman AD. The unpredictability paradox: Review of empirical comparisons of randomised and non-randomised clinical trials. BMJ 1998;317:1185-1190.
  • Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, et al. Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA 2002;287:2973-2982.
  • Devereaux PJ, Beattie WS, Choi PT, Badner NH, Guyatt GH, et al. How strong is the evidence for the use of perioperative beta blockers in non-cardiac surgery? Systematic review and meta-analysis of randomised controlled trials. BMJ 2005;331:313-321.
  • Devereaux PJ, Bhandari M, Montori VM, Manns BJ, Ghali WA, et al. Double blind, you are the weakest link—Good-bye! ACP J Club 2002;136:A11.
  • van Nieuwenhoven CA, Buskens E, van Tiel FH, Bonten MJ. Relationship between methodological trial quality and the effects of selective digestive decontamination on pneumonia and mortality in critically ill patients. JAMA 2001;286:335-340.
  • Guyatt GH, Cook D, Devereaux PJ, Meade M, Straus S. Therapy. Users’ guides to the medical literature. AMA Press, 2002:55-79.
  • Sackett DL, Gent M. Controversy in counting and attributing events in clinical trials. N Engl J Med 1979;301:1410-1412.
  • Montori VM, Devereaux PJ, Adhikari NK, Burns KE, Eggert CH, et al. Randomized trials stopped early for benefit: A systematic review. JAMA 2005;294:2203-2209.
  • Guyatt GH, Devereaux PJ. Therapy and validity: The principle of intention-to-treat. In: Guyatt GH, Rennie DR, eds. Users’ guides to the medical literature. AMA Press, 2002:267-273.
  • Berlin JA. Does blinding of readers affect the results of meta-analyses? University of Pennsylvania Meta-analysis Blinding Study Group. Lancet 1997;350:185-186.
  • Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, et al. Assessing the quality of reports of randomized clinical trials: Is blinding necessary? Control Clin Trials 1996;17:1-12.
  • Pittas AG, Siegel RD, Lau J. Insulin therapy for critically ill hospitalized patients: A meta-analysis of randomized controlled trials. Arch Intern Med 2004;164:2005-2011.
  • Lakhdar R, Al-Mallah MH, Lanfear DE. Safety and tolerability of angiotensin-converting enzyme inhibitor versus the combination of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker in patients with left ventricular dysfunction: A systematic review and meta-analysis of randomized controlled trials. J Card Fail 2008;14:181-188.
  • Bobat R, Coovadia H, Stephen C, Naidoo KL, McKerrow N, et al. Safety and efficacy of zinc supplementation for children with HIV-1 infection in South Africa: A randomised double-blind placebo-controlled trial. Lancet 2005;366:1862-1867.
  • Deeks JJ, Altman DG. Effect measures for meta-analysis of trials with binary outcomes. In: Egger M, Smith GD, Altman DG, eds. Systematic reviews in healthcare: Meta-analysis in context. 2nd edn. London: BMJ Publishing Group, 2001.
  • Deeks JJ. Issues in the selection of a summary statistic for meta-analysis of clinical trials with binary outcomes. Stat Med 2002;21:1575-1600.
  • Engels EA, Schmid CH, Terrin N, Olkin I, Lau J. Heterogeneity and statistical significance in meta-analysis: An empirical study of 125 meta-analyses. Stat Med 2000;19:1707-1728.
  • Tierney JF, Stewart LA, Ghersi D, Burdett S, Sydes MR. Practical methods for incorporating summary time-to-event data into meta-analysis. Trials 2007;8:16.
  • Michiels S, Piedbois P, Burdett S, Syz N, Stewart L, et al. Meta-analysis when only the median survival times are known: A comparison with individual patient data results. Int J Technol Assess Health Care 2005;21:119-125.
  • Briel M, Studer M, Glass TR, Bucher HC. Effects of statins on stroke prevention in patients with and without coronary heart disease: A meta-analysis of randomized controlled trials. Am J Med 2004;117:596-606.
  • Jones M, Schenkel B, Just J, Fallowfield L. Epoetin alfa improves quality of life in patients with cancer: Results of metaanalysis. Cancer 2004;101:1720-1732.
  • Elbourne DR, Altman DG, Higgins JP, Curtin F, Worthington HV, et al. Meta-analyses involving cross-over trials: Methodological issues. Int J Epidemiol 2002;31:140-149.
  • Follmann D, Elliott P, Suh I, Cutler J. Variance imputation for overviews of clinical trials with continuous response. J Clin Epidemiol 1992;45:769-773.
  • Wiebe N, Vandermeer B, Platt RW, Klassen TP, Moher D, et al. A systematic review identifies a lack of standardization in methods for handling missing variance data. J Clin Epidemiol 2006;59:342-353.
  • Hrobjartsson A, Gotzsche PC. Placebo interventions for all clinical conditions. Cochrane Database Syst Rev 2004;(2):CD003974. doi:10.1002/14651858.CD003974.pub2.
  • Shekelle PG, Morton SC, Maglione M, Suttorp M, Tu W, et al. Pharmacological and surgical treatment of obesity. Evid Rep Technol Assess (Summ) 2004:1-6.
  • Chan AW, Altman DG. Identifying outcome reporting bias in randomised trials on PubMed: Review of publications and survey of authors. BMJ 2005;330:753.
  • Williamson PR, Gamble C. Identification and impact of outcome selection bias in meta-analysis. Stat Med 2005;24:1547-1561.
  • Williamson PR, Gamble C, Altman DG, Hutton JL. Outcome selection bias in meta-analysis. Stat Methods Med Res 2005;14:515-524.
  • Ioannidis JP, Trikalinos TA. The appropriateness of asymmetry tests for publication bias in meta-analyses: A large survey. CMAJ 2007;176:1091-1096.
  • Briel M, Schwartz GG, Thompson PL, de Lemos JA, Blazing MA, et al. Effects of early treatment with statins on short-term clinical outcomes in acute coronary syndromes: A meta-analysis of randomized controlled trials. JAMA 2006;295:2046-2056.
  • Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ. Publication and related biases. Health Technol Assess 2000;4:1-115.
  • Schmid CH, Stark PC, Berlin JA, Landais P, Lau J. Meta-regression detected associations between heterogeneous treatment effects and study-level, but not patient-level, factors. J Clin Epidemiol 2004;57:683-697.
  • Higgins JP, Thompson SG. Controlling the risk of spurious findings from meta-regression. Stat Med 2004;23:1663-1682.
  • Thompson SG, Higgins JP. Treating individuals 4: Can meta-analysis help target interventions at individuals most likely to benefit? Lancet 2005;365:341-346.
  • Uitterhoeve RJ, Vernooy M, Litjens M, Potting K, Bensing J, et al. Psychosocial interventions for patients with advanced cancer—A systematic review of the literature. Br J Cancer 2004;91:1050-1062.
  • Fuccio L, Minardi ME, Zagari RM, Grilli D, Magrini N, et al. Meta-analysis: Duration of first-line proton-pump inhibitor based triple therapy for Helicobacter pylori eradication. Ann Intern Med 2007;147:553-562.
  • Egger M, Smith GD. Bias in location and selection of studies. BMJ 1998;316:61-66.
  • ↵ Ravnskov U. Cholesterol lowering trials in coronary heart disease: Frequency of citation and outcome. BMJ 1992 ; 305 : 15 -19. OpenUrl Abstract / FREE Full Text
  • ↵ Hind D, Booth A. Do health technology assessments comply with QUOROM diagram guidance? An empirical study. BMC Med Res Methodol 2007 ; 7 : 49 . OpenUrl CrossRef PubMed
  • ↵ Curioni C, Andre C. Rimonabant for overweight or obesity. Cochrane Database Syst Rev 2006 ;(4):CD006162, doi:10.1002/14651858.CD006162.pub2.
  • DeCamp LR, Byerley JS, Doshi N, Steiner MJ. Use of antiemetic agents in acute gastroenteritis: A systematic review and meta-analysis. Arch Pediatr Adolesc Med 2008 ; 162 : 858 -865. OpenUrl CrossRef PubMed
  • Pakos EE, Ioannidis JP. Radiotherapy vs. nonsteroidal anti-inflammatory drugs for the prevention of heterotopic ossification after major hip procedures: A meta-analysis of randomized trials. Int J Radiat Oncol Biol Phys 2004 ; 60 : 888 -895. OpenUrl CrossRef PubMed Web of Science
  • ↵ Skalsky K, Yahav D, Bishara J, Pitlik S, Leibovici L, et al. Treatment of human brucellosis: Systematic review and meta-analysis of randomised controlled trials. BMJ 2008 ; 336 : 701 -704. OpenUrl Abstract / FREE Full Text
  • ↵ Altman DG, Cates C. The need for individual trial results in reports of systematic reviews. BMJ 2001 . Rapid response.
  • ↵ Gotzsche PC, Hrobjartsson A, Maric K, Tendal B. Data extraction errors in meta-analyses that use standardized mean differences. JAMA 2007 ; 298 : 430 -437. OpenUrl CrossRef PubMed Web of Science
  • ↵ Lewis S, Clarke M. Forest plots: Trying to see the wood and the trees. BMJ 2001 ; 322 : 1479 -1480. OpenUrl FREE Full Text
  • ↵ Papanikolaou PN, Ioannidis JP. Availability of large-scale evidence on specific harms from systematic reviews of randomized trials. Am J Med 2004 ; 117 : 582 -589. OpenUrl CrossRef PubMed Web of Science
  • ↵ Duffett M, Choong K, Ng V, Randolph A, Cook DJ. Surfactant therapy for acute respiratory failure in children: A systematic review and meta-analysis. Crit Care 2007 ; 11 : R66 . OpenUrl CrossRef PubMed
  • ↵ Balk E, Raman G, Chung M, Ip S, Tatsioni A, et al. Effectiveness of management strategies for renal artery stenosis: A systematic review. Ann Intern Med 2006 ; 145 : 901 -912. OpenUrl PubMed Web of Science
  • ↵ Palfreyman S, Nelson EA, Michaels JA. Dressings for venous leg ulcers: Systematic review and meta-analysis. BMJ 2007 ; 335 : 244 . OpenUrl Abstract / FREE Full Text
  • ↵ Ioannidis JP, Patsopoulos NA, Evangelou E. Uncertainty in heterogeneity estimates in meta-analyses. BMJ 2007 ; 335 : 914 -916. OpenUrl FREE Full Text
  • ↵ Appleton KM, Hayward RC, Gunnell D, Peters TJ, Rogers PJ, et al. Effects of n-3 long-chain polyunsaturated fatty acids on depressed mood: systematic review of published trials. Am J Clin Nutr 2006 ; 84 : 1308 -1316. OpenUrl Abstract / FREE Full Text
  • ↵ Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, et al. Initial severity and antidepressant benefits: A meta-analysis of data submitted to the Food and Drug Administration. PLoS Med 2008 ; 5 : e45 . doi:10.1371/journal.pmed.0050045 OpenUrl CrossRef PubMed
  • ↵ Reichenbach S, Sterchi R, Scherer M, Trelle S, Burgi E, et al. Meta-analysis: Chondroitin for osteoarthritis of the knee or hip. Ann Intern Med 2007 ; 146 : 580 -590. OpenUrl CrossRef PubMed Web of Science
  • ↵ Hodson EM, Craig JC, Strippoli GF, Webster AC. Antiviral medications for preventing cytomegalovirus disease in solid organ transplant recipients. Cochrane Database Syst Rev 2008 ;(2):CD003774, doi:10.1002/14651858.CD003774.pub3.
  • ↵ Thompson SG, Higgins JP. How should meta-regression analyses be undertaken and interpreted? Stat Med 2002 ; 21 : 1559 -1573. OpenUrl CrossRef PubMed Web of Science
  • ↵ Chan AW, Krleza-Jeric K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ 2004 ; 171 : 735 -740. OpenUrl Abstract / FREE Full Text
  • ↵ Hahn S, Williamson PR, Hutton JL, Garner P, Flynn EV. Assessing the potential for bias in meta-analysis due to selective reporting of subgroup analyses within studies. Stat Med 2000 ; 19 : 3325 -3336. OpenUrl CrossRef PubMed Web of Science
  • ↵ Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Eval Health Prof 2006 ; 29 : 126 -153. OpenUrl Abstract / FREE Full Text
  • ↵ Liberati A, D’Amico R, Pifferi, Torri V, Brazzi L. Antibiotic prophylaxis to reduce respiratory tract infections and mortality in adults receiving intensive care. Cochrane Database Syst Rev 2004 ;(1):CD000022, doi:10.1002/14651858.CD000022.pub2.
  • ↵ Gonzalez R, Zamora J, Gomez-Camarero J, Molinero LM, Banares R, et al. Meta-analysis: Combination endoscopic and drug therapy to prevent variceal rebleeding in cirrhosis. Ann Intern Med 2008 ; 149 : 109 -122. OpenUrl CrossRef PubMed Web of Science
  • ↵ D’Amico R, Pifferi S, Leonetti C, Torri V, Tinazzi A, et al. Effectiveness of antibiotic prophylaxis in critically ill adult patients: Systematic review of randomised controlled trials. BMJ 1998 ; 316 : 1275 -1285. OpenUrl Abstract / FREE Full Text
  • ↵ Olsen O, Middleton P, Ezzo J, Gotzsche PC, Hadhazy V, et al. Quality of Cochrane reviews: Assessment of sample from 1998. BMJ 2001 ; 323 : 829 -832. OpenUrl Abstract / FREE Full Text
  • ↵ Hopewell S, Wolfenden L, Clarke M. Reporting of adverse events in systematic reviews can be improved: Survey results. J Clin Epidemiol 2008 ; 61 : 597 -602. OpenUrl CrossRef PubMed Web of Science
  • ↵ Cook DJ, Reeve BK, Guyatt GH, Heyland DK, Griffith LE, et al. Stress ulcer prophylaxis in critically ill patients. Resolving discordant meta-analyses. JAMA 1996 ; 275 : 308 -314. OpenUrl CrossRef PubMed Web of Science
  • ↵ Jadad AR, Cook DJ, Browman GP. A guide to interpreting discordant systematic reviews. CMAJ 1997 ; 156 : 1411 -1416. OpenUrl Abstract / FREE Full Text
  • ↵ Clarke L, Clarke M, Clarke T. How useful are Cochrane reviews in identifying research needs? J Health Serv Res Policy 2007 ; 12 : 101 -103. OpenUrl Abstract / FREE Full Text
  • ↵ [No authors listed]. World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. JAMA 2000 ; 284 : 3043 -3045. OpenUrl CrossRef PubMed Web of Science
  • ↵ Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: A status report. J R Soc Med 2007 ; 100 : 187 -190. OpenUrl Abstract / FREE Full Text
  • ↵ Dube C, Rostom A, Lewin G, Tsertsvadze A, Barrowman N, et al. The use of aspirin for primary prevention of colorectal cancer: A systematic review prepared for the U.S. Preventive Services Task Force. Ann Intern Med 2007 ; 146 : 365 -375. OpenUrl CrossRef PubMed Web of Science
  • ↵ Critchley J, Bates I. Haemoglobin colour scale for anaemia diagnosis where there is no laboratory: A systematic review. Int J Epidemiol 2005 ; 34 : 1425 -1434. OpenUrl Abstract / FREE Full Text
  • ↵ Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ 2003 ; 326 : 1167 -1170. OpenUrl Abstract / FREE Full Text
  • ↵ Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. Association of funding and conclusions in randomized drug trials: A reflection of treatment effect or adverse events? JAMA 2003 ; 290 : 921 -928. OpenUrl CrossRef PubMed Web of Science
  • ↵ Peppercorn J, Blood E, Winer E, Partridge A. Association between pharmaceutical involvement and outcomes in breast cancer clinical trials. Cancer 2007 ; 109 : 1239 -1246. OpenUrl CrossRef PubMed Web of Science
  • ↵ Yank V, Rennie D, Bero LA. Financial ties and concordance between results and conclusions in meta-analyses: Retrospective cohort study. BMJ 2007 ; 335 : 1202 -1205. OpenUrl Abstract / FREE Full Text
  • ↵ Jorgensen AW, Hilden J, Gøtzsche PC. Cochrane reviews compared with industry supported meta-analyses and other meta-analyses of the same drugs: Systematic review. BMJ 2006 ; 333 : 782 . OpenUrl Abstract / FREE Full Text
  • ↵ Gotzsche PC, Hrobjartsson A, Johansen HK, Haahr MT, Altman DG, et al. Ghost authorship in industry-initiated randomised trials. PLoS Med 2007 ; 4 : e19 . doi:10.1371/journal.pmed.0040019 OpenUrl CrossRef PubMed
  • ↵ Akbari A, Mayhew A, Al-Alawi M, Grimshaw J, Winkens R, et al. Interventions to improve outpatient referrals from primary care to secondary care. Cochrane Database of Syst Rev 2008 ;(2):CD005471, doi:10.1002/14651858.CD005471.pub2.
  • ↵ Davies P, Boruch R. The Campbell Collaboration. BMJ 2001 ; 323 : 294 -295. OpenUrl FREE Full Text
  • ↵ Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review—A new method of systematic review designed for complex policy interventions. J Health Serv Res Policy 2005 ; 10 (Suppl 1): 21 -34. OpenUrl Abstract / FREE Full Text
  • ↵ Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O, et al. Storylines of research in diffusion of innovation: A meta-narrative approach to systematic review. Soc Sci Med 2005 ; 61 : 417 -430. OpenUrl CrossRef PubMed Web of Science
  • ↵ Lumley T. Network meta-analysis for indirect treatment comparisons. Stat Med 2002 ; 21 : 2313 -2324. OpenUrl CrossRef PubMed Web of Science
  • ↵ Salanti G, Higgins JP, Ades AE, Ioannidis JP. Evaluation of networks of randomized trials. Stat Methods Med Res 2008 ; 17 : 279 -301. OpenUrl Abstract / FREE Full Text
  • ↵ Altman DG, Moher D. [Developing guidelines for reporting healthcare research: scientific rationale and procedures.]. Med Clin (Barc) 2005 ; 125 (Suppl 1): 8 -13. OpenUrl CrossRef PubMed
  • ↵ Delaney A, Bagshaw SM, Ferland A, Manns B, Laupland KB, et al. A systematic evaluation of the quality of meta-analyses in the critical care literature. Crit Care 2005 ; 9 : R575 -582. OpenUrl CrossRef PubMed Web of Science
  • ↵ Altman DG, Simera I, Hoey J, Moher D, Schulz K. EQUATOR: Reporting guidelines for health research. Lancet 2008 ; 371 : 1149 -1150. OpenUrl CrossRef PubMed Web of Science
  • ↵ Plint AC, Moher D, Morrison A, Schulz K, Altman DG, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust 2006 ; 185 : 263 -267. OpenUrl PubMed Web of Science
  • ↵ Simera I, Altman DG, Moher D, Schulz KF, Hoey J. Guidelines for reporting health research: The EQUATOR network’s survey of guideline authors. PLoS Med 2008 ; 5 : e139 . doi:10.1371/journal.pmed.0050139 OpenUrl CrossRef PubMed
  • ↵ Last JM. A dictionary of epidemiology. Oxford: Oxford University Press & International Epidemiological Association, 2001.
  • ↵ Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 1992 ; 268 : 240 -248. OpenUrl CrossRef PubMed Web of Science
  • ↵ Oxman AD, Guyatt GH. The science of reviewing research. Ann N Y Acad Sci 1993 ; 703 : 125 -133; discussion 133-124. OpenUrl CrossRef PubMed Web of Science
  • ↵ O’Connor D, Green S, Higgins JPT. Chapter 5: Defining the review question and developing criteria for including studies. In: Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available: http://www.cochrane-handbook.org/ . Accessed 26 May 2009.
  • ↵ McDonagh M, Whiting P, Bradley M, Cooper J, Sutton A, et al. A systematic review of public water fluoridation. Protocol changes (Appendix M). NHS Centre for Reviews and Dissemination. York: University of York, 2000. Available: http://www.york.ac.uk/inst/crd/pdf/appm.pdf .. Accessed 26 May 2009.
  • ↵ Moher D, Cook DJ, Jadad AR, Tugwell P, Moher M, et al. Assessing the quality of reports of randomised trials: Implications for the conduct of meta-analyses. Health Technol Assess 1999 ; 3 : i -iv, 1-98. OpenUrl PubMed
  • ↵ Devereaux PJ, Choi PT, El-Dika S, Bhandari M, Montori VM, et al. An observational study found that authors of randomized controlled trials frequently use concealment of randomization and blinding, despite the failure to report these methods. J Clin Epidemiol 2004 ; 57 : 1232 -1236. OpenUrl CrossRef PubMed Web of Science
  • ↵ Soares HP, Daniels S, Kumar A, Clarke M, Scott C, et al. Bad reporting does not mean bad methods for randomised trials: Observational study of randomised controlled trials performed by the Radiation Therapy Oncology Group. BMJ 2004 ; 328 : 22 -24. OpenUrl Abstract / FREE Full Text
  • ↵ Liberati A, Himel HN, Chalmers TC. A quality assessment of randomized control trials of primary treatment of breast cancer. J Clin Oncol 1986 ; 4 : 942 -951. OpenUrl Abstract / FREE Full Text
  • ↵ Moher D, Jadad AR, Nichol G, Penman M, Tugwell P, et al. Assessing the quality of randomized controlled trials: An annotated bibliography of scales and checklists. Control Clin Trials 1995 ; 16 : 62 -73. OpenUrl CrossRef PubMed Web of Science
  • ↵ Greenland S, O’Rourke K. On the bias produced by quality scores in meta-analysis, and a hierarchical view of proposed solutions. Biostatistics 2001 ; 2 : 463 -471. OpenUrl Abstract
  • ↵ Jüni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 1999 ; 282 : 1054 -1060. OpenUrl CrossRef PubMed Web of Science
  • ↵ Fleiss JL. The statistical basis of meta-analysis. Stat Methods Med Res 1993 ; 2 : 121 -145. OpenUrl Abstract / FREE Full Text
  • ↵ Villar J, Mackey ME, Carroli G, Donner A. Meta-analyses in systematic reviews of randomized controlled trials in perinatal medicine: Comparison of fixed and random effects models. Stat Med 2001 ; 20 : 3635 -3647. OpenUrl CrossRef PubMed Web of Science
  • ↵ Lau J, Ioannidis JP, Schmid CH. Summing up evidence: One answer is not always enough. Lancet 1998 ; 351 : 123 -127. OpenUrl CrossRef PubMed Web of Science
  • ↵ DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials 1986 ; 7 : 177 -188. OpenUrl CrossRef PubMed Web of Science
  • ↵ Hunter JE, Schmidt FL. Fixed effects vs. random effects meta-analysis models: Implications for cumulative research knowledge. Int J Sel Assess 2000 ; 8 : 275 -292. OpenUrl CrossRef Web of Science
  • ↵ Deeks JJ, Altman DG, Bradburn MJ. Statistical methods for examining heterogeneity and combining results from several studies in meta-analysis. In: Egger M, Davey Smith G, Altman DG, eds. Systematic reviews in healthcare: Meta-analysis in context. London: BMJ Publishing Group, 2001:285-312.
  • ↵ Warn DE, Thompson SG, Spiegelhalter DJ. Bayesian random effects meta-analysis of trials with binary outcomes: Methods for the absolute risk difference and relative risk scales. Stat Med 2002 ; 21 : 1601 -1623. OpenUrl CrossRef PubMed Web of Science
  • ↵ Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ 2003 ; 327 : 557 -560. OpenUrl FREE Full Text
  • ↵ Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med 2002 ; 21 : 1539 -1558. OpenUrl CrossRef PubMed Web of Science
  • ↵ Huedo-Medina TB, Sanchez-Meca J, Marin-Martinez F, Botella J. Assessing heterogeneity in meta-analysis: Q statistic or I2 index? Psychol Methods 2006 ; 11 : 193 -206. OpenUrl CrossRef PubMed Web of Science
  • ↵ Thompson SG, Turner RM, Warn DE. Multilevel models for meta-analysis, and their application to absolute risk differences. Stat Methods Med Res 2001 ; 10 : 375 -392. OpenUrl Abstract / FREE Full Text
  • ↵ Dickersin K. Publication bias: Recognising the problem, understanding its origin and scope, and preventing harm. In: Rothstein HR, Sutton AJ, Borenstein M, eds. Publication bias in meta-analysis—Prevention, assessment and adjustments. West Sussex: John Wiley & Sons, 2005:356.
  • ↵ Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev 2007 ;(2):MR000005, doi:10.1002/14651858.MR000005.pub3.
  • ↵ Krzyzanowska MK, Pintilie M, Tannock IF. Factors associated with failure to publish large randomized trials presented at an oncology meeting. JAMA 2003 ; 290 : 495 -501. OpenUrl CrossRef PubMed Web of Science
  • ↵ Hopewell S, Clarke M. Methodologists and their methods. Do methodologists write up their conference presentations or is it just 15 minutes of fame? Int J Technol Assess Health Care 2001 ; 17 : 601 -603. OpenUrl PubMed Web of Science
  • ↵ Ghersi D. Issues in the design, conduct and reporting of clinical trials that impact on the quality of decision making. PhD thesis. Sydney: School of Public Health, Faculty of Medicine, University of Sydney, 2006.
  • ↵ von Elm E, Rollin A, Blumle A, Huwiler K, Witschi M, et al. Publication and non-publication of clinical trials: Longitudinal study of applications submitted to a research ethics committee. Swiss Med Wkly 2008 ; 138 : 197 -203. OpenUrl PubMed Web of Science
  • ↵ Sterne JA, Egger M. Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol 2001 ; 54 : 1046 -1055. OpenUrl CrossRef PubMed Web of Science
  • ↵ Harbord RM, Egger M, Sterne JA. A modified test for small-study effects in meta-analyses of controlled trials with binary endpoints. Stat Med 2006 ; 25 : 3443 -3457. OpenUrl CrossRef PubMed Web of Science
  • ↵ Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L. Comparison of two methods to detect publication bias in meta-analysis. JAMA 2006 ; 295 : 676 -680. OpenUrl CrossRef PubMed Web of Science
  • ↵ Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis: Prevention, assessment and adjustments. West Sussex: John Wiley & Sons, 2005.
  • ↵ Lau J, Ioannidis JP, Terrin N, Schmid CH, Olkin I. The case of the misleading funnel plot. BMJ 2006 ; 333 : 597 -600. OpenUrl FREE Full Text
  • ↵ Terrin N, Schmid CH, Lau J. In an empirical evaluation of the funnel plot, researchers could not visually identify publication bias. J Clin Epidemiol 2005 ; 58 : 894 -901. OpenUrl CrossRef PubMed Web of Science
  • ↵ Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997 ; 315 : 629 -634. OpenUrl Abstract / FREE Full Text
  • ↵ Ioannidis JP, Trikalinos TA. An exploratory test for an excess of significant findings. Clin Trials 2007 ; 4 : 245 -253. OpenUrl Abstract / FREE Full Text
  • ↵ Sterne JAC, Egger M, Moher D. Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S, eds. Cochrane handbook for systematic reviews of interventions version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available: http://www.cochrane-handbook.org/ . Accessed 26 May 2009.

literature review flow research

  • Open access
  • Published: 02 July 2024

Unravelling the complexity of ventilator-associated pneumonia: a systematic methodological literature review of diagnostic criteria and definitions used in clinical research

  • Markus Fally 1 ,
  • Faiuna Haseeb 2 , 3 ,
  • Ahmed Kouta 2 , 3 ,
  • Jan Hansel 3 , 4 ,
  • Rebecca C. Robey 2 , 3 ,
  • Thomas Williams 5 ,
  • Tobias Welte 6 ,
  • Timothy Felton 2 , 3 , 5 &
  • Alexander G. Mathioudakis 2 , 3  

Critical Care, volume 28, article number 214 (2024)


Ventilator-associated pneumonia (VAP) is a prevalent and grave hospital-acquired infection that affects mechanically ventilated patients. Diverse diagnostic criteria can significantly affect VAP research by complicating the identification of the condition, and may also impact clinical management.

We conducted this review to assess the diagnostic criteria and the definitions of the term “ventilator-associated” used in randomised controlled trials (RCTs) of VAP management.

Search methods

Based on the protocol (PROSPERO 2019 CRD42019147411), we conducted a systematic search on MEDLINE/PubMed and Cochrane CENTRAL for RCTs, published or registered between 2010 and 2024.

Selection criteria

We included completed and ongoing RCTs that assessed pharmacological or non-pharmacological interventions in adults with VAP.

Data collection and synthesis

Data were collected using a tested extraction sheet, as endorsed by the Cochrane Collaboration. After cross-checking, data were summarised in a narrative and tabular form.

In total, 7,173 records were identified through the literature search. Following the exclusion of records that did not meet the eligibility criteria, 119 studies were included. Diagnostic criteria were provided in 51.2% of studies, and the term “ventilator-associated” was defined in 52.1% of studies. The most frequently included diagnostic criteria were pulmonary infiltrates (96.7%), fever (86.9%), hypothermia (49.1%), sputum (70.5%), and hypoxia (32.8%). These criteria were used in 38 different combinations across studies. The term “ventilator-associated” was defined in nine different ways.

Conclusions

When provided, diagnostic criteria and definitions of VAP in RCTs display notable variability. Continuous efforts to harmonise VAP diagnostic criteria in future clinical trials are crucial to improve quality of care, enable accurate epidemiological assessments, and guide effective antimicrobial stewardship.

Ventilator-associated pneumonia (VAP) stands as the most prevalent and serious hospital-acquired infection observed in intensive care units [ 1 ]. VAP prolongs hospital stays and durations of mechanical ventilation, and is associated with considerable mortality and increased healthcare costs [ 2 , 3 ].

Diagnosing VAP can be challenging for clinicians as it shares clinical signs and symptoms with other forms of pneumonia as well as non-infectious conditions [ 4 ]. The most recent international clinical guidelines define VAP as the presence of respiratory infection signs combined with new radiographic infiltrates in a patient who has been ventilated for at least 48 h [ 5 , 6 ]. While the guidelines developed by ERS/ESICM/ESCMID/ALAT do not provide a detailed definition of signs of respiratory infection [ 5 ], the ATS/IDSA guidelines mention that clinical signs may include the new onset of fever, purulent sputum, leucocytosis, and decline in oxygenation [ 6 ]. However, the ATS/IDSA guideline panel also acknowledges that there is no gold standard for the diagnosis of VAP [ 6 ]. This lack of a standardised definition is further highlighted by the varying, surveillance-based definitions of VAP provided by the Centers for Disease Control and Prevention (CDC) and the European Centre for Disease Prevention and Control (ECDC) [ 7 , 8 ]. These definitions, which combine clinical, radiological, and microbiological signs to identify cases of VAP, were established to standardise reporting and facilitate the monitoring of infections in healthcare settings. However, the criteria given by the CDC and ECDC may not always align with the diagnostic criteria used by clinicians to confirm or rule out the condition [ 9 , 10 , 11 ].

Variations in the eligibility criteria applied to VAP can have a significant impact on systematic reviews and meta-analyses that assess different interventions, primarily due to the potential lack of comparability among the studied populations [ 12 ]. Furthermore, the incidence of VAP may be underestimated when excessively strict diagnostic criteria are employed [ 13 , 14 ].

A recent systematic review conducted by Weiss et al. focused on inclusion and judgment criteria used in randomised controlled trials (RCTs) on nosocomial pneumonia and found considerable heterogeneity [ 15 ]. However, the authors only considered RCTs evaluating antimicrobial treatment as interventions, did not distinguish between hospital-acquired pneumonia (HAP) and VAP, and did not evaluate definitions of the term "ventilator-associated".

The objective of this systematic review was to provide a concise overview of the diagnostic criteria for VAP recently used in RCTs, as well as the definitions attributed to the term "ventilator-associated". Its findings will provide valuable insights to a forthcoming task force that aims to establish a uniform definition and diagnostic criteria for VAP in clinical trials. The task force will be made up of representatives from prominent international societies with an interest in VAP, as well as patient partners with lived experience. Harmonising the diagnostic criteria for VAP in upcoming clinical research is vital for enhancing patient care, enabling accurate epidemiological studies, and guiding successful antimicrobial stewardship programmes.

Protocol and registration

The protocol for this systematic review was registered in advance with the International Prospective Register of Systematic Reviews (PROSPERO 2019 CRD42019147411), encompassing a broad review focusing on pneumonia outcomes and diagnostic criteria in RCTs. Recognising the limitations of discussing all findings in one manuscript, we opted to produce several focused and comprehensive manuscripts, all employing the same fundamental methodology, as registered with PROSPERO. While a previous publication focused on outcomes reported in RCTs on pneumonia management [ 16 ], the current submission specifically addresses diagnostic criteria for VAP.

Eligibility criteria

We included registered, planned, and/or completed RCTs that: (1) enrolled adults with VAP; and (2) assessed the safety, efficacy, and/or effectiveness of pharmacological or non-pharmacological interventions for treating VAP.

We excluded systematic reviews, meta-analyses, narrative reviews, post hoc analyses of RCTs, observational studies, case reports, editorials, conference proceedings, and studies that did not exclusively focus on pneumonia (such as trials including patients with pneumonia alongside other diseases). Studies on pneumonia subtypes other than VAP, such as pneumonia without a specified subtype, community-acquired pneumonia (CAP), healthcare-associated pneumonia (HCAP), and HAP, were also excluded. To maintain focus and relevance, studies on Coronavirus Disease 2019 (COVID-19) were excluded, as its viral aetiology and distinct clinical management protocols differ significantly from those of VAP. RCT protocols were only included if the results had not been published in another article already included in this systematic review. Due to resource constraints and the lack of multilingual expertise within the review team, the review was restricted to English-language RCTs.

Information sources and search

On 20 May 2024, we searched MEDLINE/PubMed and the Cochrane Central Register of Controlled Trials (CENTRAL) for RCTs published between 1 January 2010 and 19 May 2024. We used electronic search algorithms combining controlled vocabulary and search terms, as reported in the Appendix.

Study selection

Two reviewers (FH, MF) independently screened titles and abstracts to identify eligible studies using Rayyan [ 17 ]. In case of disagreement, a third reviewer was consulted (AGM). After immediate exclusion of duplicates using EndNote X9, four reviewers (AGM, FH, JH, MF) independently checked for eligibility at full-text level. The results of the selection process are reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 18 ].

Data collection process

We developed an extraction sheet as endorsed by the Cochrane Collaboration [ 19 ]. The extraction sheet was independently tested by three reviewers (AGM, FH, MF) on five randomly selected studies and adapted to ensure good inter-reviewer agreement. The extraction sheet contained the following elements: (1) study ID, name, reference and NCT number; (2) type of pneumonia: CAP, HCAP, HAP and/or VAP; (3) diagnostic criteria for pneumonia; (4) definition of setting; (5) study origin, design, populations, interventions, and outcomes.

Four reviewers (AGM, FH, JH, MF) extracted data from the eligible studies. Data were extracted sequentially from either a manuscript containing published results, a published protocol, or, upon obtaining a trial registration number from CENTRAL, from one of the designated trial registries, such as ClinicalTrials.gov, the Clinical Trials Registry India (CTRI), the Chinese Clinical Trial Registry (ChiCTR), the European Clinical Trials Database (EudraCT), the Iranian Registry of Clinical Trials (IRCT), the Japan Primary Registries Network (JPRN), and the Japanese University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR). Cross-checking of all extracted data was performed by a second reviewer (AGM, AK, MF, RR, TW). Disagreements regarding data collection were resolved by discussion between all reviewers.

Synthesis of results

The findings were consolidated in a combination of narrative and tabular formats. Each diagnostic criterion is presented quantitatively, as a count and a proportion. Additionally, we analyse the various combinations of diagnostic criteria employed in RCTs in a sunburst diagram and in tabular format, along with the definitions attributed to the term "ventilator-associated".
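To illustrate how such combination counts can be derived, here is a minimal Python sketch using only the standard library; the criterion labels and the study entries are hypothetical, not drawn from the review dataset:

```python
from collections import Counter

# Hypothetical per-study criterion sets; frozensets make each
# combination order-insensitive and hashable, so it can be counted.
studies = [
    frozenset({"infiltrate", "fever", "sputum"}),
    frozenset({"infiltrate", "fever", "wbc"}),
    frozenset({"infiltrate", "fever", "sputum"}),
    frozenset({"infiltrate", "hypoxia"}),
]

# Tally each distinct combination of diagnostic criteria.
combos = Counter(studies)

for combo, n in combos.most_common():
    print(sorted(combo), n)

print("distinct combinations:", len(combos))
```

A real tabulation along the lines of Table S3 would feed the extracted criterion sets of all 61 studies into the same counter.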

Risk of bias

The main goal of this systematic review was to explore the diagnostic criteria used in clinical trials for diagnosing VAP. It covered trials with published protocols and/or results, as well as those only registered in a trial database. The varying levels and gaps in the information provided by the various sources made it difficult to conduct a reliable and meaningful risk of bias assessment for all included studies. However, for RCTs with published data, risk of bias was evaluated by four reviewers (AGM, JH, MF, RR) using the Risk of Bias in Randomized Trials 2 tool (RoB-2 tool), as endorsed by the Cochrane Collaboration [ 20 ].

Study selection and characteristics

A total of 7173 records were identified through the databases MEDLINE and CENTRAL, as illustrated in Fig.  1 . Following the removal of duplicate entries, a screening process involving the evaluation of titles and abstracts was conducted on 5652 records. Among these, 650 records were deemed potentially eligible for inclusion. Ultimately, our review included 119 studies that specifically focused on VAP (Table S1 in the Appendix, the full dataset is available online [ 21 ]).

Figure 1. PRISMA flowchart showing study selection

The total number of patients in the 119 identified studies was 21,289. Among these studies, 83 focused exclusively on VAP, while the remaining studies encompassed various subtypes of pneumonia in addition to VAP (see Table  1 ). The majority of these studies were registered, and their protocols were accessible either through publication in a journal article or on a clinical trial platform. Results were accessible in 56.3% of cases, while both results and the protocol were accessible in 36.9% of cases. In 40.3% of the included studies, data could only be obtained from a trial registry platform, with ClinicalTrials.gov being the primary platform in 36 out of 48 cases, and ChiCTR (n = 2), CTRI (n = 3), EudraCT (n = 3), IRCT (n = 2), JPRN (n = 1) and UMIN-CTR (n = 1) in the remaining cases.

Diagnostic criteria were provided in 51.2% of the studies, and the term “ventilator-associated” was defined in 52.1%. Of the 20 studies (16.8%) that referred to previously published diagnostic criteria, 13 cited the Clinical Pulmonary Infection Score (CPIS) [ 22 ], while the remainder referred to national and international guidelines.

We evaluated the risk of bias in 67 studies with published results using the RoB-2 tool. The overall assessment showed that 25% of the studies were at high risk of bias, 30% were at low risk of bias, and the remaining 45% had some concerns about potential bias. These results indicate variability in the methodological quality of the studies included in the review. The overall risk of bias and the detailed results of our assessments for the 67 studies are displayed in the Appendix (Figures SF1-SF2).

Diagnostic criteria for VAP

Pulmonary infiltrates

Of the 61 studies on VAP that provided diagnostic criteria, 59 (96.7%) included radiological evidence of a new or progressive pulmonary infiltrate.

Clinical signs and symptoms

The most frequently included clinical signs and symptoms were fever (86.9%), hypothermia (49.1%), sputum (70.5%), and hypoxia (32.8%). Different cut-off values were employed to define fever and hypothermia, as indicated in Table  2 . The largest proportion of studies (45.2%) used a cut-off of > 38 degrees Celsius (°C) to define fever, while 13.2% of studies used a cut-off of ≥ 38°C. For hypothermia, the most commonly employed cut-off value was < 35°C, used in 43.3% of the studies that included hypothermia as a criterion. Only a minority of studies provided information on the site of temperature measurement. Oral measurement was the most frequently employed method, followed by axillary and core temperature measurements (further details are displayed in Table S2 in the Appendix).
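As a sketch of how these cut-offs translate into a temperature criterion, consider the following Python function; the function name and the exclusive comparisons are our assumption, mirroring the > 38 °C and < 35 °C thresholds most often reported in Table 2:

```python
def temperature_criterion(temp_c: float,
                          fever_cutoff: float = 38.0,
                          hypothermia_cutoff: float = 35.0) -> bool:
    """True if the body temperature satisfies the fever criterion
    (strictly above the fever cut-off) or the hypothermia criterion
    (strictly below the hypothermia cut-off)."""
    return temp_c > fever_cutoff or temp_c < hypothermia_cutoff

print(temperature_criterion(38.5))  # True: fever under a > 38 °C cut-off
print(temperature_criterion(38.0))  # False: exactly 38.0 fails an exclusive cut-off
print(temperature_criterion(34.2))  # True: hypothermia under a < 35 °C cut-off
```

A trial using the ≥ 38 °C definition would instead classify a temperature of exactly 38.0 °C as fever, which is precisely the kind of between-trial discrepancy the review documents.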

Biochemistry criteria

Fifty-four studies (88.5%) incorporated white blood count abnormalities as part of their diagnostic criteria for VAP. Conversely, only one study included an elevation of procalcitonin (PCT) as a diagnostic factor, and none of the identified studies included C-reactive protein (CRP). The specific thresholds for leucocytosis and leucopoenia varied across studies, with leucocyte counts ranging from greater than 10,000/mm3 to greater than 12,000/mm3 for leucocytosis, and less than 3,500/mm3 to less than 4,500/mm3 for leucopoenia (Table  3 ).

Combinations of diagnostic criteria

All definitions of pneumonia were composite in nature and required the fulfilment of a minimum number of predetermined criteria for the diagnosis to be established. In 90.2% of the studies, the presence of a new pulmonary infiltrate was a mandatory criterion. Two studies did not include an infiltrate as a criterion, while the remaining four studies listed an infiltrate among their criteria but did not require it for a diagnosis.

The most commonly employed set of diagnostic criteria (18/61, 29.5%) consisted of a pulmonary infiltrate along with two or more additional criteria. However, these additional criteria varied across studies (Fig.  2 ). A quarter (17/61) of the included studies that provided diagnostic criteria required the fulfilment of all individual criteria for diagnosis, including an infiltrate. An infiltrate and one or more additional criteria were used to establish a diagnosis of VAP in 14.8% of studies (9/61). A total of 38 different combinations of diagnostic criteria for VAP were used in the 61 identified studies. A full set of these criteria is displayed in Table S3 in the Appendix.

Figure 2

The different combinations of diagnostic criteria used in VAP RCTs. CXR radiological evidence of a new infiltrate; T temperature criterion; WBC white blood count criterion; dys/tach dyspnoea and/or tachypnoea; O2 hypoxia; auscultation  auscultation abnormalities
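The most common composite rule described above (a mandatory infiltrate plus at least two additional criteria) can be sketched as follows. The particular set of additional criteria is a hypothetical example, since these varied across the included trials.

```python
# Illustrative implementation of a composite VAP definition: a new
# pulmonary infiltrate is mandatory, plus at least N additional criteria.
def meets_vap_definition(new_infiltrate: bool,
                         additional: dict,
                         minimum_additional: int = 2) -> bool:
    """Composite diagnosis: mandatory infiltrate + >= N additional criteria."""
    if not new_infiltrate:
        return False
    return sum(additional.values()) >= minimum_additional

# Hypothetical patient: fever and purulent sputum present, others absent.
patient = {"fever": True, "purulent_sputum": True,
           "leucocytosis": False, "hypoxia": False}
print(meets_vap_definition(True, patient))   # True  (2 of 4 criteria met)
print(meets_vap_definition(False, patient))  # False (no infiltrate)
```

Changing `minimum_additional` (or dropping the infiltrate requirement) reproduces the other combination patterns reported above, which is precisely why 38 different criterion sets could arise from largely the same building blocks.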

Definition of “ventilator-associated”

We noted that 52.1% of included studies incorporated a specific definition of the term “ventilator-associated” (Table  4 ). A total of nine distinct definitions were identified across 62 RCTs. The definition most commonly used was “onset after > 48 h of mechanical ventilation” (82.3%). Other definitions employed varying time thresholds, ranging from 24 h to seven days. Additionally, certain studies introduced supplementary criteria to further delineate the concept of “ventilator-associated”, such as administration of antibiotics prior to mechanical ventilation, duration of hospitalisation, or the timing of extubation.
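The most common definition above reduces to a simple threshold check, sketched here with a configurable cut-off because the included trials used thresholds ranging from 24 h to seven days (supplementary criteria such as prior antibiotics are omitted).

```python
# Sketch of the most common "ventilator-associated" definition:
# onset after > 48 h of mechanical ventilation.
def is_ventilator_associated(hours_ventilated_at_onset: float,
                             threshold_hours: float = 48.0) -> bool:
    return hours_ventilated_at_onset > threshold_hours

print(is_ventilator_associated(72))                      # True
print(is_ventilator_associated(36))                      # False
print(is_ventilator_associated(36, threshold_hours=24))  # True under a 24 h rule
```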

Summary of evidence

This systematic review provides a concise overview of the diagnostic criteria for VAP used in RCTs and the definitions attributed to the term “ventilator-associated”. A total of 119 studies on VAP, published or registered between 2010 and 2024, were included, spanning a total of 21,289 patients. The majority of studies focused exclusively on VAP, while some also included other subtypes of pneumonia alongside VAP. Diagnostic criteria were provided in only 51.2% of the studies, and the term “ventilator-associated” was defined in only 52.1% of the studies. The most commonly utilised definition for “ventilator-associated” was “onset after > 48 h of mechanical ventilation”, used by 82.3% of studies providing a definition.

In clinical practice, the diagnosis of VAP is often based on a combination of clinical signs, laboratory results, and imaging findings, yet these are not without their limitations [ 8 ]. Our systematic review revealed considerable heterogeneity among diagnostic criteria for VAP in recent RCTs. Various combinations of specific criteria were employed to define VAP, leading to significant variability. Moreover, commonly used criteria were defined in different ways, with variations observed in the thresholds set for fever/hypothermia, as well as leucocytosis/leucopoenia.

Several criteria that were used in the studies included in our review have been shown to be insufficient for confirming a diagnosis of VAP. One of the most important criteria, included in the majority of reviewed RCTs, a new or progressive pulmonary infiltrate, has previously been reported to be of limited diagnostic value due to a lack of specificity [ 14 ]. Additionally, criteria like fever/hypothermia and the measurement of biomarkers such as leukocytes, CRP, and PCT may not be effective in diagnosing or excluding VAP in various clinical settings [ 4 , 23 , 24 ]. Despite this, CRP is widely used and has demonstrated some clinical value in predicting VAP [ 25 ]. It is, therefore, surprising that none of the RCTs included in our review employed CRP as a diagnostic criterion.

Overall, the findings of our systematic review underline the diverse nature of VAP, with different diagnostic criteria increasing the risk of both over- and underdiagnosis of VAP [ 14 , 26 ]. There have been attempts to diagnose VAP more objectively, one of these being the development of the CPIS in 1991, a six-component score that 10.9% of studies included in our review referred to [ 27 ]. This score includes different cut-offs for body temperature, leucocyte counts, tracheal secretion appearances, oxygenation levels and radiographical changes to estimate the risk for VAP. However, the CPIS has been shown not to be superior to other diagnostic criteria, and, therefore, its application remains controversial [ 8 , 11 , 22 , 28 ]. Other commonly applied criteria, such as the surveillance-based criteria by the ECDC and CDC, did not seem to be accurate enough to detect true cases of VAP either [ 9 , 10 , 11 ]. Furthermore, there is limited agreement between the two surveillance-based criteria, which has previously resulted in different estimates of VAP events [ 29 ].

In the absence of definitive diagnostic scores or sets of diagnostic criteria to detect all true cases of VAP, the findings of our systematic review indicate the need for more homogeneous diagnostic criteria in future RCTs to assure their comparability. Currently, international guidelines avoid providing clear diagnostic criteria for VAP [ 5 , 6 ]. Given the significance of establishing strong consensus definitions for high-risk conditions like VAP, it is worth emphasising that a uniform definition is crucial not only for advancing therapeutic research but also, perhaps more importantly, for refining diagnostic methods. Together with core outcome sets, these definitions can help to improve the likelihood of attaining robust and reliable findings in forthcoming systematic reviews and meta-analyses [ 16 , 30 ].

Strengths and limitations

We used a comprehensive search strategy which included multiple databases and a wide range of search terms, ensuring broad identification of all potentially relevant trials. Additionally, the inclusion criteria were clearly defined, and the study selection process was conducted independently by multiple reviewers to minimise bias. The extraction sheet used for data collection was tested for inter-reviewer agreement and adapted accordingly. Another strength is the open availability of the complete dataset, maximising the transparency and reproducibility of our findings.

However, the following limitations need to be acknowledged. Firstly, the review only included RCTs published in English, which may have introduced language bias. This approach was adopted to ensure feasible and reliable data analysis within the scope of the resources available.

Additionally, the exclusion of studies focusing on pneumonia subtypes other than VAP may limit the generalisability of our findings. Furthermore, the lack of diagnostic criteria and definitions in a significant proportion of included studies suggests a potential reporting bias. This may be reinforced by the fact that 40.3% of the data were retrieved from trial registry platforms; compared with final manuscript publications, reporting of eligibility criteria is often incomplete on registry platforms, which must be highlighted as a limitation [ 31 ].

This systematic review provides an overview of diagnostic criteria for VAP used in RCTs and the definitions attributed to the term “ventilator-associated”. Our findings highlight the heterogeneity and lack of standardisation in commonly used diagnostic criteria, as well as the variability in definitions of "ventilator-associated" across clinical trials. We emphasise the need for a uniform definition of VAP to enable better comparability between studies and interventions. The results of this review will inform the work of an upcoming task force aimed at establishing such standardised criteria.

Availability of data and materials

Raw data are accessible via the Open Science Framework (OSF) at osf.io/v3x42. This link is referenced in our manuscript (Ref. 21).

Torres A, Cilloniz C, Niederman MS, et al. Pneumonia. Nat Rev Dis Primers. 2021;7(1):28. https://doi.org/10.1038/s41572-021-00259-0 .


Muscedere JG, Day A, Heyland DK. Mortality, attributable mortality, and clinical events as end points for clinical trials of ventilator-associated pneumonia and hospital-acquired pneumonia. Clin Infect Dis. 2010;51:S120–5. https://doi.org/10.1086/653060 .


Melsen WG, Rovers MM, Groenwold RH, et al. Attributable mortality of ventilator-associated pneumonia: a meta-analysis of individual patient data from randomised prevention studies. Lancet Infect Dis. 2013;13:665–71. https://doi.org/10.1016/S1473-3099(13)70081-1 .

Alagna L, Palomba E, Chatenoud L, et al. Comparison of multiple definitions for ventilator-associated pneumonia in patients requiring mechanical ventilation for non-pulmonary conditions: preliminary data from PULMIVAP, an Italian multi-centre cohort study. J Hosp Infect. 2023;140:90–5. https://doi.org/10.1016/j.jhin.2023.07.023 .


Torres A, Niederman MS, Chastre J, et al. International ERS/ESICM/ESCMID/ALAT guidelines for the management of hospital-acquired pneumonia and ventilator-associated pneumonia. Eur Respir J. 2017;50:1700582. https://doi.org/10.1183/13993003.00582-2017 .

Kalil AC, Metersky ML, Klompas M, et al. Management of Adults With Hospital-acquired and Ventilator-associated Pneumonia: 2016 Clinical Practice Guidelines by the Infectious Diseases Society of America and the American Thoracic Society. Clin Infect Dis. 2016;63:e61–111. https://doi.org/10.1093/cid/ciw353 .


Plachouras D, Lepape A, Suetens C. ECDC definitions and methods for the surveillance of healthcare-associated infections in intensive care units. Intensive Care Med. 2018;44:2216–8. https://doi.org/10.1007/s00134-018-5113-0 .

Nair GB, Niederman MS. Ventilator-associated pneumonia: present understanding and ongoing debates. Intensive Care Med. 2015;41:34–48. https://doi.org/10.1007/s00134-014-3564-5 .

Ramírez-Estrada S, Lagunes L, Peña-López Y, et al. Assessing predictive accuracy for outcomes of ventilator-associated events in an international cohort: the EUVAE study. Intensive Care Med. 2018;44:1212–20. https://doi.org/10.1007/s00134-018-5269-7 .

Waltrick R, Possamai DS, de Aguiar FP, et al. Comparison between a clinical diagnosis method and the surveillance technique of the Center for Disease Control and Prevention for identification of mechanical ventilator-associated pneumonia. Rev Bras Ter Intensiva. 2015;27:260. https://doi.org/10.5935/0103-507X.20150047 .

Rahimibashar F, Miller AC, Yaghoobi MH, Vahedian-Azimi A. A comparison of diagnostic algorithms and clinical parameters to diagnose ventilator-associated pneumonia: a prospective observational study. BMC Pulm Med. 2021;21:161. https://doi.org/10.1186/s12890-021-01527-1 .


Malmivaara A. Methodological considerations of the GRADE method. Ann Med. 2015;47:1–5.

Al-Omari B, McMeekin P, Allen AJ, et al. Systematic review of studies investigating ventilator associated pneumonia diagnostics in intensive care. BMC Pulm Med. 2021;21:196. https://doi.org/10.1186/s12890-021-01560-0 .

Fernando SM, Tran A, Cheng W, et al. Diagnosis of ventilator-associated pneumonia in critically ill adult patients—a systematic review and meta-analysis. Intensive Care Med. 2020;46:1170–9. https://doi.org/10.1007/s00134-020-06036-z .

Weiss E, Essaied W, Adrie C, et al. Treatment of severe hospital-acquired and ventilator-associated pneumonia: a systematic review of inclusion and judgment criteria used in randomized controlled trials. Crit Care. 2017. https://doi.org/10.1186/s13054-017-1755-5 .

Mathioudakis AG, Fally M, Hansel J, et al. Clinical trials of pneumonia management assess heterogeneous outcomes and measurement instruments. J Clin Epidemiol. 2023;164:88–95. https://doi.org/10.1016/j.jclinepi.2023.10.011 .

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5:210. https://doi.org/10.1186/s13643-016-0384-4 .

Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann Intern Med. 2009;151:264–9.

Li T, Higgins J, Deeks J, editors. Chapter 5: Collecting data. In: Cochrane Handbook for Systematic Reviews of Interventions, version 6.0; 2019. https://training.cochrane.org/handbook/current/chapter-05 . Accessed 21 Jul 2020.

Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366: l4898. https://doi.org/10.1136/bmj.l4898 .

ERS COS Pneumonia dataset. http://osf.io/v3x42

Zilberberg MD, Shorr AF. Ventilator-associated pneumonia: the clinical pulmonary infection score as a surrogate for diagnostics and outcome. Clin Infect Dis. 2010;51:S131–5. https://doi.org/10.1086/653062 .

Huang H-B, Peng J-M, Weng L, et al. Procalcitonin-guided antibiotic therapy in intensive care unit patients: a systematic review and meta-analysis. Ann Intensive Care. 2017;7:114. https://doi.org/10.1186/s13613-017-0338-6 .

Palazzo SJ, Simpson T, Schnapp L. Biomarkers for ventilator-associated pneumonia: review of the literature. Heart Lung. 2011;40:293–8. https://doi.org/10.1016/j.hrtlng.2010.11.003 .

Póvoa P, Martin-Loeches I, Ramirez P, et al. Biomarker kinetics in the prediction of VAP diagnosis: results from the BioVAP study. Ann Intensive Care. 2016;6:32. https://doi.org/10.1186/s13613-016-0134-8 .

Johnstone J, Muscedere J, Dionne J, et al. Definitions, rates and associated mortality of ICU-acquired pneumonia: a multicenter cohort study. J Crit Care. 2023;75:154284. https://doi.org/10.1016/j.jcrc.2023.154284 .

Pugin J, Auckenthaler R, Mili N, et al. Diagnosis of ventilator-associated pneumonia by bacteriologic analysis of bronchoscopic and nonbronchoscopic “blind” bronchoalveolar lavage fluid. Am Rev Respir Dis. 1991;143:1121–9. https://doi.org/10.1164/ajrccm/143.5_Pt_1.1121 .

Fàbregas N, Ewig S, Torres A, et al. Clinical diagnosis of ventilator associated pneumonia revisited: comparative validation using immediate post-mortem lung biopsies. Thorax. 1999;54:867–73.

Craven TH, Wojcik G, McCoubrey J, et al. Lack of concordance between ECDC and CDC systems for surveillance of ventilator associated pneumonia. Intensive Care Med. 2018;44:265–6. https://doi.org/10.1007/s00134-017-4993-8 .

Mathioudakis AG, Khaleva E, Fally M, et al. Core outcome sets, developed collaboratively with patients, can improve the relevance and comparability of clinical trials. Eur Respir J. 2023;61:2202107. https://doi.org/10.1183/13993003.02107-2022 .

Speich B, Gloy VL, Klatte K, et al. Reliability of trial information across registries for trials with multiple registrations. JAMA Netw Open. 2021;4:e2128898. https://doi.org/10.1001/jamanetworkopen.2021.28898 .

Download references

Acknowledgements

We would like to acknowledge and honour the contributions of Prof. Tobias Welte, who was a vital member of our research team and co-author of this manuscript. Prof. Welte passed away after the initial submission of this work but before its final acceptance. His insights and expertise were invaluable to the development of this research, and he remains deeply missed by the team. We dedicate this work to his memory.

Open access funding provided by Copenhagen University. This study was partly supported by the NIHR Manchester Biomedical Research Centre (BRC, NIHR203308) as well as the Capital Region of Denmark (Region Hovedstaden). The funders had no role in study design, data collection or analysis, decision to publish, or preparation of the manuscript. Dr Jan Hansel was supported by an NIHR Academic Clinical Fellowship in Intensive Care Medicine. Dr Rebecca Robey was supported by an NIHR Academic Clinical Fellowship in Respiratory Medicine. Dr Alexander G. Mathioudakis was supported by an NIHR Clinical Lectureship in Respiratory Medicine. All authors have completed an ICMJE uniform disclosure form detailing any conflicts of interest outside the submitted work that they may have. None of the authors have conflicts directly related to this work.

Author information

Authors and Affiliations

Department of Respiratory Medicine and Infectious Diseases, Copenhagen University Hospital – Bispebjerg and Frederiksberg, Copenhagen, Denmark

Markus Fally

North West Lung Centre, Wythenshawe Hospital, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK

Faiuna Haseeb, Ahmed Kouta, Rebecca C. Robey, Timothy Felton & Alexander G. Mathioudakis

Division of Immunology, Immunity to Infection and Respiratory Medicine, School of Biological Sciences, The University of Manchester, Manchester, UK

Faiuna Haseeb, Ahmed Kouta, Jan Hansel, Rebecca C. Robey, Timothy Felton & Alexander G. Mathioudakis

North West School of Intensive Care Medicine, Health Education England North West, Manchester, UK

Acute Intensive Care Unit, Wythenshawe Hospital, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK

Thomas Williams & Timothy Felton

Department of Respiratory Medicine and German Centre of Lung Research (DZL), Hannover Medical School, Hannover, Germany

Tobias Welte


Contributions

MF: conceptualisation, methodology, software, formal analysis, investigation, data curation, writing—original draft, visualisation, project administration. FH: conceptualisation, investigation, data curation, validation, writing—review and editing. AK, JH, RCR and TWI: data curation, validation, writing—review and editing. TWE: conceptualisation, investigation, methodology, resources, validation, writing—review and editing. TF: conceptualisation, investigation, methodology, resources, validation, writing—review and editing, supervision. AGM: conceptualisation, investigation, methodology, software, resources, validation, writing—review and editing, project administration, supervision, funding acquisition.

Corresponding author

Correspondence to Markus Fally .

Ethics declarations

Ethics approval and consent to participate

Not applicable, as this was a methodological systematic review without patient involvement/participation.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1 (DOCX 807 kb)

Search strategy

MEDLINE/PubMed

#1: pneumonia [mh]

#2: bronchopneumonia [mh]

#3: pleuropneumonia [mh]

#4: Healthcare-Associated Pneumonia [mh]

#5: Ventilator-Associated Pneumonia [mh]

#6: pneumonia [ti]

#7: pneumonia* [ti]

#8: bronchopneumonia [ti]

#9: pleuropneumonia [ti]

#10: #1 OR #2 OR #3 OR #4 OR #5 OR #6 OR #7 OR #8 OR #9

#11: randomized controlled trial [pt]

#12: controlled clinical trial [pt]

#13: randomized [tiab]

#14: placebo [tiab]

#15: clinical trials as topic [mesh: noexp]

#16: randomly [tiab]

#17: trial [ti]

#18: #11 OR #12 OR #13 OR #14 OR #15 OR #16 OR #17

#19: animals [mh] NOT humans [mh]

#20: children [mh] NOT adults [mh]

#21: COVID-19 [mh] or (covid[ti]) or (coronavirus [ti]) or (sars-cov-2[ti]) or (covid-19[ti]) or (pandemic[ti])

#22: #19 OR #20 OR #21

#23: #18 NOT #22

#24: #10 AND #23

#25: Publication date: 2010–2024
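The numbered strategy above can be assembled into a single query string, for example to run programmatically against PubMed. The sketch below only concatenates the listed terms; the field tags are copied verbatim, and the grouping mirrors steps #10, #18, #22, #23 and #24 (the date limit of step #25 is omitted).

```python
# Assemble the MEDLINE/PubMed strategy into one boolean query string.
pneumonia = " OR ".join([  # steps #1-#10
    "pneumonia [mh]", "bronchopneumonia [mh]", "pleuropneumonia [mh]",
    "Healthcare-Associated Pneumonia [mh]",
    "Ventilator-Associated Pneumonia [mh]",
    "pneumonia [ti]", "pneumonia* [ti]", "bronchopneumonia [ti]",
    "pleuropneumonia [ti]",
])

rct = " OR ".join([  # steps #11-#18
    "randomized controlled trial [pt]", "controlled clinical trial [pt]",
    "randomized [tiab]", "placebo [tiab]",
    "clinical trials as topic [mesh: noexp]", "randomly [tiab]", "trial [ti]",
])

excluded = " OR ".join([  # steps #19-#22
    "(animals [mh] NOT humans [mh])",
    "(children [mh] NOT adults [mh])",
    "(COVID-19 [mh] OR covid [ti] OR coronavirus [ti] OR sars-cov-2 [ti]"
    " OR covid-19 [ti] OR pandemic [ti])",
])

# Step #23: RCT filter NOT exclusions; step #24: intersect with pneumonia terms.
query = f"({pneumonia}) AND (({rct}) NOT ({excluded}))"
print(query[:80])
```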

Cochrane library

#1: MeSH descriptor: [Pneumonia] explode all trees

#2: pneumonia*:ti

#3: #1 or #2

#4: MeSH descriptor: [COVID-19] explode all trees

#5: COVID-19:ti

#6: covid:ti

#7: coronavirus:ti

#8: sars-cov-2:ti

#9: #4 or #5 or #6 or #7 or #8

#10: #3 not #9

#11: Limit: Publication Date from 2010–2024

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Fally, M., Haseeb, F., Kouta, A. et al. Unravelling the complexity of ventilator-associated pneumonia: a systematic methodological literature review of diagnostic criteria and definitions used in clinical research. Crit Care 28 , 214 (2024). https://doi.org/10.1186/s13054-024-04991-3

Download citation

Received : 28 February 2024

Accepted : 15 June 2024

Published : 02 July 2024

DOI : https://doi.org/10.1186/s13054-024-04991-3


Keywords

  • Diagnostic criteria
  • Inclusion criteria
  • Clinical trial
  • Ventilator-associated pneumonia
  • Systematic review

Critical Care

ISSN: 1364-8535

Drake University - visit www.drake.edu
Conducting a Literature Review


What is a Literature Review?

Description

A literature review, also called a review article or review of literature, surveys the existing research on a topic. The term "literature" in this context refers to published research or scholarship in a particular discipline, rather than "fiction" (like American Literature) or an individual work of literature. In general, literature reviews are most common in the sciences and social sciences.

Literature reviews may be written as standalone works, or as part of a scholarly article or research paper. In either case, the purpose of the review is to summarize and synthesize the key scholarly work that has already been done on the topic at hand. The literature review may also include some analysis and interpretation. A literature review is  not  a summary of every piece of scholarly research on a topic.

Why are literature reviews useful?

Literature reviews can be very helpful for newer researchers or those unfamiliar with a field by synthesizing the existing research on a given topic, providing the reader with connections and relationships among previous scholarship. Reviews can also be useful to veteran researchers by identifying potential gaps in the research or steering future research questions toward unexplored areas. If a literature review is part of a scholarly article, it should include an explanation of how the current article adds to the conversation. (From: https://library.drake.edu/englit/criticism)

How is a literature review different from a research article?

Research articles "are empirical articles that describe one or several related studies on a specific, quantitative, testable research question... they are typically organized into four text sections: Introduction, Methods, Results, Discussion." (Source: https://psych.uw.edu/storage/writing_center/litrev.pdf)

Steps for Writing a Literature Review

1. Identify and define the topic that you will be reviewing.

The topic, which is commonly a research question (or problem) of some kind, needs to be identified and defined as clearly as possible.  You need to have an idea of what you will be reviewing in order to effectively search for references and to write a coherent summary of the research on it.  At this stage it can be helpful to write down a description of the research question, area, or topic that you will be reviewing, as well as to identify any keywords that you will be using to search for relevant research.

2. Conduct a Literature Search

Use a range of keywords to search databases such as PsycINFO and any others that may contain relevant articles. You should focus on peer-reviewed, scholarly articles. In SuperSearch and most databases, you may find it helpful to select the Advanced Search mode and include "literature review" or "review of the literature" in addition to your other search terms. Published books may also be helpful, but keep in mind that peer-reviewed articles are widely considered to be the "gold standard" of scientific research. Read through titles and abstracts, select and obtain articles (that is, download, copy, or print them out), and save your searches as needed. Most of the databases you will need are linked to from the Cowles Library Psychology Research guide.

3. Read through the research that you have found and take notes.

Absorb as much information as you can.  Read through the articles and books that you have found, and as you do, take notes.  The notes should include anything that will be helpful in advancing your own thinking about the topic and in helping you write the literature review (such as key points, ideas, or even page numbers that index key information).  Some references may turn out to be more helpful than others; you may notice patterns or striking contrasts between different sources; and some sources may refer to yet other sources of potential interest.  This is often the most time-consuming part of the review process.  However, it is also where you get to learn about the topic in great detail. You may want to use a Citation Manager to help you keep track of the citations you have found. 

4. Organize your notes and thoughts; create an outline.

At this stage, you are close to writing the review itself. However, it is often helpful to first reflect on all the reading that you have done. What patterns stand out? Do the different sources converge on a consensus? Or not? What unresolved questions still remain? You should look over your notes (it may also be helpful to reorganize them), and as you do, think about how you will present this research in your literature review. Are you going to summarize or critically evaluate? Are you going to use a chronological or other type of organizational structure? It can also be helpful to create an outline of how your literature review will be structured.

5. Write the literature review itself and edit and revise as needed.

The final stage involves writing.  When writing, keep in mind that literature reviews are generally characterized by a  summary style  in which prior research is described sufficiently to explain critical findings but does not include a high level of detail (if readers want to learn about all the specific details of a study, then they can look up the references that you cite and read the original articles themselves).  However, the degree of emphasis that is given to individual studies may vary (more or less detail may be warranted depending on how critical or unique a given study was).   After you have written a first draft, you should read it carefully and then edit and revise as needed.  You may need to repeat this process more than once.  It may be helpful to have another person read through your draft(s) and provide feedback.

6. Incorporate the literature review into your research paper draft. (Note: this step applies only if you are using the literature review to write a research paper; often, the literature review is an end in itself.)

After the literature review is complete, you should incorporate it into your research paper (if you are writing the review as one component of a larger paper).  Depending on the stage at which your paper is at, this may involve merging your literature review into a partially complete Introduction section, writing the rest of the paper around the literature review, or other processes.

These steps were taken from: https://psychology.ucsd.edu/undergraduate-program/undergraduate-resources/academic-writing-resources/writing-research-papers/writing-lit-review.html#6.-Incorporate-the-literature-r

  • Last Updated: Aug 8, 2024 11:43 AM


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Brown KL, Pagel C, Ridout D, et al. Early morbidities following paediatric cardiac surgery: a mixed-methods study. Southampton (UK): NIHR Journals Library; 2020 Jul. (Health Services and Delivery Research, No. 8.30.)


Appendix 1: The PRISMA flow diagram for the literature review

FIGURE 20. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram for the literature review. CINAHL, Cumulative Index to Nursing and Allied Health Literature.



COMMENTS

  1. A Scoping Review of Flow Research

    Our scoping review provides a systematic overview of flow research between the years 2000 and 2016. A task force of flow researchers from the EFRN united their expertise to provide a sound scientific summary and discussion of flow research in these years and implications for future research.

  2. How to Write a Literature Review

    What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

  3. (PDF) A Scoping Review of Flow Research

    Our review (1) provides a framework to cluster flow research, (2) gives a systematic overview about existing studies and their findings, and (3) provides an overview about implications for future ...

  4. Ten Simple Rules for Writing a Literature Review

    Reviewing the literature requires the ability to juggle multiple tasks, from finding and evaluating relevant material to synthesising information from various sources, from critical thinking to paraphrasing, evaluating, and citation skills [7]. In this contribution, I share ten simple rules I learned working on about 25 literature reviews as a PhD and postdoctoral student. Ideas and insights ...

  5. Investigating the "Flow" Experience: Key Conceptual and Operational

    A Review of Flow Operationalizations in the Psychological Literature. Within any field of science, the consensual operationalization of central constructs is a sine qua non for progress. When this is lacking, results across studies cannot be compared, and the potential for progress in the field is severely undermined.

  6. Getting into a "Flow" state: a systematic review of flow experience in

    When developing new therapy games, measuring flow experience can indicate whether the game motivates one to train. The purpose of this study was to identify and systematically review current literature on flow experience assessed in patients with stroke, traumatic brain injury, multiple sclerosis and Parkinson's disease.

  7. Systematic Reviews: Step 8: Write the Review

    The flow diagram depicts the flow of information through the different phases of a systematic review. It maps out the number of records identified, included and excluded, and the reasons for exclusions.

  8. A SYSTEMATIC REVIEW OF STUDIES ON FLOW EXPERIENCE FROM ...

    Purpose: This paper presents a systematic literature review on flow experience to identify the theoretical underpinnings, outcomes, antecedents, and empirical dimensions of the phenomenon used in ...

  9. Literature Review

    A literature review is an account of what has been published on a topic by accredited scholars and researchers. Occasionally you will be asked to write one as a separate assignment, but more often it is part of the introduction to an essay, research report, or thesis. In writing the literature review, your purpose is to convey to your reader what knowledge and ideas have been established on a ...

  10. Writing a Literature Review

    Writing a Literature Review. A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels ...

  11. Introduction

    Narrative review: The purpose of this type of review is to describe the current state of the research on a specific topic/research question and to offer a critical analysis of the literature reviewed. Studies are grouped by research/theoretical categories, and themes and trends, strengths and weaknesses, and gaps are identified. The review ends with a conclusion section which summarizes the findings ...

  12. How to Write a Literature Review: Six Steps to Get You from ...

    Assistant Professor Tanya Golash-Boza summarizes six steps to help you learn how to write a literature review.

  13. Research Guides: Writing in the Health and Social Sciences: Literature

    Guide to writing, citing, and publishing resources for the health and social sciences.

  14. The PRISMA statement for reporting systematic reviews and ...

    The flow diagram originally proposed by QUOROM was also modified to show numbers of identified records, excluded articles, and included studies. After 11 revisions the group approved the checklist, flow diagram, and this explanatory paper. Fig 1 Flow of information through the different phases of a systematic review.

  15. A Scoping Review of Flow Research

    Our review (1) provides a framework to cluster flow research, (2) gives a systematic overview about existing studies and their findings, and (3) provides an overview about implications for future research. The provided framework consists of three levels of flow research.

  16. Flow research in music contexts: A systematic literature review

    Topics covered in the studies reviewed include the psychophysiological aspects of flow, transmission and group experience of flow, the association of flow with a range of positive outcomes, factors that contribute to flow experiences, and flow experiences of young children. Implications for future research were proffered in light of the findings.

  17. PDF Literature Review Flowchart

    Step 1. Select a Topic — Task 1. Identify a Subject for Study. Step 2. Develop the Tools of Argumentation — Concept 1. Building the Case for a Literature Review. Step 3. Search the Literature — Task 1. Select the Literature to Review; Task 2. Translate the Personal Interest or Concern Into a Research Query — Activity 1. Focus a Research Interest; Activity 2. Limit the Interest; Activity 3. Select ...

  18. PRISMA 2020 explanation and elaboration: updated guidance and exemplars

    The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) ...

  19. PDF Systematic Review Flowchart

    Decide what type of review you are doing. A proper systematic review looks at absolutely every resource to find all the information to answer a very narrow research question. What's In A Name?: the difference between a systematic review and a literature review and why it matters. (1)

  20. A Systematic Literature Review on the Experience of Flow and its

    Flow is a state of entire immersion in the present action, which can lead to effortless and joyful performances. The primary objective of this systematic literature review was directed toward ...

  21. PRISMA 2020 flow diagram

    The flow diagram depicts the flow of information through the different phases of a systematic review. It maps out the number of records identified, included and excluded, and the reasons for exclusions. Different templates are available depending on the type of review (new or updated) and sources used to identify studies: PRISMA 2020 flow ...

  22. Results and PRISMA Flow Diagram

    Steps in a Systematic Review. Searching the Published Literature. Searching the Gray Literature. Methodology and Documentation. Managing the Process. Help. Scoping Reviews. Includes the number of results retrieved from each source. Duplicates are removed.

  23. Biological Sciences: Finding and evaluating resources for your

    This session equips participants with all the fundamental skills that they need to research and begin writing their literature review. This includes building and executing effective search strategies to locate relevant materials for literature reviews, projects and other related research activities, key searching techniques, where to search, and how to keep up to date with the

  24. PDF OFR 2024-1033: A Literature Review and Hypsometric Analysis to Support

    Literature Review 5. Flow Range. Flow range is the difference between maximum and minimum flows over a specific time interval. Higher flow ... More research on the duration of the high-flow period of TMFs in below-average or average water years could reduce uncertainty in

  25. Unravelling the complexity of ventilator-associated pneumonia: a

    Ventilator-associated pneumonia (VAP) is a prevalent and grave hospital-acquired infection that affects mechanically ventilated patients. Diverse diagnostic criteria can significantly affect VAP research by complicating the identification and management of the condition, which may also impact clinical management. We conducted this review to assess the diagnostic criteria and the definitions of ...

  26. Cowles Library: Psychology: Conducting a Literature Review

    Description. A literature review, also called a review article or review of literature, surveys the existing research on a topic. The term "literature" in this context refers to published research or scholarship in a particular discipline, rather than "fiction" (like American Literature) or an individual work of literature.

  27. The PRISMA flow diagram for the literature review

    FIGURE 20 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram for the literature review. CINAHL, Cumulative Index to Nursing and Allied Health Literature.

  28. How can I handle unavailable papers in a systematic literature review?

    Well, when handling unavailable papers in a systematic literature review, it is essential to adopt a structured approach to ensure the integrity and comprehensiveness of the review.