
Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, in the systematic review used as a running example throughout this article, Boyle and colleagues answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

What is a systematic review?

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic review vs. meta-analysis

Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .

Systematic review vs. literature review

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Systematic review vs. scoping review

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


When to conduct a systematic review

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • A review team. Steps such as applying the selection criteria and extracting the data are designed to be done by two or more reviewers. If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Pros and cons of systematic reviews

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

Step-by-step example of a systematic review

The seven steps for conducting a systematic review are explained below with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the eczema example, Boyle and colleagues’ research question had the following components:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo , or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized control trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
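To make the PICO structure concrete, here is a minimal sketch in Python (the dataclass and its field names are our own illustration, not part of any standard PICO tooling; the values simply restate the eczema example above):

```python
from dataclasses import dataclass

@dataclass
class PICO:
    population: str
    intervention: str
    comparison: str
    outcome: str

    def question(self) -> str:
        # Fills the template "What is the effectiveness of I versus C for O in P?"
        return (f"What is the effectiveness of {self.intervention} "
                f"versus {self.comparison} for {self.outcome} "
                f"in {self.population}?")

eczema_review = PICO(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, placebo, or non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
)
print(eczema_review.question())
```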

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective (s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .
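As an illustration of how synonyms and Boolean operators can be combined into a search string, here is a minimal sketch (the synonym lists and query syntax are assumptions for illustration, not the search strategy used in the example review, and each database has its own syntax):

```python
# Illustrative synonym groups for a probiotics-and-eczema question.
population_terms = ["eczema", "atopic dermatitis", "atopic eczema"]
intervention_terms = ["probiotic*", "lactobacillus", "bifidobacterium"]

def or_group(terms):
    # Join synonyms with OR inside parentheses, quoting multi-word phrases.
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# Combine the concept groups with AND.
query = " AND ".join([or_group(population_terms), or_group(intervention_terms)])
print(query)
# (eczema OR "atopic dermatitis" OR "atopic eczema") AND (probiotic* OR lactobacillus OR bifidobacterium)
```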

In the eczema example, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.
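One common way to quantify inter-rater reliability at this stage is Cohen’s kappa on the two reviewers’ include/exclude decisions. Below is a minimal sketch (the decision lists are invented, and real reviews typically report kappa alongside the raw agreement):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making the same categorical decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)

# Invented include/exclude decisions for ten abstracts.
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude",
              "include", "exclude", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude",
              "include", "exclude", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # 0.58 for this example
```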

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .
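Keeping running counts of records at each stage makes the PRISMA flow diagram easy to fill in later. A minimal record-keeping sketch (all numbers and exclusion reasons below are invented):

```python
# Illustrative screening log; real counts come from your own search and screening.
log = {
    "records_identified": 1240,
    "duplicates_removed": 310,
    "excluded_on_title_abstract": 820,
    "full_text_exclusions": {"wrong population": 42, "no control group": 31},
}

records_screened = log["records_identified"] - log["duplicates_removed"]
full_texts_assessed = records_screened - log["excluded_on_title_abstract"]
studies_included = full_texts_assessed - sum(log["full_text_exclusions"].values())

print(f"screened: {records_screened}, full texts: {full_texts_assessed}, "
      f"included: {studies_included}")  # screened: 930, full texts: 110, included: 37
```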

After screening titles and abstracts, Boyle and colleagues retrieved the full texts of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .
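A data extraction form can be as simple as a fixed set of fields applied identically to every included study. Here is a hypothetical sketch (the field names and the example study are invented, not taken from the registries mentioned above):

```python
from dataclasses import dataclass, asdict

@dataclass
class ExtractionRecord:
    study_id: str
    year: int
    design: str
    sample_size: int
    outcome_measure: str
    effect_estimate: float
    risk_of_bias: str        # e.g. "low", "some concerns", "high"
    notes: str = ""

record = ExtractionRecord(
    study_id="Smith2015",    # hypothetical study
    year=2015,
    design="randomized controlled trial",
    sample_size=120,
    outcome_measure="parent-rated eczema symptom score",
    effect_estimate=-0.35,
    risk_of_bias="some concerns",
)
print(asdict(record))
```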

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the eczema example, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
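To show what the quantitative approach looks like in practice, here is a minimal fixed-effect, inverse-variance meta-analysis with a simple heterogeneity statistic (the effect sizes and standard errors are invented; real reviews typically use dedicated software and usually also fit a random-effects model):

```python
import math

# Hypothetical per-study effect sizes (e.g. standardized mean differences) and standard errors.
effects = [-0.40, -0.15, -0.30, 0.05]
ses = [0.20, 0.15, 0.25, 0.30]

weights = [1 / se ** 2 for se in ses]                     # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q and the I^2 statistic describe between-study heterogeneity.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled effect = {pooled:.2f} (SE {pooled_se:.2f}), I^2 = {i_squared:.0f}%")
```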

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

Other interesting articles

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about systematic reviews

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An  annotated bibliography is a list of  source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a  paper .  

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.



How to write a systematic literature review [9 steps]


What is a systematic literature review?

Where are systematic literature reviews used, what types of systematic literature reviews are there, how to write a systematic literature review, 1. decide on your team, 2. formulate your question, 3. plan your research protocol, 4. search for the literature, 5. screen the literature, 6. assess the quality of the studies, 7. extract the data, 8. analyze the results, 9. interpret and present the results, registering your systematic literature review, frequently asked questions about writing a systematic literature review, related articles.

A systematic literature review is a summary, analysis, and evaluation of all the existing research on a well-formulated and specific question.

Put simply, a systematic review is a study of studies that is popular in medical and healthcare research. In this guide, we will cover:

  • the definition of a systematic literature review
  • the purpose of a systematic literature review
  • the different types of systematic reviews
  • how to write a systematic literature review

➡️ Visit our guide to the best research databases for medicine and health to find resources for your systematic review.

Where are systematic literature reviews used?

Systematic literature reviews can be utilized in various contexts, but they’re often relied on in clinical or healthcare settings.

Medical professionals read systematic literature reviews to stay up-to-date in their field, and granting agencies sometimes need them to make sure there’s justification for further research in an area. They can even be used as the starting point for developing clinical practice guidelines.

What types of systematic literature reviews are there?

A classic systematic literature review can take different approaches:

  • Effectiveness reviews assess the extent to which a medical intervention or therapy achieves its intended effect. They’re the most common type of systematic literature review.
  • Diagnostic test accuracy reviews produce a summary of diagnostic test performance so that their accuracy can be determined before use by healthcare professionals.
  • Experiential (qualitative) reviews analyze human experiences in a cultural or social context. They can be used to assess the effectiveness of an intervention from a person-centric perspective.
  • Costs/economics evaluation reviews look at the cost implications of an intervention or procedure, to assess the resources needed to implement it.
  • Etiology/risk reviews usually try to determine to what degree a relationship exists between an exposure and a health outcome. This can be used to better inform healthcare planning and resource allocation.
  • Psychometric reviews assess the quality of health measurement tools so that the best instrument can be selected for use.
  • Prevalence/incidence reviews measure both the proportion of a population who have a disease, and how often the disease occurs.
  • Prognostic reviews examine the course of a disease and its potential outcomes.
  • Expert opinion/policy reviews are based around expert narrative or policy. They’re often used to complement, or in the absence of, quantitative data.
  • Methodology systematic reviews can be carried out to analyze any methodological issues in the design, conduct, or review of research studies.

How to write a systematic literature review

Writing a systematic literature review can feel like an overwhelming undertaking. After all, they can often take 6 to 18 months to complete. Below we’ve prepared a step-by-step guide on how to write a systematic literature review.

  • Decide on your team.
  • Formulate your question.
  • Plan your research protocol.
  • Search for the literature.
  • Screen the literature.
  • Assess the quality of the studies.
  • Extract the data.
  • Analyze the results.
  • Interpret and present the results.

1. Decide on your team

When carrying out a systematic literature review, you should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.

You may also need to team up with a librarian to help with the search, literature screeners, a statistician to analyze the data, and the relevant subject experts.

2. Formulate your question

Define your answerable question. Then ask yourself, “has someone written a systematic literature review on my question already?” If so, yours may not be needed. A librarian can help you answer this.

You should formulate a “well-built clinical question.” This is the process of generating a good search question. To do this, run through PICO:

  • Patient or Population or Problem/Disease : who or what is the question about? Are there factors about them (e.g. age, race) that could be relevant to the question you’re trying to answer?
  • Intervention : which main intervention or treatment are you considering for assessment?
  • Comparison(s) or Control : is there an alternative intervention or treatment you’re considering? Your systematic literature review doesn’t have to contain a comparison, but you’ll want to stipulate at this stage, either way.
  • Outcome(s) : what are you trying to measure or achieve? What’s the wider goal for the work you’ll be doing?

3. Plan your research protocol

Now you need a detailed strategy for how you’re going to search for and evaluate the studies relating to your question.

The protocol for your systematic literature review should include:

  • the objectives of your project
  • the specific methods and processes that you’ll use
  • the eligibility criteria of the individual studies
  • how you plan to extract data from individual studies
  • which analyses you’re going to carry out

For a full guide on how to systematically develop your protocol, take a look at the PRISMA checklist . PRISMA has been designed primarily to improve the reporting of systematic literature reviews and meta-analyses.

4. Search for the literature

When writing a systematic literature review, your goal is to find all of the relevant studies relating to your question, so you need to search thoroughly .

This is where your librarian will come in handy again. They should be able to help you formulate a detailed search strategy, and point you to all of the best databases for your topic.

➡️ Read more on how to efficiently search research databases .

The places to consider in your search are electronic scientific databases (the most popular are PubMed , MEDLINE , and Embase ), controlled clinical trial registers, non-English literature, raw data from published trials, references listed in primary sources, and unpublished sources known to experts in the field.

➡️ Take a look at our list of the top academic research databases .


Don’t miss out on “gray literature” sources: those sources outside of the usual academic publishing environment. They include:

  • non-peer-reviewed journals
  • pharmaceutical industry files
  • conference proceedings
  • pharmaceutical company websites
  • internal reports

Gray literature sources are more likely to contain negative conclusions, so you’ll improve the reliability of your findings by including them. You should document details such as:

  • The databases you search and which years they cover
  • The dates you first run the searches, and when they’re updated
  • Which strategies you use, including search terms
  • The numbers of results obtained

➡️ Read more about gray literature .

5. Screen the literature

This should be performed by your two reviewers, using the criteria documented in your research protocol. The screening is done in two phases:

  • Pre-screening of all titles and abstracts, and selecting those appropriate
  • Screening of the full-text articles of the selected studies

Make sure reviewers keep a log of which studies they exclude, with reasons why.

➡️ Visit our guide on what is an abstract?

6. Assess the quality of the studies

Your reviewers should evaluate the methodological quality of your chosen full-text articles. Make an assessment checklist that closely aligns with your research protocol, including a consistent scoring system, calculations of the quality of each study, and sensitivity analysis (a small scoring sketch follows the list of questions below).

The kinds of questions you'll come up with are:

  • Were the participants really randomly allocated to their groups?
  • Were the groups similar in terms of prognostic factors?
  • Could the conclusions of the study have been influenced by bias?
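One hypothetical way to turn such a checklist into a consistent score is to record each item as met or not met and sum the results; the items and threshold below are illustrative assumptions, not a validated quality instrument:

```python
# Illustrative checklist answers for one study (True = criterion met).
checklist = {
    "participants randomly allocated to groups": True,
    "groups similar on prognostic factors": True,
    "outcome assessors blinded": False,
    "conclusions unlikely to be influenced by bias": True,
}

score = sum(checklist.values())
print(f"quality score: {score}/{len(checklist)}")

# Studies below an agreed threshold could be flagged for sensitivity analysis.
if score < len(checklist) - 1:
    print("flag for sensitivity analysis")
```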

7. Extract the data

Every step of the data extraction must be documented for transparency and replicability. Create a data extraction form and set your reviewers to work extracting data from the qualified studies.

Here’s a free detailed template for recording data extraction, from Dalhousie University. It should be adapted to your specific question.

8. Analyze the results

Establish a standard measure of outcome that can be applied to each study on the basis of its effect size (a short computational sketch follows the list below).

Measures of outcome for studies with:

  • Binary outcomes (e.g. cured/not cured) are odds ratio and risk ratio
  • Continuous outcomes (e.g. blood pressure) are means, difference in means, and standardized difference in means
  • Survival or time-to-event data are hazard ratios
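Here is a minimal sketch of how these outcome measures are computed for a single study (all counts and summary statistics below are invented):

```python
# Hypothetical 2x2 table for one trial with a binary outcome: events / totals per arm.
events_t, n_t = 12, 60      # treatment arm
events_c, n_c = 20, 58      # control arm

risk_ratio = (events_t / n_t) / (events_c / n_c)
odds_ratio = (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))

# Standardized mean difference (Cohen's d) for a continuous outcome, from invented summary stats.
mean_t, sd_t = 4.1, 1.9
mean_c, sd_c = 5.0, 2.1
pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)) ** 0.5
smd = (mean_t - mean_c) / pooled_sd

print(f"RR = {risk_ratio:.2f}, OR = {odds_ratio:.2f}, SMD = {smd:.2f}")
```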

Design a table and populate it with your data results. Draw this out into a forest plot , which provides a simple visual representation of variation between the studies.

Then analyze the data for issues. These can include heterogeneity, which shows up in a forest plot when the confidence intervals of different studies fail to overlap. Again, record any excluded studies here for reference.
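A forest plot can be drawn with ordinary plotting libraries once the per-study estimates and confidence intervals are tabulated. A minimal matplotlib sketch (study names, effects, and intervals are invented):

```python
import matplotlib.pyplot as plt

# Hypothetical study-level effect estimates with 95% confidence intervals.
studies = ["Study A", "Study B", "Study C", "Study D"]
effects = [-0.40, -0.15, -0.30, 0.05]
lower = [-0.79, -0.44, -0.79, -0.54]
upper = [-0.01, 0.14, 0.19, 0.64]

fig, ax = plt.subplots(figsize=(5, 3))
y = list(range(len(studies)))
xerr = [[e - lo for e, lo in zip(effects, lower)],   # distance to lower bound
        [hi - e for e, hi in zip(effects, upper)]]   # distance to upper bound
ax.errorbar(effects, y, xerr=xerr, fmt="s", color="black", capsize=3)
ax.axvline(0, linestyle="--", color="gray")          # line of no effect
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.invert_yaxis()
ax.set_xlabel("Effect size (e.g. standardized mean difference)")
plt.tight_layout()
plt.show()
```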

9. Interpret and present the results

Consider different factors when interpreting your results. These include limitations, strength of evidence, biases, applicability, economic effects, and implications for future practice or research.

Apply appropriate grading of your evidence and consider the strength of your recommendations.

It’s best to formulate a detailed plan for how you’ll present your systematic review results. Take a look at these guidelines for interpreting results from Cochrane.

Registering your systematic literature review

Before writing your systematic literature review, you can register it with OSF for additional guidance along the way. You could also register your completed work with PROSPERO .



Annual Review of Psychology

Volume 70, 2019, Review Article

How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

  • Andy P. Siddaway 1 , Alex M. Wood 2 , and Larry V. Hedges 3
  • Affiliations: 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected] 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected]
  • Vol. 70:747-770 (Volume publication date January 2019) https://doi.org/10.1146/annurev-psych-010418-102803
  • First published as a Review in Advance on August 08, 2018
  • Copyright © 2019 by Annual Reviews. All rights reserved

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.




An overview of methodological approaches in systematic reviews

Prabhakar Veginadu

1 Department of Rural Clinical Sciences, La Trobe Rural Health School, La Trobe University, Bendigo Victoria, Australia

Hanny Calache

2 Lincoln International Institute for Rural Health, University of Lincoln, Brayford Pool, Lincoln UK

Akshaya Pandian

3 Department of Orthodontics, Saveetha Dental College, Chennai Tamil Nadu, India

Mohd Masood

Associated data.

APPENDIX B: List of excluded studies with detailed reasons for exclusion

APPENDIX C: Quality assessment of included reviews using AMSTAR 2

The aim of this overview is to identify and collate evidence from existing published systematic review (SR) articles evaluating various methodological approaches used at each stage of an SR.

The search was conducted in five electronic databases from inception to November 2020 and updated in February 2022: MEDLINE, Embase, Web of Science Core Collection, Cochrane Database of Systematic Reviews, and APA PsycINFO. Title and abstract screening were performed in two stages by one reviewer, supported by a second reviewer. Full‐text screening, data extraction, and quality appraisal were performed by two reviewers independently. The quality of the included SRs was assessed using the AMSTAR 2 checklist.

The search retrieved 41,556 unique citations, of which 9 SRs were deemed eligible for inclusion in the final synthesis. Included SRs evaluated 24 unique methodological approaches used for defining the review scope and eligibility, literature search, screening, data extraction, and quality appraisal in the SR process. Limited evidence supports the following: (a) searching multiple resources (electronic databases, handsearching, and reference lists) to identify relevant literature; (b) excluding non‐English, gray, and unpublished literature; and (c) use of text‐mining approaches during title and abstract screening.

The overview identified limited SR‐level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process, as well as some methodological modifications currently used in expedited SRs. Overall, findings of this overview highlight the dearth of published SRs focused on SR methodologies and this warrants future work in this area.

1. INTRODUCTION

Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the “gold standard” of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search, appraise, and synthesize the available evidence. 3 Several guidelines, developed by various organizations, are available for the conduct of an SR; 4 , 5 , 6 , 7 among these, Cochrane is considered a pioneer in developing rigorous and highly structured methodology for the conduct of SRs. 8 The guidelines developed by these organizations outline seven fundamental steps required in SR process: defining the scope of the review and eligibility criteria, literature searching and retrieval, selecting eligible studies, extracting relevant data, assessing risk of bias (RoB) in included studies, synthesizing results, and assessing certainty of evidence (CoE) and presenting findings. 4 , 5 , 6 , 7

The methodological rigor involved in an SR can require a significant amount of time and resource, which may not always be available. 9 As a result, there has been a proliferation of modifications made to the traditional SR process, such as refining, shortening, bypassing, or omitting one or more steps, 10 , 11 for example, limits on the number and type of databases searched, limits on publication date, language, and types of studies included, and limiting to one reviewer for screening and selection of studies, as opposed to two or more reviewers. 10 , 11 These methodological modifications are made to accommodate the needs of and resource constraints of the reviewers and stakeholders (e.g., organizations, policymakers, health care professionals, and other knowledge users). While such modifications are considered time and resource efficient, they may introduce bias in the review process reducing their usefulness. 5

Substantial research has been conducted examining various approaches used in the standardized SR methodology and their impact on the validity of SR results. There are a number of published reviews examining the approaches or modifications corresponding to single 12 , 13 or multiple steps 14 involved in an SR. However, there is yet to be a comprehensive summary of the SR‐level evidence for all the seven fundamental steps in an SR. Such a holistic evidence synthesis will provide an empirical basis to confirm the validity of current accepted practices in the conduct of SRs. Furthermore, sometimes there is a balance that needs to be achieved between the resource availability and the need to synthesize the evidence in the best way possible, given the constraints. This evidence base will also inform the choice of modifications to be made to the SR methods, as well as the potential impact of these modifications on the SR results. An overview is considered the choice of approach for summarizing existing evidence on a broad topic, directing the reader to evidence, or highlighting the gaps in evidence, where the evidence is derived exclusively from SRs. 15 Therefore, for this review, an overview approach was used to (a) identify and collate evidence from existing published SR articles evaluating various methodological approaches employed in each of the seven fundamental steps of an SR and (b) highlight both the gaps in the current research and the potential areas for future research on the methods employed in SRs.

2. METHODS

An a priori protocol was developed for this overview but was not registered with the International Prospective Register of Systematic Reviews (PROSPERO), as the review was primarily methodological in nature and did not meet PROSPERO eligibility criteria for registration. The protocol is available from the corresponding author upon reasonable request. This overview was conducted based on the guidelines for the conduct of overviews as outlined in The Cochrane Handbook. 15 Reporting followed the Preferred Reporting Items for Systematic reviews and Meta‐analyses (PRISMA) statement. 3

2.1. Eligibility criteria

Only published SRs, with or without associated MA, were included in this overview. We adopted the defining characteristics of SRs from The Cochrane Handbook. 5 According to The Cochrane Handbook, a review was considered systematic if it satisfied the following criteria: (a) clearly states the objectives and eligibility criteria for study inclusion; (b) provides reproducible methodology; (c) includes a systematic search to identify all eligible studies; (d) reports assessment of validity of findings of included studies (e.g., RoB assessment of the included studies); (e) systematically presents all the characteristics or findings of the included studies. 5 Reviews that did not meet all of the above criteria were not considered SRs for this study and were excluded. MA‐only articles were included if it was mentioned that the MA was based on an SR.

SRs and/or MA of primary studies evaluating methodological approaches used in defining review scope and study eligibility, literature search, study selection, data extraction, RoB assessment, data synthesis, and CoE assessment and reporting were included. The methodological approaches examined in these SRs and/or MA can also be related to the substeps or elements of these steps; for example, applying limits on date or type of publication are the elements of literature search. Included SRs examined or compared various aspects of a method or methods, and the associated factors, including but not limited to: precision or effectiveness; accuracy or reliability; impact on the SR and/or MA results; reproducibility of an SR steps or bias occurred; time and/or resource efficiency. SRs assessing the methodological quality of SRs (e.g., adherence to reporting guidelines), evaluating techniques for building search strategies or the use of specific database filters (e.g., use of Boolean operators or search filters for randomized controlled trials), examining various tools used for RoB or CoE assessment (e.g., ROBINS vs. Cochrane RoB tool), or evaluating statistical techniques used in meta‐analyses were excluded. 14

2.2. Search

The search for published SRs was performed on the following scientific databases, initially from inception to the third week of November 2020 and updated in the last week of February 2022: MEDLINE (via Ovid), Embase (via Ovid), Web of Science Core Collection, Cochrane Database of Systematic Reviews, and American Psychological Association (APA) PsycINFO. The search was restricted to English language publications. In line with the objectives of this study, study design filters within databases were used, where available, to restrict the search to SRs and MA. The reference lists of included SRs were also searched for potentially relevant publications.

The search terms included keywords, truncations, and subject headings for the key concepts in the review question: SRs and/or MA, methods, and evaluation. Some of the terms were adopted from the search strategy used in a previous review by Robson et al., which reviewed primary studies on the methodological approaches used in the study selection, data extraction, and quality appraisal steps of the SR process. 14 Individual search strategies were developed for the respective databases by combining the search terms using appropriate proximity and Boolean operators, along with the related subject headings, in order to identify SRs and/or MA. 16, 17 A senior librarian was consulted in the design of the search terms and strategy. Appendix A presents the detailed search strategies for all five databases.

2.3. Study selection and data extraction

Title and abstract screening of references were performed in three steps. First, one reviewer (PV) screened all the titles and excluded obviously irrelevant citations, for example, articles on topics not related to SRs, non‐SR publications (such as randomized controlled trials, observational studies, scoping reviews, etc.). Next, from the remaining citations, a random sample of 200 titles and abstracts were screened against the predefined eligibility criteria by two reviewers (PV and MM), independently, in duplicate. Discrepancies were discussed and resolved by consensus. This step ensured that the responses of the two reviewers were calibrated for consistency in the application of the eligibility criteria in the screening process. Finally, all the remaining titles and abstracts were reviewed by a single “calibrated” reviewer (PV) to identify potential full‐text records. Full‐text screening was performed by at least two authors independently (PV screened all the records, and duplicate assessment was conducted by MM, HC, or MG), with discrepancies resolved via discussions or by consulting a third reviewer.

Data related to review characteristics, results, key findings, and conclusions were extracted by at least two reviewers independently (PV performed data extraction for all the reviews and duplicate extraction was performed by AP, HC, or MG).

2.4. Quality assessment of included reviews

The quality assessment of the included SRs was performed using the AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews). The tool consists of a 16‐item checklist addressing critical and noncritical domains. 18 For the purpose of this study, the domain related to MA was reclassified from critical to noncritical, as SRs with and without MA were included. The other six critical domains were used according to the tool guidelines. 18 Two reviewers (PV and AP) independently responded to each of the 16 items in the checklist with either “yes,” “partial yes,” or “no.” Based on the interpretations of the critical and noncritical domains, the overall quality of the review was rated as high, moderate, low, or critically low. 18 Disagreements were resolved through discussion or by consulting a third reviewer.
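To make the overall rating step concrete, the following is a minimal sketch of the overall-confidence logic in the published AMSTAR 2 guidance (critically low for more than one critical flaw, low for one critical flaw, moderate for more than one noncritical weakness, high otherwise); the function name and example counts are illustrative and not part of the tool itself.

```python
# A minimal sketch of the AMSTAR 2 overall confidence rating rule.
# The exact wording should be checked against the published guidance
# (Shea et al., 2017); the counts passed in are illustrative only.

def amstar2_overall_rating(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """Return the overall confidence rating for a review."""
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    # no critical flaws from here on
    if noncritical_weaknesses > 1:
        return "moderate"
    return "high"


if __name__ == "__main__":
    # e.g., a review with no critical flaws and one noncritical weakness
    print(amstar2_overall_rating(critical_flaws=0, noncritical_weaknesses=1))  # high
```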

2.5. Data synthesis

To provide an understandable summary of existing evidence syntheses, characteristics of the methods evaluated in the included SRs were examined and key findings were categorized and presented based on the corresponding step in the SR process. The categories of key elements within each step were discussed and agreed by the authors. Results of the included reviews were tabulated and summarized descriptively, along with a discussion on any overlap in the primary studies. 15 No quantitative analyses of the data were performed.

From 41,556 unique citations identified through literature search, 50 full‐text records were reviewed, and nine systematic reviews 14 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 were deemed eligible for inclusion. The flow of studies through the screening process is presented in Figure  1 . A list of excluded studies with reasons can be found in Appendix B .

Figure 1. Study selection flowchart

3.1. Characteristics of included reviews

Table 1 summarizes the characteristics of the included SRs. The majority of the included reviews (six of nine) were published after 2010. 14, 22, 23, 24, 25, 26 Four of the nine included SRs were Cochrane reviews. 20, 21, 22, 23 The number of databases searched in the reviews ranged from 2 to 14; two reviews searched gray literature sources, 24, 25 and seven reviews included a supplementary search strategy to identify relevant literature. 14, 19, 20, 21, 22, 23, 26 Three of the included SRs (all Cochrane reviews) included an integrated MA. 20, 21, 23

Characteristics of included studies

  • Crumley, 2005. Search: last searched 2004; seven databases; supplementary searches of four handsearched journals, reference lists, and contacting authors. Design: SR; n = 64. Topic/subject area: RCTs and CCTs; not specified. Objectives: to identify and quantitatively review studies comparing two or more different resources (e.g., databases, Internet, handsearching) used to identify RCTs and CCTs for systematic reviews. Authors' comments on study quality: most of the studies adequately described reproducible search methods and expected search yield; poor quality was mainly due to lack of rigor in reporting selection methodology; the majority of studies did not indicate the number of people involved in independently screening the searches or applying eligibility criteria to identify potentially relevant studies.
  • Hopewell, 2007. Search: last searched 2002; eight databases; supplementary: selected journals and published abstracts handsearched, and contacting authors. Design: SR and MA; n = 34 (34 in quantitative analysis). Topic/subject area: RCTs; health care. Objectives: to review systematically empirical studies that compared the results of handsearching with the results of searching one or more electronic databases to identify reports of randomized trials. Authors' comments on study quality: the electronic search was designed and carried out appropriately in the majority of the studies, while the appropriateness of handsearching was unclear in half the studies because of limited information; the screening methods used in both groups were comparable in most of the studies.
  • Hopewell, 2007. Search: last searched 2005; two databases; supplementary: selected journals and published abstracts handsearched, reference lists, citations, and contacting authors. Design: SR and MA; n = 5 (5 in quantitative analysis). Topic/subject area: RCTs; health care. Objectives: to review systematically research studies that investigated the impact of gray literature in meta‐analyses of randomized trials of health care interventions. Authors' comments on study quality: in the majority of the studies, electronic searches were designed and conducted appropriately, and the selection of studies for eligibility was similar for handsearching and database searching; insufficient data for most studies to assess the appropriateness of handsearching and investigator agreement on the eligibility of the trial reports.
  • Horsley, 2011. Search: last searched 2008; three databases; supplementary: reference lists, citations, and contacting authors. Design: SR; n = 12. Topic/subject area: any topic or study area. Objectives: to investigate the effectiveness of checking reference lists for the identification of additional, relevant studies for systematic reviews, with effectiveness defined as the proportion of relevant studies identified by review authors solely by checking reference lists. Authors' comments on study quality: interpretability and generalizability of included studies was difficult; extensive heterogeneity among the studies in the number and type of databases used; lack of control in the majority of the studies related to the quality and comprehensiveness of searching.
  • Morrison, 2012. Search: last searched 2011; six databases and gray literature. Design: SR; n = 5. Topic/subject area: RCTs; conventional medicine. Objectives: to examine the impact of English language restriction on systematic review‐based meta‐analyses. Authors' comments on study quality: the included studies were assessed to have good reporting quality and validity of results; methodological issues were mainly noted in the areas of sample power calculation and distribution of confounders.
  • Robson, 2019. Search: last searched 2016; three databases; supplementary: reference lists and contacting authors. Design: SR; n = 37. Topic/subject area: N/R. Objectives: to identify and summarize studies assessing methodologies for study selection, data abstraction, or quality appraisal in systematic reviews. Authors' comments on study quality: the quality of the included studies was generally low; only one study was assessed as having low RoB across all four domains; the majority of studies were assessed as having unclear RoB across one or more domains.
  • Schmucker, 2017. Search: last searched 2016; four databases; supplementary: reference lists. Design: SR; n = 10. Topic/subject area: study data; medicine. Objectives: to assess whether the inclusion of data that were not published at all and/or published only in the gray literature influences pooled effect estimates in meta‐analyses and leads to different interpretation. Authors' comments on study quality: the majority of the included studies could not be judged on the adequacy of matching or adjusting for confounders of the gray/unpublished data in comparison to published data; generalizability of results was low or unclear in four research projects.
  • Morissette, 2011. Search: last searched 2009; five databases; supplementary: reference lists and contacting authors. Design: SR and MA; n = 6 (5 included in quantitative analysis). Topic/subject area: N/R. Objectives: to determine whether blinded versus unblinded assessments of risk of bias result in similar or systematically different assessments in studies included in a systematic review. Authors' comments on study quality: four studies had unclear risk of bias, while two studies had high risk of bias.
  • O'Mara‐Eves, 2015. Search: last searched 2013; 14 databases and gray literature. Design: SR; n = 44. Topic/subject area: N/R. Objectives: to gather and present the available research evidence on existing methods for text mining related to the title and abstract screening stage in a systematic review, including the performance metrics used to evaluate these technologies. Authors' comments on study quality: quality was appraised on two criteria (sampling of test cases and adequacy of methods description for replication); no study was excluded based on quality (authors were contacted).

SR = systematic review; MA = meta‐analysis; RCT = randomized controlled trial; CCT = controlled clinical trial; N/R = not reported.

The included SRs evaluated 24 unique methodological approaches (26 in total) used across five steps in the SR process; 8 SRs evaluated 6 approaches, 19, 20, 21, 22, 23, 24, 25, 26 while 1 review evaluated 18 approaches. 14 Exclusion of gray or unpublished literature 21, 26 and blinding of reviewers for RoB assessment 14, 23 were evaluated in two reviews each. Included SRs evaluated methods used in five different steps in the SR process, including methods used in defining the scope of review (n = 3), literature search (n = 3), study selection (n = 2), data extraction (n = 1), and RoB assessment (n = 2) (Table 2).

Summary of findings from reviews evaluating systematic review methods

  • Excluding study data based on publication status. Hopewell, 2007. Method assessed: gray vs. published literature. Outcomes: pooled effect estimate. Authors' conclusions: published trials are usually larger and show an overall greater treatment effect than gray trials; excluding trials reported in gray literature from SRs and MAs may exaggerate the results. Quality of review: moderate.
  • Excluding study data based on publication status. Schmucker, 2017. Method assessed: gray and/or unpublished vs. published literature. Outcomes: pooled effect estimate (primary); impact on interpretation of MA (secondary). Authors' conclusions: excluding unpublished trials had no or only a small effect on the pooled estimates of treatment effects; insufficient evidence to conclude the impact of including unpublished or gray study data on MA conclusions. Quality of review: moderate.
  • Excluding study data based on language of publication. Morrison, 2012. Method assessed: English language vs. non‐English language publications. Outcomes: bias in summary treatment effects (primary); number of included studies and patients, methodological quality, and statistical heterogeneity (secondary). Authors' conclusions: no evidence of a systematic bias from the use of English language restrictions in systematic review‐based meta‐analyses in conventional medicine; conflicting results on the methodological and reporting quality of English and non‐English language RCTs; further research required. Quality of review: low.
  • Resources searching. Crumley, 2005. Method assessed: searching two or more resources vs. resource‐specific searching. Outcomes: recall and precision. Authors' conclusions: multiple‐source comprehensive searches are necessary to identify all RCTs for a systematic review; for electronic databases, using the Cochrane HSS or a complex search strategy in consultation with a librarian is recommended. Quality of review: critically low.
  • Supplementary searching. Hopewell, 2007. Method assessed: handsearching only vs. searching one or more electronic database(s). Outcomes: number of identified randomized trials. Authors' conclusions: handsearching is important for identifying trial reports published in nonindexed journals for inclusion in systematic reviews of health care interventions; where time and resources are limited, the majority of full English‐language trial reports can be identified using a complex search or the Cochrane HSS. Quality of review: moderate.
  • Supplementary searching. Horsley, 2011. Method assessed: checking reference lists (no comparison). Outcomes: additional yield of checking reference lists (primary); additional yield by publication type, study design, or both, and data pertaining to costs (secondary). Authors' conclusions: there is some evidence to support the use of checking reference lists to complement the literature search in systematic reviews. Quality of review: low.
  • Reviewer characteristics (study selection). Robson, 2019. Methods assessed: single vs. double reviewer screening; experienced vs. inexperienced reviewers for screening; screening by blinded vs. unblinded reviewers. Outcomes: accuracy, reliability, or efficiency of a method (primary); factors affecting accuracy or reliability of a method (secondary). Authors' conclusions: using two reviewers for screening is recommended, and if resources are limited, one reviewer can screen and the other can verify the list of excluded studies; screening must be performed by experienced reviewers; the authors do not recommend blinding of reviewers during screening, as the blinding process was time‐consuming and had little impact on the results of MA. Quality of review: low.
  • Use of technology for study selection. Robson, 2019. Methods assessed: use vs. nonuse of dual computer monitors for screening; use of Google Translate to translate non‐English citations to facilitate screening. Outcomes: accuracy, reliability, or efficiency of a method (primary); factors affecting accuracy or reliability of a method (secondary). Authors' conclusions: there are no significant differences in the time spent on abstract or full‐text screening with the use and nonuse of dual monitors; Google Translate can be used to screen German language citations. Quality of review: low.
  • Use of technology for study selection. O'Mara‐Eves, 2015. Method assessed: use of text mining for title and abstract screening. Outcomes: any evaluation concerning workload reduction. Authors' conclusions: text mining approaches can be used to reduce the number of studies to be screened, increase the rate of screening, improve the workflow with screening prioritization, and replace the second reviewer; the evaluated approaches reported saving a workload of between 30% and 70%. Quality of review: critically low.
  • Order of screening. Robson, 2019. Method assessed: title‐first screening vs. simultaneous title‐and‐abstract screening. Outcomes: accuracy, reliability, or efficiency of a method (primary); factors affecting accuracy or reliability of a method (secondary). Authors' conclusions: title‐first screening showed no substantial gain in time when compared to simultaneous title and abstract screening. Quality of review: low.
  • Reviewer characteristics (data extraction). Robson, 2019. Methods assessed: single vs. double reviewer data extraction; experienced vs. inexperienced reviewers for data extraction; data extraction by blinded vs. unblinded reviewers. Outcomes: accuracy, reliability, or efficiency of a method (primary); factors affecting accuracy or reliability of a method (secondary). Authors' conclusions: use two reviewers for data extraction, or, if resources preclude this, single reviewer data extraction followed by verification of outcome data by a second reviewer (where statistical analysis is planned); experienced reviewers must be used for extracting continuous outcomes data; the authors do not recommend blinding of reviewers during data extraction, as it had no impact on the results of MA. Quality of review: low.
  • Use of technology for data extraction. Robson, 2019. Methods assessed: use vs. nonuse of dual computer monitors for data extraction; data extraction by two English‐speaking reviewers using Google Translate vs. two reviewers fluent in the respective languages; computer‐assisted vs. double reviewer extraction of graphical data. Outcomes: accuracy, reliability, or efficiency of a method (primary); factors affecting accuracy or reliability of a method (secondary). Authors' conclusions: using two computer monitors may improve the efficiency of data extraction; Google Translate provides limited accuracy for data extraction; computer‐assisted programs can be used to extract graphical data. Quality of review: low.
  • Obtaining additional data (data extraction). Robson, 2019. Method assessed: contacting study authors for additional data. Authors' conclusions: contacting authors to obtain additional relevant data is recommended. Quality of review: low.
  • Reviewer characteristics (RoB assessment). Robson, 2019. Method assessed: quality appraisal by blinded vs. unblinded reviewers. Outcomes: accuracy, reliability, or efficiency of a method (primary); factors affecting accuracy or reliability of a method (secondary). Authors' conclusions: inconsistent results on RoB assessments performed by blinded and unblinded reviewers; blinding reviewers for quality appraisal is not recommended. Quality of review: low.
  • Reviewer characteristics (RoB assessment). Morissette, 2011. Method assessed: RoB assessment by blinded vs. unblinded reviewers. Outcomes: mean difference and 95% confidence interval between RoB assessment scores (primary); qualitative level of agreement, mean RoB scores and measures of variance for the results of the RoB assessments, and inter‐rater reliability between blinded and unblinded reviewers (secondary). Authors' conclusions: findings related to the difference between blinded and unblinded RoB assessments are inconsistent across the studies; pooled effects show no differences between RoB assessments completed in a blinded or unblinded manner. Quality of review: moderate.
  • Reviewer characteristics (RoB assessment). Robson, 2019. Methods assessed: experienced vs. inexperienced reviewers for quality appraisal; use vs. nonuse of additional guidance for quality appraisal. Outcomes: accuracy, reliability, or efficiency of a method (primary); factors affecting accuracy or reliability of a method (secondary). Authors' conclusions: reviewers performing quality appraisal must be trained and the quality assessment tool pilot tested; providing guidance and decision rules for quality appraisal improved the inter‐rater reliability of RoB assessments. Quality of review: low.
  • Obtaining additional data (RoB assessment). Robson, 2019. Method assessed: contacting study authors for additional information, or using supplementary information available in the published trials, vs. no additional information for quality appraisal. Authors' conclusions: additional data related to study quality obtained by contacting study authors improved the quality assessment. Quality of review: low.
  • RoB assessment of qualitative studies. Robson, 2019. Method assessed: structured vs. unstructured appraisal of qualitative research studies. Authors' conclusions: use a structured tool if qualitative and quantitative study designs are included in the review; for qualitative reviews, either a structured or unstructured quality appraisal tool can be used. Quality of review: low.

There was some overlap in the primary studies evaluated in the included SRs on the same topics: Schmucker et al. 26 and Hopewell et al. 21 (n = 4), Hopewell et al. 20 and Crumley et al. 19 (n = 30), and Robson et al. 14 and Morissette et al. 23 (n = 4). There were no conflicting results between any of the identified SRs on the same topic.

3.2. Methodological quality of included reviews

Overall, the quality of the included reviews was assessed as moderate at best (Table  2 ). The most common critical weakness in the reviews was failure to provide justification for excluding individual studies (four reviews). Detailed quality assessment is provided in Appendix C .

3.3. Evidence on systematic review methods

3.3.1. Methods for defining review scope and eligibility

Two SRs investigated the effect of excluding data obtained from gray or unpublished sources on the pooled effect estimates of MA. 21, 26 Hopewell et al. 21 reviewed five studies that compared the impact of gray literature on the results of a cohort of MA of RCTs in health care interventions. Gray literature was defined as information published in "print or electronic sources not controlled by commercial or academic publishers." Findings showed an overall greater treatment effect for published trials than for trials reported in gray literature. In a more recent review, Schmucker et al. 26 addressed similar objectives by investigating gray and unpublished data in medicine. In addition to gray literature, defined similarly to the previous review by Hopewell et al., the authors also evaluated unpublished data, defined as "supplemental unpublished data related to published trials, data obtained from the Food and Drug Administration or other regulatory websites or postmarketing analyses hidden from the public." The review found that, in the majority of the MA, excluding gray literature had little or no effect on the pooled effect estimates. The evidence was insufficient to conclude whether data from gray and unpublished literature had an impact on the conclusions of MA. 26

Morrison et al. 24 examined five studies measuring the effect of excluding non‐English language RCTs on the summary treatment effects of SR‐based MA in various fields of conventional medicine. Although none of the included studies reported a major difference in the treatment effect estimates between English‐only and non‐English‐inclusive MA, the review found inconsistent evidence regarding the methodological and reporting quality of English and non‐English trials. 24 As such, there may be a risk of introducing "language bias" when excluding non‐English language RCTs. The authors also noted that the number of non‐English trials varies across medical specialties, as does the impact of these trials on MA results. Based on these findings, Morrison et al. 24 conclude that literature searches should include non‐English studies when resources and time allow, to minimize the risk of introducing "language bias."

3.3.2. Methods for searching studies

Crumley et al. 19 analyzed recall (also referred to as "sensitivity" by some researchers; defined as the "percentage of relevant studies identified by the search") and precision (defined as the "percentage of studies identified by the search that were relevant") when searching a single resource to identify randomized controlled trials and controlled clinical trials, as opposed to searching multiple resources. The studies included in their review frequently compared a MEDLINE‐only search with searches involving a combination of other resources. The review found low median recall estimates (median values between 24% and 92%) and very low median precision (median values between 0% and 49%) for most of the electronic databases when searched individually. 19 A between‐database comparison, based on the type of search strategy used, showed better recall and precision for complex and Cochrane Highly Sensitive Search Strategies (CHSSS). In conclusion, the authors emphasize that literature searches for trials in SRs must include multiple sources. 19
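As a concrete illustration of these two measures (using invented numbers, not data from Crumley et al.), the calculations amount to:

```python
# Illustrative recall ("sensitivity") and precision calculations for a
# single-database search, with made-up counts.

def recall(relevant_retrieved: int, relevant_total: int) -> float:
    """Percentage of all relevant studies that the search identified."""
    return 100 * relevant_retrieved / relevant_total

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Percentage of retrieved records that were relevant."""
    return 100 * relevant_retrieved / total_retrieved

# Hypothetical search: 1,200 records retrieved, 48 of the 80 known
# relevant trials among them.
print(f"recall    = {recall(48, 80):.1f}%")       # 60.0%
print(f"precision = {precision(48, 1200):.1f}%")  # 4.0%
```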

In an SR comparing handsearching and electronic database searching, Hopewell et al. 20 found that handsearching retrieved more relevant RCTs (retrieval rate of 92%−100%) than searching in a single electronic database (retrieval rates of 67% for PsycINFO/PsycLIT, 55% for MEDLINE, and 49% for Embase). The retrieval rates varied depending on the quality of handsearching, type of electronic search strategy used (e.g., simple, complex or CHSSS), and type of trial reports searched (e.g., full reports, conference abstracts, etc.). The authors concluded that handsearching was particularly important in identifying full trials published in nonindexed journals and in languages other than English, as well as those published as abstracts and letters. 20

The effectiveness of checking reference lists to retrieve additional relevant studies for an SR was investigated by Horsley et al. 22 The review reported that checking reference lists yielded 2.5%–40% more studies depending on the quality and comprehensiveness of the electronic search used. The authors conclude that there is some evidence, although from poor quality studies, to support use of checking reference lists to supplement database searching. 22

3.3.3. Methods for selecting studies

Three approaches relevant to reviewer characteristics, namely the number, experience, and blinding of reviewers involved in the screening process, were highlighted in an SR by Robson et al. 14 Based on the retrieved evidence, the authors recommended that two independent, experienced, and unblinded reviewers be involved in study selection. 14 The review authors also suggested a modified approach for when resources are limited, in which one reviewer screens and the other reviewer verifies the list of excluded studies. It should be noted, however, that this suggestion is likely based on the authors' opinion, as there was no evidence related to it from the studies included in the review.

Robson et al. 14 also reported two methods involving the use of technology for screening studies: using Google Translate to translate languages (for example, German language articles into English) to facilitate screening was considered viable, whereas using two computer monitors for screening did not increase screening efficiency. Title‐first screening was found to be more efficient than simultaneous screening of titles and abstracts, although the time gained was not substantial. Therefore, considering that search results are routinely exported as titles and abstracts, Robson et al. 14 recommend screening titles and abstracts simultaneously. However, the authors note that these conclusions were based on a very limited number of low‐quality studies (in most instances, one study per method). 14

3.3.4. Methods for data extraction

Robson et al. 14 examined three approaches to data extraction relevant to reviewer characteristics, namely the number, experience, and blinding of reviewers (similar to the study selection step). Although based on limited evidence from a small number of studies, the authors recommended the use of two experienced and unblinded reviewers for data extraction. The experience of the reviewers was suggested to be especially important when extracting continuous (or quantitative) outcome data. However, when resources are limited, data extraction by one reviewer with verification of the outcome data by a second reviewer was recommended.

As for the methods involving use of technology, Robson et al. 14 identified limited evidence on the use of two monitors to improve the data extraction efficiency and computer‐assisted programs for graphical data extraction. However, use of Google Translate for data extraction in non‐English articles was not considered to be viable. 14 In the same review, Robson et al. 14 identified evidence supporting contacting authors for obtaining additional relevant data.

3.3.5. Methods for RoB assessment

Two SRs examined the impact of blinding of reviewers for RoB assessments. 14 , 23 Morissette et al. 23 investigated the mean differences between the blinded and unblinded RoB assessment scores and found inconsistent differences among the included studies providing no definitive conclusions. Similar conclusions were drawn in a more recent review by Robson et al., 14 which included four studies on reviewer blinding for RoB assessment that completely overlapped with Morissette et al. 23

Use of experienced reviewers and provision of additional guidance for RoB assessment were examined by Robson et al. 14 The review concluded that providing reviewers with intensive training and guidance on assessing studies that report insufficient data improves RoB assessments. 14 Obtaining additional data related to quality assessment by contacting study authors was also found to help RoB assessments, although this was based on limited evidence. For qualitative or mixed‐methods reviews, Robson et al. 14 recommend the use of a structured RoB tool as opposed to an unstructured tool. No SRs were identified on the data synthesis or CoE assessment and reporting steps.

4. DISCUSSION

4.1. Summary of findings

Nine SRs examining 24 unique methods used across five steps in the SR process were identified in this overview. The collective evidence supports some current traditional and modified SR practices, while challenging other approaches. However, the quality of the included reviews was assessed to be moderate at best and in the majority of the included SRs, evidence related to the evaluated methods was obtained from very limited numbers of primary studies. As such, the interpretations from these SRs should be made cautiously.

The evidence gathered from the included SRs corroborates a few current SR approaches. 5 For example, it is important to search multiple resources to identify relevant trials (RCTs and/or CCTs). These resources should include a combination of electronic database searching, handsearching, and checking the reference lists of retrieved articles. 5 However, no SRs were identified that evaluated the impact of the number of electronic databases searched. A recent study by Halladay et al. 27 found that articles on therapeutic interventions retrieved by searching databases other than PubMed (including Embase) contributed only a small amount of information to the MA and had minimal impact on the MA results. The authors concluded that when resources are limited and a large number of studies is expected to be retrieved for the SR or MA, a PubMed‐only search can yield reliable results. 27

Findings from the included SRs also reiterate some methodological modifications currently employed to "expedite" the SR process. 10, 11 For example, excluding non‐English language trials and gray/unpublished trials from MA has been shown to have minimal or no impact on the results of MA. 24, 26 However, the efficiency of these SR methods, in terms of the time and resources used, has not been evaluated in the included SRs. 24, 26 Of the included SRs, only two focused on the aspect of efficiency 14, 25; O'Mara‐Eves et al. 25 report some evidence to support the use of text‐mining approaches for title and abstract screening in order to increase the rate of screening. Moreover, only one included SR 14 considered primary studies that evaluated the reliability (inter‐ or intra‐reviewer consistency) and accuracy (validity when compared against a "gold standard" method) of SR methods. This can be attributed to the limited number of primary studies that evaluated these outcomes. 14 The lack of outcome measures related to reliability, accuracy, and efficiency precludes definitive recommendations on the use of these methods and modifications, and future research should focus on these outcomes.

Some evaluated methods may be relevant to multiple steps; for example, exclusions based on publication status (gray/unpublished literature) and language of publication (non‐English language studies) can be outlined in the a priori eligibility criteria or can be incorporated as search limits in the search strategy. SRs included in this overview focused on the effect of study exclusions on pooled treatment effect estimates or MA conclusions. Excluding studies from the search results, after conducting a comprehensive search, based on different eligibility criteria may yield different results when compared to the results obtained when limiting the search itself. 28 Further studies are required to examine this aspect.

Although we acknowledge the lack of standardized quality assessment tools for methodological study designs, we adhered to the Cochrane criteria for identifying SRs in this overview. This was done to ensure consistency in the quality of the included evidence. As a result, we excluded three reviews that did not provide any form of discussion on the quality of the included studies. The methods investigated in these reviews concern supplementary search, 29 data extraction, 12 and screening. 13 However, methods reported in two of these three reviews, by Mathes et al. 12 and Waffenschmidt et al., 13 have also been examined in the SR by Robson et al., 14 which was included in this overview; in most instances (with the exception of one study included in Mathes et al. 12 and Waffenschmidt et al. 13 each), the studies examined in these excluded reviews overlapped with those in the SR by Robson et al. 14

One of the key gaps in knowledge observed in this overview was the dearth of SRs on the methods used in the data synthesis component of SRs. Narrative and quantitative syntheses are the two most commonly used approaches for synthesizing data in evidence synthesis. 5 There are some published studies on the proposed indications and implications of these two approaches. 30, 31 These studies found that both data synthesis methods produced comparable results and have their own advantages, suggesting that the choice of method should be based on the purpose of the review. 31 With an increasing number of "expedited" SR approaches (so‐called "rapid reviews") avoiding MA, 10, 11 further research is warranted in this area to determine the impact of the type of data synthesis on the results of the SR.

4.2. Implications for future research

The findings of this overview highlight several areas of paucity in primary research and evidence synthesis on SR methods. First, no SRs were identified on the methods used in two important components of the SR process: data synthesis and CoE assessment and reporting. Among the included SRs, a limited number of evaluation studies were identified for several methods, indicating that further research is required to corroborate many of the methods recommended in current SR guidelines. 4, 5, 6, 7 Second, some SRs evaluated the impact of methods on the results of quantitative synthesis and MA conclusions; future research should also focus on the interpretation of SR results. 28, 32 Finally, most of the included SRs were conducted on specific topics related to the field of health care, limiting the generalizability of the findings to other areas. It is important that future research evaluating evidence syntheses broaden its objectives and include studies on different topics within the field of health care.

4.3. Strengths and limitations

To our knowledge, this is the first overview summarizing current evidence from SRs and MA on the different methodological approaches used in several fundamental steps of SR conduct. The overview methodology followed well‐established guidelines and applied strict criteria for the inclusion of SRs.

There are several limitations related to the nature of the included reviews. Evidence for most of the methods investigated in the included reviews was derived from a limited number of primary studies. Also, the majority of the included SRs may be considered outdated, as they were published (or last updated) more than 5 years ago 33; only three of the nine SRs were published in the last 5 years. 14, 25, 26 Therefore, important recent evidence related to these topics may not have been included. A substantial number of the included SRs were conducted in the field of health, which may limit the generalizability of the findings. Some method evaluations in the included SRs focused only on quantitative analysis components and MA conclusions; as such, the applicability of these findings to SRs more broadly remains unclear. 28 Considering the methodological nature of our overview, limiting the inclusion of SRs according to the Cochrane criteria might have resulted in missing relevant evidence from reviews without a quality assessment component. 12, 13, 29 Although the included SRs performed some form of quality appraisal of their included studies, most did not use a standardized RoB tool, which may affect the confidence in their conclusions. Due to the type of outcome measures used for the method evaluations in the primary studies and the included SRs, some of the identified methods have not been validated against a reference standard.

Some limitations in the overview process must be noted. While our literature search was exhaustive, covering five bibliographic databases and a supplementary search of reference lists, no gray literature sources or other evidence resources were searched. The search was also conducted primarily in health databases, which might have resulted in missing SRs published in other fields, and only English language SRs were included for feasibility. As the literature search retrieved a large number of citations (i.e., 41,556), title and abstract screening was performed by a single reviewer, calibrated for consistency by a second reviewer, owing to time and resource limitations. These choices might have resulted in some errors when retrieving and selecting relevant SRs. The SR methods were grouped based on the key elements of each recommended SR step, as agreed by the authors. This categorization pertains to the identified set of methods and should be considered subjective.

5. CONCLUSIONS

This overview identified limited SR‐level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process. Limited evidence was also identified on some methodological modifications currently used to expedite the SR process. Overall, findings highlight the dearth of SRs on SR methodologies, warranting further work to confirm several current recommendations on conventional and expedited SR processes.

CONFLICT OF INTEREST

The authors declare no conflicts of interest.

Supporting information

APPENDIX A: Detailed search strategies

ACKNOWLEDGMENTS

The first author is supported by a La Trobe University Full Fee Research Scholarship and a Graduate Research Scholarship.

Open Access Funding provided by La Trobe University.

Veginadu P, Calache H, Gussy M, Pandian A, Masood M. An overview of methodological approaches in systematic reviews. J Evid Based Med. 2022;15:39–54. doi:10.1111/jebm.12468

How to Write a Systematic Review Dissertation: With Examples

Writing a systematic review dissertation isn’t easy because you must follow a thorough and accurate scientific process. You must be an expert in research methodology to synthesise studies. In this article, I will provide a step-by-step approach to writing a top-notch systematic review dissertation.



As an undergraduate or Master's student, you are allowed to choose a systematic review for your dissertation. As a PhD student, you can use a systematic review methodology in the second chapter (the literature review) of your dissertation. A systematic review is considered the highest level of empirical evidence, especially in clinical sciences like nursing and medicine. When new practice guidelines, services, or products are being developed, systematic reviews on that topic or idea are searched and synthesised first.

Factors to Consider When Writing a Systematic Review Dissertation

The nature of your research topic or research question.

Some research topics or questions strictly conform to qualitative or quantitative methods. For example, if you're exploring lived experiences, attitudes, perceptions, and meaning-making in a given population, you'll need qualitative methods. You will require quantitative methods if you are looking at quantifiable variables like happiness, depression, academic performance, or sleep. That said, the nature of your research question should guide you: if your topic is qualitative, you'll need qualitative studies only; if your topic is quantitative, you'll need quantitative studies only. Systematic reviews of qualitative studies are less intricate than those of quantitative studies, but they still require a thoughtful approach to synthesizing findings from various qualitative studies.

If you choose to review quantitative studies, you might need to conduct a meta-analysis in your systematic review. A meta-analysis refers to the statistical techniques used to pool findings from various independent studies and compute a summary statistic. For example, in your dissertation, you may aim to investigate the effect of a student well-being programme embedded in university classes on the happiness of university students. Various studies that have investigated the same or a related intervention and quantitatively measured happiness among university students must be synthesised using a statistical technique. The ultimate outcome of that meta-analysis is an overview of the overall effect of the intervention on university students' happiness. For more information about how to formulate a research question for a systematic review with a meta-analysis, visit this link.
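To make the idea of statistical pooling concrete, here is a minimal sketch using fixed-effect inverse-variance weighting; the effect sizes and standard errors are invented, and a real dissertation would normally use dedicated meta-analysis software and usually consider a random-effects model as well.

```python
# A minimal sketch of pooling study results with fixed-effect
# inverse-variance weighting. All numbers are hypothetical.
import math

# (effect size, standard error) for each hypothetical study
studies = [(0.30, 0.12), (0.45, 0.20), (0.15, 0.10), (0.38, 0.15)]

weights = [1 / se**2 for _, se in studies]                     # precision weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```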

An example meta-analysis showing the statistical combination of findings from various studies to indicate the overall effect of a psychological intervention on the psychological well-being of university students.

Availability of primary studies

Finding primary studies for your systematic review is often the hardest part of this approach. You can choose your topic and plan your journey well, but when you reach the point of needing primary studies to answer your research question, you can get stuck. Retrieving primary studies is challenging because it requires advanced search strategies across various online databases. Building an advanced search strategy can be an uphill task for someone who has never done a systematic review, because, depending on the topic, primary studies are not always readily available on the Internet. Remember, secondary studies, like systematic reviews and literature reviews, are not eligible for inclusion in systematic reviews.

Supervisor’s recommendation

Always confirm with your supervisor that you can do a systematic review dissertation. Some supervisors may prefer that you conduct a primary study, so check before investing much work.

Your confidence

Always ensure you’re confident that you can do a systematic review on your own. Writing a systematic review isn’t easy. You need to be aware that doing a systematic review may even be harder than doing interviews or surveys in primary research. Why? A systematic review involves combining many primary studies together in a scientific manner. That means you must have expertise in various research methodologies to know the best way to integrate or synthesise the various studies.

Availability of time and resources

The main advantage of doing a systematic review dissertation is that it saves a lot of time. Conducting interviews or surveys can be time- and resource-consuming. However, with a systematic review, you do everything from your desk. It will save you a lot of time and resources. If you find that you meet many of the requirements of successfully conducting a systematic review, the next step is to engage in the actual process. The step-by-step approach used in writing systematic reviews is outlined below.

Step-by-Step Process in Writing a Systematic Review Dissertation

The following steps are iterative, meaning you may cycle back through them until you meet your research objectives. The step-by-step guide to writing a systematic review dissertation is summarized in the infographic below.

Step-by-step guide on how to write a systematic review dissertation

Step 1: Formulate the systematic review research question

The starting point of a systematic review is to formulate a research question. As stated above, the nature of your research question will help you make key decisions. For example, you will be able to know which design (quantitative versus qualitative) to consider in your inclusion and exclusion criteria.

Step 2: Do a preliminary search

The next step is to perform a preliminary search on the Internet to determine whether another systematic review has already been published on your topic. It is not acceptable to repeat what has already been done; your research should be novel and contribute to a knowledge gap. If you find that another systematic review has already been published on your topic, you should consider its publication date.

In most cases, systematic reviews on a given topic are outdated: they have not used recent studies published on that topic and are therefore missing important updates. That can be a good reason for conducting your study. If there is an up-to-date systematic review on your topic, you should consider reformulating your research question to address a specific knowledge gap.

Step 3: Develop your systematic review inclusion and exclusion criteria

One unique feature of systematic reviews is that they must address a very specific population, intervention or exposure, and outcome. Let's say, for example, you are writing on Intervention A's effectiveness in reducing depression symptoms in older frail people. In that case, you must retrieve studies that strictly assess the effectiveness of Intervention A, with depression symptoms as the outcome and older frail people as the population.

Therefore, it will be against the principles of a systematic review to focus on Intervention B (different intervention/exposure) on anxiety (different outcomes) in younger people (different populations). Also, depending on your research question, you will need to determine the research design (qualitative versus quantitative) of the studies you will review. Other criteria to consider are the country of publication, the publication date, language, etc.
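If it helps to keep the criteria explicit and apply them consistently during screening, you can record them as a simple structured checklist. The sketch below uses the hypothetical Intervention A example above; all field names and values are illustrative.

```python
# A small sketch of recording eligibility criteria so they can be applied
# consistently during screening. Fields and values are hypothetical.

criteria = {
    "population": "older frail people",
    "intervention": "Intervention A",
    "outcome": "depression symptoms",
    "designs": {"RCT", "quasi-experimental"},
    "languages": {"English"},
    "published_from": 2010,
}

def meets_criteria(study: dict) -> bool:
    """Very simplified check of a study record against the criteria."""
    return (
        study["design"] in criteria["designs"]
        and study["language"] in criteria["languages"]
        and study["year"] >= criteria["published_from"]
        and criteria["population"] in study["population"].lower()
    )

example = {"design": "RCT", "language": "English", "year": 2018,
           "population": "Community-dwelling older frail people"}
print(meets_criteria(example))  # True
```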

Step 4: Develop your systematic review search strategy

As noted, the main challenge in writing a systematic review is identifying papers. Your literature search should be thorough so that you don't leave out any relevant studies. Developing a literature search strategy isn't easy: you must identify relevant keywords and search terms for your topic, which means knowing the common terminology used in your subject of interest.

Afterward, combine the keywords using Boolean operators such as "AND" and "OR." For example, suppose my topic is the effectiveness of cognitive behavioural therapy in treating anxiety in adolescents. I can combine my keywords as follows: (cognitive behavioural therapy OR CBT) AND (anxiety) AND (adolescents OR youth). If you use terminology that is not standard in your discipline, you will likely not find relevant studies to review.
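If you are building several such strings, a small helper that ORs the synonyms within each concept and ANDs the concepts together can keep them consistent. This sketch mirrors the CBT example above and deliberately ignores database-specific syntax such as field tags, truncation, and subject headings.

```python
# A small sketch of assembling a Boolean search string from concept groups.
# The concept groups follow the CBT/anxiety/adolescents example in the text.

concepts = [
    ["cognitive behavioural therapy", "CBT"],  # intervention
    ["anxiety"],                               # outcome
    ["adolescents", "youth"],                  # population
]

def build_query(concept_groups):
    """OR synonyms within a concept, then AND the concepts together."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in concept_groups]
    return " AND ".join(groups)

print(build_query(concepts))
# (cognitive behavioural therapy OR CBT) AND (anxiety) AND (adolescents OR youth)
```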

Step 5: Plan and perform systematic review database selection

At this stage, you identify the databases you'll use to execute your search strategy. When writing a systematic review dissertation, you also need to report the databases that you searched. Commonly searched databases in the social and health sciences include PubMed, Google Scholar, Cochrane, PsycInfo, and many others. You need to know how each database works; apart from Google Scholar and PubMed, most of these databases require paid or institutional access. Liaise with your supervisor or librarian to help identify good databases for your subject and discipline.

Step 6: Perform systematic review screening using titles and abstracts

When you execute your search strategy on each database, results or search hits will be displayed. This is another difficult step because of the tedious work involved. Start by screening the titles and eliminating results with irrelevant titles. Be careful at this point, because it is easy to eliminate relevant studies: a title does not need to contain your exact keywords, and some titles that appear irrelevant belong to studies that contain useful data.

After screening titles, the next step is to screen abstracts. You may be surprised that titles you thought were irrelevant actually contain relevant information. For instance, a study may indicate in its title that it focused on depression as an outcome when you're interested in anxiety. Reading the abstract, however, may reveal that depression was only the primary outcome and that the authors also measured secondary outcomes, among them anxiety. In such an article, you can decide to focus on the anxiety results only, because they are relevant to your study.

Step 7: Do a manual search to supplement database search

After screening the articles identified through the database searches, the next step is to augment the search strategy with a manual search. This will ensure you don't miss relevant studies in your systematic review dissertation. A manual search involves identifying further studies in the bibliographies of the articles found through the database search. It can also involve contacting the authors and experts identified from those articles to request access to additional articles that may not be available online. Finally, you can identify key journals from the articles and perform a hand search. For example, suppose I identify the Journal of Cognitive Psychology; I would then visit that journal's website and perform a manual search there. A properly done manual search can help you identify articles that you could not have identified using databases alone.

Step 8: Perform systematic review screening using the full-body texts

Once all your candidate articles are assembled, the next step is to screen the full texts. In most cases, the titles and abstracts do not contain enough information for screening purposes, and you must read the full texts of the articles to determine their eligibility. At this point, you screen the articles identified through the database search and the manual search together. For example, you may be interested in healthy adolescents, but the abstract may only report "adolescents" without providing any specifics. Upon reading the full text, you may discover that the authors included adolescents with mental health conditions that are not within your study's scope. Therefore, always do a full-text screening before you move to the next step.

Step 9: Perform systematic review quality assessment using CASP, JBI, etc

Systematic review dissertations can be used to inform the formulation of practice guidelines and even policies, so you must strive to review only studies with rigorous methodological quality. The quality assessment tool will depend on your studies' design. The tools commonly used for student dissertations include the CASP checklists and the Joanna Briggs Institute (JBI) checklists; consult your supervisor before making the final decision. Report your quality assessment findings transparently: for example, indicate each study's score under each item of the tool and calculate the overall score as a percentage. Also, set a cut-off, such as 65%, and exclude studies whose methodological rigour falls below it.
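As a rough illustration of the scoring and cut-off idea, the sketch below turns checklist answers into a percentage and applies a 65% threshold; the items, answers, and one-point-per-"yes" convention are assumptions for illustration, not a CASP or JBI rule.

```python
# Illustrative quality scoring with a percentage cut-off.
# Checklist answers are hypothetical.

def quality_score(answers: list[str]) -> float:
    """Percentage of checklist items answered 'yes'."""
    return 100 * sum(a == "yes" for a in answers) / len(answers)

studies = {
    "Study A": ["yes", "yes", "no", "yes", "yes", "unclear", "yes", "yes"],
    "Study B": ["yes", "no", "no", "unclear", "no", "no", "yes", "yes"],
}

CUTOFF = 65.0
for name, answers in studies.items():
    score = quality_score(answers)
    verdict = "include" if score >= CUTOFF else "exclude"
    print(f"{name}: {score:.0f}% -> {verdict}")
```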

Step 10: Perform systematic review data extraction

The next step is to extract relevant data from your studies. Your data extraction approach depends on the research design of the studies you include. If you use qualitative studies, your data extraction can focus on the individual studies' findings, particularly themes, as well as data that aid in-depth analysis, such as country of study and population characteristics. If you use quantitative studies, collect the quantitative data that will feed your analysis, such as means and standard deviations, and any other information relevant to your analysis technique. Always chart your data in a tabular format to facilitate easy management and handling.
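One simple way to chart extracted data in tabular form is to write it to a spreadsheet-readable file; the fields and values below are illustrative and should be adapted to your review question and synthesis plan.

```python
# A small sketch of charting extracted data in tabular (CSV) form.
# Study names, fields, and numbers are invented for illustration.
import csv

extraction = [
    {"study": "Author A, 2020", "country": "UK", "design": "RCT",
     "n": 120, "outcome_mean": 14.2, "outcome_sd": 3.1},
    {"study": "Author B, 2021", "country": "Australia", "design": "RCT",
     "n": 86, "outcome_mean": 12.8, "outcome_sd": 2.7},
]

with open("data_extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=extraction[0].keys())
    writer.writeheader()
    writer.writerows(extraction)
```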

Step 11: Carry out the systematic review data analysis

The data analysis approach used in your systematic review dissertation will depend on the research design. If you used qualitative studies, you will rely on qualitative approaches to analyse your data, such as a thematic analysis or a narrative synthesis. If you used quantitative studies, you might need to perform a meta-analysis or a narrative synthesis. A meta-analysis is done when you have homogeneous studies (similar populations, outcome variables, measurement tools, etc.) that are experimental in nature; in particular, meta-analysis is performed when reviewing randomized controlled trials or other interventional studies, that is, when reviewing the effectiveness of interventions. However, if your quantitative studies are heterogeneous, for example using different research designs, you must perform a narrative synthesis.
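One common statistical check on homogeneity before committing to a meta-analysis is Cochran's Q and the I² statistic; the sketch below uses invented effect sizes and standard errors, and clinical and methodological heterogeneity still need to be judged separately.

```python
# A minimal sketch of Cochran's Q and I^2 as a heterogeneity check.
# All numbers are hypothetical.
import math

studies = [(0.30, 0.12), (0.45, 0.20), (0.15, 0.10), (0.80, 0.18)]  # (effect, SE)

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

q = sum(w * (es - pooled) ** 2 for (es, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```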

Step 12: Prepare the written report

The final step is to produce a written report of your systematic review dissertation. One of the key ethical concerns in systematic reviews is transparency. You can improve the transparency of your reporting by following an established reporting guideline like PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).



How to write the methods section of a systematic review



The methods section of your systematic review describes what you did, how you did it, and why. Readers need this information to interpret the results and conclusions of the review. Often, a lot of information needs to be distilled into just a few paragraphs. This can be a challenging task, but good preparation and the right tools will help you to set off in the right direction 🗺️🧭.

Systematic reviews are so-called because they are conducted in a way that is rigorous and replicable. So it’s important that these methods are reported in a way that is thorough, clear, and easy to navigate for the reader – whether that’s a patient, a healthcare worker, or a researcher. 

Like most things in a systematic review, the methods should be planned upfront and ideally described in detail in a project plan or protocol. Reviews of healthcare interventions follow the PRISMA guidelines for the minimum set of items to report in the methods section. But what else should be included? It’s a good idea to consider what readers will want to know about the review methods and whether the journal you’re planning to submit the work to has expectations on the reporting of methods. Finding out in advance will help you to plan what to include.


Describe what happened

While the research plan sets out what you intend to do, the methods section is a write-up of what actually happened. It's not a simple case of rewriting the plan in the past tense – you will also need to discuss and justify deviations from the plan and describe the handling of issues that were unforeseen at the time the plan was written. For this reason, it is useful to make detailed notes before, during, and after the review is completed. Relying on memory alone risks losing valuable information, and trawling through emails when the deadline is looming can be frustrating and time-consuming!

Keep it brief

The methods section should be succinct but include all the noteworthy information. This can be a difficult balance to achieve. A useful strategy is to aim for a brief description that signposts the reader to a separate section or sections of supporting information. This could include datasets, a flowchart to show what happened to the excluded studies, a collection of search strategies, and tables containing detailed information about the studies. This separation keeps the review short and simple while enabling the reader to drill down to the detail as needed. And if the methods follow a well-known or standard process, it might suffice to say so and give a reference, rather than describe the process at length.

Follow a structure

A clear structure provides focus. Use of descriptive headings keeps the writing on track and helps the reader get to key information quickly. What should the structure of the methods section look like? As always, a lot depends on the type of review but it will certainly contain information relating to the following areas:

  • Selection criteria ⭕
  • Search 🕵🏾‍♀️
  • Data collection and analysis 👩‍💻
  • Study quality and risk of bias ⚖️

Let’s look at each of these in turn.

1. Selection criteria ⭕

The criteria for including and excluding studies are listed here. This includes detail about the types of studies, participants, interventions, and outcomes, and how the outcomes were measured.

2. Search 🕵🏾‍♀️

Comprehensive reporting of the search is important because this means it can be evaluated and replicated. The search strategies are included in the review, along with details of the databases searched. It’s also important to list any restrictions on the search (for example, language), describe how resources other than electronic databases were searched (for example,  non-indexed journals), and give the date that the searches were run. The PRISMA-S extension provides guidance on reporting literature searches. 


Systematic reviewer pro-tip:

 Copy and paste the search strategy to avoid introducing typos
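As a rough illustration of how a search can be recorded so that it can later be pasted into the methods section verbatim, here is a minimal sketch in Python; the database name, dates, limits, and query string are invented for the example and are not part of any Covidence feature.

```python
# Hypothetical record of a single database search, kept with the review protocol
# so the exact strategy can be reproduced in the methods section later.
search_record = {
    "database": "MEDLINE (Ovid)",                 # invented example database
    "date_run": "2024-07-01",                     # date the search was executed
    "limits": ["English language", "2010-2024"],  # any restrictions applied
    "strategy": (
        '("mindfulness" OR "meditation") '
        'AND ("anxiety" OR "anxiety disorder*") '
        'AND ("randomized controlled trial")'
    ),
    "records_retrieved": 412,                     # number exported for screening
}

# Printing the record reproduces the strategy exactly as it was run.
for field, value in search_record.items():
    print(f"{field}: {value}")
```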

3. Data collection and analysis 👩‍💻

This section describes:

  • how studies were selected for inclusion in the review
  • how study data were extracted from the study reports
  • how study data were combined for analysis and synthesis

To describe how studies were selected for inclusion , review teams outline the screening process. Covidence uses reviewers’ decision data to automatically populate a PRISMA flow diagram for this purpose. Covidence can also calculate Cohen’s kappa to enable review teams to report the level of agreement among individual reviewers during screening.
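As a quick illustration of what that agreement statistic measures, the sketch below computes Cohen's kappa from two reviewers' include/exclude decisions; the decision lists are invented, and in practice Covidence reports this figure for you.

```python
# Minimal sketch: Cohen's kappa for two screeners' include/exclude decisions.
# The decisions below are invented, purely illustrative data.
reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "include", "include", "exclude", "exclude"]

n = len(reviewer_a)
categories = set(reviewer_a) | set(reviewer_b)

# Observed agreement: proportion of records where the two decisions match.
p_observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

# Agreement expected by chance, from each reviewer's marginal proportions.
p_expected = sum(
    (reviewer_a.count(c) / n) * (reviewer_b.count(c) / n) for c in categories
)

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Observed agreement: {p_observed:.2f}, Cohen's kappa: {kappa:.2f}")
```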

To describe how study data were extracted from the study reports , reviewers outline the form that was used, any pilot-testing that was done, and the items that were extracted from the included studies. An important piece of information to include here is the process used to resolve conflict among the reviewers. Covidence’s data extraction tool saves reviewers’ comments and notes in the system as they work. This keeps the information in one place for easy retrieval ⚡.

To describe how study data were combined for analysis and synthesis, reviewers outline the type of synthesis (narrative or quantitative, for example), the methods for grouping data, the challenges that came up, and how these were dealt with. If the review includes a meta-analysis, it will detail how this was performed and how the treatment effects were measured.
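To make the quantitative option more concrete, the sketch below pools three hypothetical study estimates with a standard inverse-variance fixed-effect model; the effect sizes and standard errors are invented, and a real review would choose its pooling model (fixed- or random-effects) based on the data.

```python
import math

# Hypothetical effect estimates (e.g., mean differences) and standard errors
# from three included studies -- invented numbers for illustration only.
studies = [
    {"effect": -0.40, "se": 0.15},
    {"effect": -0.25, "se": 0.10},
    {"effect": -0.55, "se": 0.20},
]

# Inverse-variance fixed-effect pooling: each study is weighted by 1 / SE^2.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```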

4. Study quality and risk of bias ⚖️

Because the results of systematic reviews can be affected by many types of bias, reviewers make every effort to minimise it and to show the reader that the methods they used were appropriate. This section describes the methods used to assess study quality and an assessment of the risk of bias across a range of domains. 

Steps to assess the risk of bias in studies include looking at how study participants were assigned to treatment groups and whether patients and/or study assessors were blinded to the treatment given. Reviewers also report their assessment of the risk of bias due to missing outcome data, whether that is due to participant drop-out or non-reporting of the outcomes by the study authors.
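As a sketch of how such judgements might be kept in a structured form, the snippet below records per-domain assessments for one hypothetical trial; the domain labels follow the general shape of Cochrane-style risk-of-bias domains, and the judgements and supporting notes are invented.

```python
# Hypothetical risk-of-bias record for one included trial (illustrative only).
risk_of_bias = {
    "Randomisation process": ("low", "computer-generated sequence, concealed allocation"),
    "Blinding of participants and personnel": ("some concerns", "open-label design"),
    "Blinding of outcome assessment": ("low", "outcome assessors blinded"),
    "Missing outcome data": ("high", "18% drop-out, reasons not reported"),
    "Selective reporting": ("low", "all protocol-specified outcomes reported"),
}

for domain, (judgement, support) in risk_of_bias.items():
    print(f"{domain}: {judgement} ({support})")
```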

Covidence’s default template for assessing study quality is Cochrane’s risk of bias tool but it is also possible to start from scratch and build a tool with a set of custom domains if you prefer.

Careful planning, clear writing, and a structured approach are key to a good methods section. A methodologist will be able to refer review teams to examples of good methods reporting in the literature. Covidence helps reviewers to screen references, extract data and complete risk of bias tables quickly and efficiently. Sign up for a free trial today!

Laura Mellor. Portsmouth, UK


Example reviews

  • Examples of systematic reviews

Examples relevant to different disciplines are listed below.

For more information about how to conduct and write reviews, please see the Guidelines section of this guide.

  • Health & Medicine
  • Social sciences
  • Vibration and bubbles: a systematic review of the effects of helicopter retrieval on injured divers. (2018).
  • Nicotine effects on exercise performance and physiological responses in nicotine‐naïve individuals: a systematic review. (2018).
  • Association of total white cell count with mortality and major adverse events in patients with peripheral arterial disease: A systematic review. (2014).
  • Do MOOCs contribute to student equity and social inclusion? A systematic review 2014–18. (2020).
  • Interventions in Foster Family Care: A Systematic Review. (2020).
  • Determinants of happiness among healthcare professionals between 2009 and 2019: a systematic review. (2020).
  • Systematic review of the outcomes and trade-offs of ten types of decarbonization policy instruments. (2021).
  • A systematic review on Asian's farmers' adaptation practices towards climate change. (2018).
  • Are concentrations of pollutants in sharks, rays and skates (Elasmobranchii) a cause for concern? A systematic review. (2020).


Systematic Literature Review: Some Examples

Awatif Alqahtany at Umm Al-Qura University


Abstract and Figures

Overview of systematic literature review (AMJED).


Doing a Systematic Review: A Student's Guide

Student resources, Chapter 1: Carrying Out a Systematic Review as a Master's Thesis.

Creative visuals and engaging audio are a great way to consolidate your knowledge! Check out the suggested online presentations and podcasts below.

Systematic reviews: What are they and how to do them (presentation by Prof Rumona Dickson)

http://pcwww.liv.ac.uk/ehls/dickson/systematic-reviews


Deep Learning Models for PV Power Forecasting: Review


1. Introduction
2. Fundamental Deep Learning Models for Time-Series Forecasting and PV Datasets
  • Data collection and preprocessing
  2.1. MLP (Multilayer Perceptron)
  2.2. RNN (Recurrent Neural Networks)
  2.3. CNN (Convolutional Neural Networks)
  2.4. GNN (Graph Neural Networks)
3. Variations of the Baseline Models for PV Power Forecasting
  3.1. MLP-Based Models
    3.1.1. Univariate Forecasting Models
    3.1.2. Multivariate Forecasting Models
    3.1.3. Frequency Domain-Based Forecasting Models
  3.2. RNN-Based Models
    3.2.1. Probabilistic Forecasting Models
    3.2.2. Multivariate Forecasting Models
  3.3. CNN-Based Models
    3.3.1. Cross-Time-Scale Models
    3.3.2. Cross-Variate-Dependence Models
    3.3.3. Other Models
  3.4. GNN-Based Models
4. Conclusions
Author Contributions, Data Availability Statement, Acknowledgments, Conflicts of Interest
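Since the tables that follow compare variants of these model families, a minimal sketch of the simplest family may help orient the reader: a multilayer perceptron trained on lagged values of a univariate series. The synthetic "PV-like" data and the scikit-learn setup are illustrative assumptions, not the pipeline used in the review.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic daily-cycle "PV-like" series: an illustrative stand-in for real data.
rng = np.random.default_rng(0)
t = np.arange(2000)
series = np.clip(np.sin(2 * np.pi * t / 24), 0, None) + 0.05 * rng.normal(size=t.size)

# Turn the series into (lag window -> next value) supervised pairs.
window = 24
X = np.array([series[i : i + window] for i in range(len(series) - window)])
y = series[window:]

# Small MLP as a one-step-ahead forecaster, trained on all but the last 200 points.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X[:-200], y[:-200])

# Evaluate on the held-out tail of the series.
pred = model.predict(X[-200:])
print(f"Test MAE: {np.mean(np.abs(pred - y[-200:])):.3f}")
```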

Reference | Based Frame | Models | Main Structure and Features | Already Used in PV Forecasting
[ ]MLPN-BeatsBlock Input, Block Layers (FC layers), Backcast Output, Forecast Output, Doubly Residual Stacking
[ ]MLPN-BeatsXBlock Input, Block Layers (FC layers), Backcast Output, Forecast Output, Convolutional Layer, Doubly Residual Stacking, Interpretable Time-Series Signal Decomposition-
[ ]MLPN-HiTSMulti-Rate Signal Sampling, Hierarchical Interpolation, Cross-block Synchronization of Input Sample Rate, Output Interpolation Scale-
[ ]MLPDEPTSPeriodicity Module, Discrete Cosine Transform (DCT), Triply Residual Expansions-
[ ]MLPFreDoMixer, Discrete Fourier Transform (DFT), AverageTile, Inverse DFT
[ ]MLPMTS-MixersTemporal MLP, Factorized Channel MLP, Optional Embedding, Linear Projection Layer, Attention-based MTS-Mixer, Random Matrix-based MTS-Mixer, MLP-based MTS-Mixer-
[ ]MLPTSMixerTime-mixing MLP, Feature-mixing MLP, Temporal Projection, Align and Mixing-
[ ]MLPCI-TSMixerLinear Patch Embedding Layer, Mixer Layers, Inter-Patch Mixer Module, Intra-Patch Mixer Module, Inter-Channel Mixer Module
Linear Patch Embedding Layer, Gated Attention Block, Online Forecast Reconciliation Heads
-
[ ]MLPTFDNetMulti-Scale Window Mechanism, Trend Time-frequency Block, Seasonal Time-frequency Block, Frequency-FFN, Mixture Loss-
[ ]MLPFreTSDomain Conversion, Inversion Stages, Frequency-domain MLPs, Frequency Channel Learner, Frequency Temporal Learner, Dimension Extension Block
[ ]MLPU-MixerMixer, Normalization and Patch Embedding, Unet Encoder-Decoder, Stationarity Correction-
[ ]MLPTimeMixerMultiscale Time Series Downsampling, Past-Decomposable-Mixing Block, Future-Multipredictor-Mixing Block
[ ]RNNLSTNetConvolutional Component, Recurrent Component, Recurrent-skip Component, Dense Layer, Autoregressive Linear Model, Final Forecasting
[ ]RNNDA-RNNInput Attention Mechanism, Encoder (LSTM), Temporal Attention Mechanism, Decoder (LSTM), Output Layer (LSTM)-
[ ]RNNDeepARInput Layer, Encoder (LSTM), Autoregressive Mechanism, Decoder (LSTM), Output Layer (LSTM), Probabilistic Forecasting
[ ]RNNMQRNNEncoder (LSTM), Decoder (Global MLP and Local MLP), Forking-Sequences Training Scheme, Target Masking Strategy-
[ ]RNNmWDNMultilevel Discrete Wavelet Decomposition, Residual Classification Flow, multi-frequency Long Short-Term Memory-
[ ]RNNMTNetLarge Memory Component, Three Separate Encoders, Attention Mechanism, Convolutional Layer, Autoregressive Component
[ ]RNNESLSTMDeseasonalization and Adaptive Normalization, Generation of Forecasts, Ensembling, Dilated LSTM-based Stacks, Linear Adapter Layer, Residual Connections, Attention Mechanism-
[ ]RNNMH-TALEncoder (LSTM), Decoder (BiLSTM), Temporal Attention Mechanism, Multimodal Fusion, Fully Connected Layer-
[ ]RNNC2FARHierarchical Generation, Neural Network Parameterization, Negative Log-Likelihood Minimization, RNN-based Forecasting, Multi-Level C2FAR Models-
[ ]RNNSegRNNSegment-wise Iterations, Parallel Multi-step Forecasting (PMF), GRU Cell, Encoding Phase, Decoding Phase, Channel Independent (CI) Strategy, Channel Identifier-
[ ]RNNWITRANWater-wave Information Transmission, Horizontal Vertical Gated Selective Unit, Recurrent Acceleration Network-
[ ]RNNSutraNetsSub-series Decomposition, Autoregressive Model, Parallel Training, C2FAR-LSTMs, Low2HighFreq Approach, Backfill-alt Strategy, Monte Carlo Sampling-
[ ]CNNDeepGLOGlobal Matrix Factorization Model, Local Temporal Convolution Network, Hybrid Model, Handling Scale Variations
[ ]CNNDSANetGlobal Temporal Convolution, Local Temporal Convolution, Self-Attention Module, Autoregressive Component, Parallel Computing and Long Sequence Modeling
[ ]CNNMLCNNConvolutional Component, Sharing Mechanism, Fusion Encoder (LSTM), Main Decoder (LSTM), Autoregressive Component, Multi-Task Learning Framework-
[ ]CNNSCINetInteractive Learning, Hierarchical Structure, Residual Connection, Decoder (Fully Connected Network), Multiple Layers of SCINet, Intermediate Supervision
[ ]CNNMICNMultiple Branches of Convolution Kernels, Local Features Extraction, Global Correlations Modeling, Merge Operation, Seasonal Forecasting Block, Trend-cyclical Forecasting Block
[ ]CNNTimesNetTimesBlock, Multi-scale 2D Kernels, Residual Connection, Various Vision Backbones
[ ]CNNLightCTSPlain Stacking Architecture, Light-TCN, Global-Local TransFormer, Last-shot Compression Scheme, Embedding Module, Aggregation and Output Module
[ ]CNNTLNetsFourier Transform, Singular Value Decomposition, Matrix Multiplication, Convolutional Block, Receptive Field Learning-
[ ]CNNCross-LKTCNPatch-Style Embedding Strategy, Depth-Wise Large Kernel Convolution, Feed Forward Networks, Multiple Cross-LKTCN Block Stacking, Linear Head with a Flatten Layer-
[ ]CNNMPPNMulti-Resolution Patching, Multi-Periodic Pattern Mining, Channel Adaptive Module, Output Layer-
[ ]CNNFDNetDecomposed Forecasting Formula, Basic Linear Projection Layers, 2D Convolutional Layers, Focal Input Sequence Decomposition, Final Output Design-
[ ]CNNPatchMixerSingle-scale Depthwise Separable Convolutional Block, MLP, Patch Embedding, Patch-Mixing, Instance Normalization-
[ ]CNNWinNetInter-Intra Period Encoder, Two-Dimensional Period Decomposition, Decomposition Correlation Block, Series Decoder-
[ ]CNNModernTCNVariable-Independent Embedding, Depthwise Convolution, ConvFFN1, ConvFFN2, Fully-Convolutional Structure-
[ ]CNNConvTimeNetDeformable Patch Embedding, Fully Convolutional Blocks, Hierarchical and Multi-Scale Representations, Linear Layer-
[ ]GNNSTGCNSpatiotemporal Convolutional Blocks, Graph Convolutional Layers, Gated Temporal Convolutional Layers, Residual Connections and Bottleneck Strategy, Fully-Connected Output Layer
[ ]GNNMTGNNGraph Learning Layer, Graph Convolution Modules, Temporal Convolution Modules, Residual and Skip Connections, Output Module, Curriculum Learning Strategy
[ ]GNNStemGNNLatent Correlation Layer, Graph Fourier Transform (GFT), Discrete Fourier Transform (DFT), 1D Convolution and GLU
Graph Convolution and Inverse GFT, Residual Connections, Inverse Discrete Fourier Transform (IDFT)
[ ]GNNTPGNNEncoder-Decoder, Temporal Polynomial Graph (TPG), Diffusion Graph Convolution Layer, Adaptive Graph Construction
[ ]GNNFourierGNNHypervariate Graph, Fourier Graph Operator (FGO), Stacking FGO Layers in Fourier Space
[ ]GNNMSGNetScale Learning and Transforming Layer, Multiple Graph Convolution Module, Temporal Multi-Head Attention Module, ScaleGraph Block, Input Embedding and Residual Connection
Multi-Scale Adaptive Graph Convolution, Multi-Head Attention Mechanism, Integrating Representations from Different Scales
-
  • Patterson, K. An Introduction to ARMA Models. In Unit Root Tests in Time Series: Key Concepts and Problems ; Palgrave Macmillan: London, UK, 2011; pp. 68–122. [ Google Scholar ]
  • Cao, L.J.N. Support vector machines experts for time series forecasting. Neurocomputing 2003 , 51 , 321–339. [ Google Scholar ] [ CrossRef ]
  • Rumelhart, D.E.; Hinton, G.E.; Williams, R. Learning representations by back-propagating errors. Nature 1986 , 323 , 533–536. [ Google Scholar ] [ CrossRef ]
  • Elman, J. Finding structure in time. Cogn. Sci. 1990 , 14 , 179–211. [ Google Scholar ] [ CrossRef ]
  • LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998 , 86 , 2278–2324. [ Google Scholar ] [ CrossRef ]
  • Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008 , 20 , 61–80. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997 , 9 , 1735–1780. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Yang, R.; Zha, X.; Liu, K.; Xu, S. A CNN model embedded with local feature knowledge and its application to time-varying signal classification. Neural Netw. 2021 , 142 , 564–572. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Karimi, A.M.; Wu, Y.; Koyuturk, M.; French, R. Spatiotemporal graph neural network for performance prediction of photovoltaic power systems. Proc. AAAI Conf. Artif. Intell. 2021 , 35 , 15323–15330. [ Google Scholar ] [ CrossRef ]
  • Liu, C.; Li, M.; Yu, Y.; Wu, Z.; Gong, H.; Cheng, F. A review of multitemporal and multispatial scales photovoltaic forecasting methods. IEEE Access 2022 , 10 , 35073–35093. [ Google Scholar ] [ CrossRef ]
  • Huang, C.; Cao, L.; Peng, N.; Li, S.; Zhang, J.; Wang, L.; Luo, X.; Wang, J.-H. Day-ahead forecasting of hourly photovoltaic power based on robust multilayer perception. Sustainability 2018 , 10 , 4863. [ Google Scholar ] [ CrossRef ]
  • Anwar, M.T.; Islam, M.F.; Alam, M.G.R. Forecasting Meteorological Solar Irradiation Using Machine Learning and N-BEATS Architecture. In Proceedings of the 2023 8th International Conference on Machine Learning Technologies, Stockholm, Sweden, 10–12 March 2023; pp. 46–53. [ Google Scholar ]
  • Wang, K.; Qi, X.; Liu, H. Photovoltaic power forecasting based LSTM-Convolutional Network. Energy 2019 , 189 , 116225. [ Google Scholar ] [ CrossRef ]
  • Yemane, S. Deep Forecasting of Renewable Energy Production with Numerical Weather Predictions. Master’s Thesis, LUT University, Lappeenranta, Finland, 2021. [ Google Scholar ]
  • Sun, F.-K.; Boning, D. Fredo: Frequency domain-based long-term time series forecasting. arXiv 2022 , arXiv:2205.12301. [ Google Scholar ]
  • Wang, S.; Wu, H.; Shi, X.; Hu, T.; Luo, H.; Ma, L.; Zhang, J.Y.; Zhou, J. TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [ Google Scholar ]
  • Woschitz, M. Spatio-Temporal PV Forecasting with (Graph) Neural Networks. Master’s Thesis, Technische Universität Wien, Vienna, Austria, 2023. [ Google Scholar ]
  • Yi, K.; Zhang, Q.; Fan, W.; He, H.; Hu, L.; Wang, P.; An, N.; Cao, L.; Niu, Z. FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective. arXiv 2024 , arXiv:2311.06190v1. [ Google Scholar ]
  • Zhang, M.; Tao, P.; Ren, P.; Zhen, Z.; Wang, F.; Wang, G. Spatial-Temporal Graph Neural Network for Regional Photovoltaic Power Forecasting Based on Weather Condition Recognition. In Proceedings of the 10th Renewable Power Generation Conference (RPG 2021), Online, 14–15 October 2021; pp. 361–368. [ Google Scholar ]
  • Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014 , arXiv:1412.3555. [ Google Scholar ]
  • Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018 , arXiv:1803.01271. [ Google Scholar ]
  • Zhang, H.; Lu, G.; Zhan, M.; Zhang, B. Semi-supervised classification of graph convolutional networks with Laplacian rank constraints. Neural Process. Lett. 2022 , 54 , 2645–2656. [ Google Scholar ] [ CrossRef ]
  • Oreshkin, B.; Carpov, D.; Chapados, N.; Bengio, Y. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. arXiv 2019 , arXiv:1905.10437. [ Google Scholar ]
  • Olivares, K.G.; Challu, C.; Marcjasz, G.; Weron, R.; Dubrawski, A. Neural basis expansion analysis with exogenous variables: Forecasting electricity prices with NBEATSx. Int. J. Forecast. 2023 , 39 , 884–900. [ Google Scholar ] [ CrossRef ]
  • Challu, C.; Olivares, K.G.; Oreshkin, B.N.; Ramirez, F.G.; Canseco, M.M.; Dubrawski, A. Nhits: Neural hierarchical interpolation for time series forecasting. Proc. AAAI Conf. Artif. Intell. 2023 , 37 , 6989–6997. [ Google Scholar ] [ CrossRef ]
  • Fan, W.; Zheng, S.; Yi, X.; Cao, W.; Fu, Y.; Bian, J.; Liu, T.-Y. DEPTS: Deep expansion learning for periodic time series forecasting. arXiv 2022 , arXiv:2203.07681. [ Google Scholar ]
  • Li, Z.; Rao, Z.; Pan, L.; Xu, Z. Mts-mixers: Multivariate time series forecasting via factorized temporal and channel mixing. arXiv 2023 , arXiv:2302.04501. [ Google Scholar ]
  • Chen, S.-A.; Li, C.-L.; Yoder, N.; Arik, S.O.; Pfister, T. Tsmixer: An all-mlp architecture for time series forecasting. arXiv 2023 , arXiv:2303.06053. [ Google Scholar ]
  • Vijay, E.; Jati, A.; Nguyen, N.; Sinthong, G.; Kalagnanam, J. TSMixer: Lightweight MLP-mixer model for multivariate time series forecasting. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 25–29 August 2024. [ Google Scholar ]
  • Yi, K.; Zhang, Q.; Fan, W.; Wang, S.; Wang, P.; He, H.; An, N.; Lian, D.; Cao, L.; Niu, Z. Frequency-domain MLPs are more effective learners in time series forecasting. arXiv 2024 , arXiv:2311.06184. [ Google Scholar ]
  • Luo, Y.; Lyu, Z.; Huang, X. TFDNet: Time-Frequency Enhanced Decomposed Network for Long-term Time Series Forecasting. arXiv 2023 , arXiv:2308.13386. [ Google Scholar ]
  • Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014 , arXiv:1406.1078. [ Google Scholar ]
  • Jia, P.; Zhang, H.; Liu, X.; Gong, X. Short-term photovoltaic power forecasting based on VMD and ISSA-GRU. IEEE Access 2021 , 9 , 105939–105950. [ Google Scholar ] [ CrossRef ]
  • Qin, Y.; Song, D.; Chen, H.; Cheng, W.; Jiang, G.; Cottrell, G. A dual-stage attention-based recurrent neural network for time series prediction. arXiv 2017 , arXiv:1704.02971. [ Google Scholar ]
  • Salinas, D.; Flunkert, V.; Gasthaus, J.; Januschowski, T. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. Int. J. Forecast. 2020 , 36 , 1181–1191. [ Google Scholar ] [ CrossRef ]
  • Bergsma, S.; Zeyl, T.; Rahimipour Anaraki, J.; Guo, L. C2far: Coarse-to-fine autoregressive networks for precise probabilistic forecasting. Adv. Neural Inf. Process. Syst. 2022 , 35 , 21900–21915. [ Google Scholar ]
  • Bergsma, S.; Zeyl, T.; Guo, L. SutraNets: Sub-series Autoregressive Networks for Long-Sequence, Probabilistic Forecasting. Adv. Neural Inf. Process. Syst. 2023 , 36 , 30518–30533. [ Google Scholar ]
  • Wang, J.; Wang, Z.; Li, J.; Wu, J. Multilevel wavelet decomposition network for interpretable time series analysis. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 2437–2446. [ Google Scholar ]
  • Lin, S.; Lin, W.; Wu, W.; Zhao, F.; Mo, R.; Zhang, H. SegRNN: Segment Recurrent Neural Network for Long-Term Time Series Forecasting. arXiv 2023 , arXiv:2308.11200. [ Google Scholar ]
  • Jia, Y.; Lin, Y.; Hao, X.; Lin, Y.; Guo, S.; Wan, H. Witran: Water-wave information transmission and recurrent acceleration network for long-range time series forecasting. Adv. Neural Inf. Process. Syst. 2024 , 36 , 12389–12456. [ Google Scholar ]
  • Lai, G.; Chang, W.-C.; Yang, Y.; Liu, H. Modeling long-and short-term temporal patterns with deep neural networks. In Proceedings of the 41st international ACM SIGIR conference on research & development in information retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104. [ Google Scholar ]
  • Chang, Y.-Y.; Sun, F.-Y.; Wu, Y.-H.; Lin, S.-D. A memory-network based solution for multivariate time-series forecasting. arXiv 2018 , arXiv:1809.02105. [ Google Scholar ]
  • LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989 , 1 , 541–551. [ Google Scholar ] [ CrossRef ]
  • Sen, R.; Yu, H.-F.; Dhillon, I. Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting. NIPS’19 2019 , 32 , 4837–4846. [ Google Scholar ]
  • Huang, S.; Wang, D.; Wu, X.; Tang, A. Dsanet: Dual self-attention network for multivariate time series forecasting. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; pp. 2129–2132. [ Google Scholar ]
  • Liu, M.; Zeng, A.; Chen, M.; Xu, Z.; Lai, Q.; Ma, L.; Xu, Q. Scinet: Time series modeling and forecasting with sample convolution and interaction. Adv. Neural Inf. Process. Syst. 2022 , 35 , 5816–5828. [ Google Scholar ]
  • Wang, H.; Peng, J.; Huang, F.; Wang, J.; Chen, J.; Xiao, Y. Micn: Multi-scale local and global context modeling for long-term series forecasting. In Proceedings of the Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2022. [ Google Scholar ]
  • Gong, Z.; Tang, Y.; Liang, J. Patchmixer: A patch-mixing architecture for long-term time series forecasting. arXiv 2023 , arXiv:2310.00655. [ Google Scholar ]
  • Wu, H.; Hu, T.; Liu, Y.; Zhou, H.; Wang, J.; Long, M. Timesnet: Temporal 2d-variation modeling for general time series analysis. In Proceedings of the The eleventh international conference on learning representations. arXiv 2022 , arXiv:2210.02186. [ Google Scholar ]
  • Ou, W.; Guo, D.; Zhang, Z.; Zhao, Z.; Lin, Y. WinNet: Time series forecasting with a window-enhanced period extracting and interacting. arXiv 2023 , arXiv:2311.00214. [ Google Scholar ]
  • Cheng, J.; Huang, K.; Zheng, Z. Towards better forecasting by fusing near and distant future visions. Proc. AAAI Conf. Artif. Intell. 2020 , 34 , 3593–3600. [ Google Scholar ] [ CrossRef ]
  • Luo, D.; Wang, X. Cross-LKTCN: Modern Convolution Utilizing Cross-Variable Dependency for Multivariate Time Series Forecasting Dependency for Multivariate Time Series Forecasting. arXiv 2023 , arXiv:2306.02326. [ Google Scholar ]
  • Cheng, M.; Yang, J.; Pan, T.; Liu, Q.; Li, Z. Convtimenet: A deep hierarchical fully convolutional model for multivariate time series analysis. arXiv 2024 , arXiv:2403.01493. [ Google Scholar ]
  • Luo, D.; Wang, X. Moderntcn: A modern pure convolution structure for general time series analysis. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [ Google Scholar ]
  • Wang, W.; Liu, Y.; Sun, H. Tlnets: Transformation learning networks for long-range time-series prediction. arXiv 2023 , arXiv:2305.15770. [ Google Scholar ]
  • Shen, L.; Wei, Y.; Wang, Y.; Qiu, H. FDNet: Focal Decomposed Network for efficient, robust and practical time series forecasting. Knowl.Based Syst. 2023 , 275 , 110666. [ Google Scholar ] [ CrossRef ]
  • Lai, Z.; Zhang, D.; Li, H.; Jensen, C.S.; Lu, H.; Zhao, Y. Lightcts: A lightweight framework for correlated time series forecasting. Proc. ACM Manag. Data 2023 , 1 , 1–26. [ Google Scholar ] [ CrossRef ]
  • Li, Y.; Tarlow, D.; Brockschmidt, M.; Zemel, R. Gated graph sequence neural networks. arXiv 2015 , arXiv:1511.05493. [ Google Scholar ]
  • Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Chang, X.; Zhang, C. Connecting the dots: Multivariate time series forecasting with graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, CA, USA, 6–10 July 2020; pp. 753–763. [ Google Scholar ]
  • Liu, Y.; Liu, Q.; Zhang, J.-W.; Feng, H.; Wang, Z.; Zhou, Z.; Chen, W. Multivariate time-series forecasting with temporal polynomial graph neural networks. Adv. Neural Inf. Process. Syst. 2022 , 35 , 19414–19426. [ Google Scholar ]
  • Cao, D.; Wang, Y.; Duan, J.; Zhang, C.; Zhu, X.; Huang, C.; Tong, Y.; Xu, B.; Bai, J.; Tong, J. Spectral temporal graph neural network for multivariate time-series forecasting. Adv. Neural Inf. Process. Syst. 2020 , 33 , 17766–17778. [ Google Scholar ]
  • Zhang, S.; Gong, S.; Ren, Z.; Zhang, Z. Photovoltaic Power Prediction Based on Time-Space-Attention Mechanism and Spectral Temporal Graph. 2021. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4547760 (accessed on 22 October 2023).
  • Cai, W.; Liang, Y.; Liu, X.; Feng, J.; Wu, Y. Msgnet: Learning multi-scale inter-series correlations for multivariate time series forecasting. Proc. AAAI Conf. Artif. Intell. 2024 , 38 , 11141–11149. [ Google Scholar ] [ CrossRef ]
  • Wen, R.; Torkkola, K.; Narayanaswamy, B.; Madeka, D. A multi-horizon quantile recurrent forecaster. arXiv 2017 , arXiv:1711.11053. [ Google Scholar ]
  • Smyl, S. A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting. Int. J. Forecast. 2020 , 36 , 75–85. [ Google Scholar ] [ CrossRef ]
  • Fan, C.; Zhang, Y.; Pan, Y.; Li, X.; Zhang, C.; Yuan, R.; Wu, D.; Wang, W.; Pei, J.; Huang, H. Multi-horizon time series forecasting with temporal attention learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2527–2535. [ Google Scholar ]
  • Wang, X.; Wang, Z.; Yang, K.; Feng, J.; Song, Z.; Deng, C.; Zhu, L. MPPN: Multi-Resolution Periodic Pattern Network For Long-Term Time Series Forecasting. arXiv 2023 , arXiv:2306.06895. [ Google Scholar ]


Metric | Description
MSE | Measures the expected value of the squared difference between the predicted and actual values, to evaluate the degree of variation in the forecasting.
MAE | Calculates the average of the absolute errors between the predicted and actual values.
MBE | Computes the average deviation between the predicted and true values.
MAPE | Assesses the average of the percentage errors between the predicted and actual values.
RMSE | Evaluates the square root of the average of the squared errors between the predicted and actual values.
SDE | Measures the degree of dispersion between the predicted values and the actual values.
RSE | Employed to evaluate the performance improvement of the predictive model compared to a simple average model.
RRSE | Represents the square root of the performance improvement of the predictive model relative to a simple average model.
R | Signifies the correlation between the predicted values and the actual values.
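The equation column of the original table did not survive extraction. For reference, the standard textbook forms of the first few metrics, with $y_i$ the actual and $\hat{y}_i$ the predicted value over $n$ points, are given below; these are the conventional definitions rather than a reproduction of the review's exact notation.

```latex
\begin{aligned}
\mathrm{MSE}  &= \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 \\
\mathrm{MAE}  &= \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \\
\mathrm{MBE}  &= \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right) \\
\mathrm{MAPE} &= \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right| \\
\mathrm{RMSE} &= \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}
\end{aligned}
```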
Reference | Datasets | Description
[ ]Solar-6Hourly PV power generation data from the Andre Agassi Preparatory Academy Building B PV power station (36.19 N, 115.16 W, elevation 620 m) in the United States, spanning from 1 January 2012 to 31 December 2017. The data can be obtained from . e.g., accessed on 1 July 2024
[ ]NSRDBThe NSRDB is a database containing global PV horizontal irradiance (GHI), direct normal irradiance (DNI), and diffuse horizontal irradiance (DHI) data. The authors selected data from four regions in Bangladesh (Khulna, Chittagong, Rajshahi, and Sylhet) during the period from 1 January 2018 to 31 December 2020. The data can be obtained from . e.g., accessed on 1 July 2024
[ ]DKASCThis is a publicly available PV system dataset provided by the Desert Knowledge Australia Solar Centre (DKASC). The data includes features related to the PV system, such as current phase average, active power, wind speed, air temperature, relative humidity, global horizontal radiation, scattered horizontal radiation, and wind direction, with a sampling frequency of 5 min.
The data can be obtained from . e.g., accessed on 1 July 2024
[ ]Solar-7PV power generation data from the Finnish Meteorological Institute (FMI) station located in Helsinki, Finland. The data span from 26 August 2015 to 31 December 2020, and contain four PV power generation time-series with a sampling frequency of 1 min. The data also include global horizontal radiation, scattered radiation, direct normal irradiance, global radiation on the tilted PV surface, air temperature, and PV module temperature.
The data can be obtained from . e.g., accessed on 1 July 2024
[ ]Solar-2Data from the National Renewable Energy Laboratory (NREL) in the United States, recording PV power generation from 137 PV power stations in Alabama in 2006. The data have a sampling frequency of 10 min, with a total of 52,560 time points.
The data can be obtained from . e.g., accessed on 1 July 2024
[ ]Solar-1Data from the National Renewable Energy Laboratory (NREL) in the United States, recording PV power generation from 137 PV power stations in Alabama in 2007. The data have a sampling frequency of 10 min, with a total of 52,560 time points.
[ ]Solar-5Data collected by the Energy Intranets project in the Netherlands and provided by the Netherlands Research Council (NWO). The data consist of power data recorded at a sampling frequency of 0.5 Hz from 175 private residential rooftop PV systems in the province of Utrecht, spanning from January 2014 to December 2017. The data include the geographic location (latitude and longitude), tilt angle, azimuth angle, and estimated maximum power output for each PV system.
The data can be obtained from . e.g., accessed on 1 July 2024
[ ]Solar-3Data from a PV power plant in Florida, collected by the National Renewable Energy Laboratory (NREL) in the United States. The dataset contains 593 data points, spanning from 1 January 2006 to 31 December 2016, with a sampling interval of 1 h.
The data can be obtained from . e.g., accessed on 1 July 2024
[ ]Solar-4Actual power generation data from 20 PV power stations in Jilin Province, China, ranging from 13 March 2018 to 30 June 2019, with a sampling frequency of 15 min. The data include the latitude and longitude information for each power station and the cloud total amount (CTA) data provided by the Fengyun-4G (FY-4G) satellite.
Method | RMSE | MAE
Robust-MLP | 0.6508 | 0.4370
Generic-MLP | 0.6635 | 0.4511
Features | Models | CTG RMSE | CTG MAPE | KHU RMSE | KHU MAPE | SYL RMSE | SYL MAPE | RAJ RMSE | RAJ MAPE
clear sky GHIN-BEATS29.032.34%37.893.29%31.762.53%35.773.15%
LSTM424.3919.79%421.20 15.73% 434.22 22.04% 424.39 20.23%
clear sky DHIN-BEATS73.001 6.28% 70.81 5.93% 102.39 11.79% 91.33 8.11%
LSTM136.21 14.09% 120.81 13.74% 125.49 18.69% 156.73 16.89%
clear sky DNIN-BEATS103.33 10.32% 119.10 13.02% 98.39 9.44% 85.63 7.39%
LSTM256.49 20.33% 254.40 21.79% 260.11 19.04% 252.39 18.34%
Methods | Metric | Length 96 | Length 192 | Length 336 | Length 720 | Avg
TimeMixer | MSE | 0.189 | 0.222 | 0.231 | 0.223 | 0.216
TimeMixer | MAE | 0.259 | 0.283 | 0.292 | 0.285 | 0.280
DLinear | MSE | 0.290 | 0.320 | 0.353 | 0.357 | 0.330
DLinear | MAE | 0.378 | 0.398 | 0.415 | 0.413 | 0.401
Informer | MSE | 0.287 | 0.297 | 0.367 | 0.374 | 0.331
Informer | MAE | 0.323 | 0.341 | 0.429 | 0.431 | 0.381
Methods | Metric | Length 96 | Length 192 | Length 336 | Length 720 | Avg
FreDo | MSE | 0.176 | 0.193 | 0.202 | 0.207 | 0.195
FreDo | MAE | 0.234 | 0.248 | 0.255 | 0.260 | 0.249
Autoformer | MSE | 0.466 | 0.761 | 0.820 | 0.834 | 0.720
Autoformer | MAE | 0.467 | 0.618 | 0.690 | 0.653 | 0.607
Models | MAE | RMSE | MAPE | SDE
LSTM | 0.327 | 0.709 | 0.062 | 0.689
CNN | 0.304 | 0.822 | 0.058 | 0.790
LSTM-CNN | 0.221 | 0.621 | 0.042 | 0.635
ModelsThe Weather Conditions Are Relatively StableThe Weather Conditions Fluctuate ObviouslyThe Weather Conditions Fluctuated Violently
RMSE (kW)MAE (kW) RMSE (kW)MAE (kW) RMSE (kW)MAE (kW)
LSTM4.96693.84760.99249.20257.60640.972517.444011.93880.8513
GRU4.97913.90010.992310.53678.31490.964018.879112.76210.8258
VMD-ISSA-GRU0.68980.54090.99991.99331.47110.99873.78582.85650.9930
DeepARNaïveSeasonal Naive PredictorConstant Predictor
36 h forecasting horizon
Best sample110150
Average sample1400428911193078
Worst sample4424998957168216
Average error1178378121484091
1 h forecasting horizon
Best sample0000
Average sample5000
Worst sample169194163194
Average error22192019
Models | Metric | Horizon 3 | Horizon 6 | Horizon 12 | Horizon 24
RNN-GRU | RSE | 0.1932 | 0.2628 | 0.4163 | 0.4852
RNN-GRU | CORR | 0.9823 | 0.9675 | 0.9150 | 0.8823
LSTNet | RSE | 0.1916 | 0.2475 | 0.3449 | 0.4521
LSTNet | CORR | 0.9820 | 0.9698 | 0.9394 | 0.8911
MTNet | RSE | 0.1847 | 0.2398 | 0.3251 | 0.4285
MTNet | CORR | 0.9840 | 0.9723 | 0.9462 | 0.9013
Models | Metric | Horizon 3 | Horizon 6 | Horizon 12 | Horizon 24
TCN | RSE | 0.1940 | 0.2581 | 0.3512 | 0.4732
TCN | CORR | 0.9835 | 0.9602 | 0.9321 | 0.8812
LSTNet | RSE | 0.1843 | 0.2559 | 0.3254 | 0.4643
LSTNet | CORR | 0.9843 | 0.9690 | 0.9467 | 0.8870
SCINet | RSE | 0.1775 | 0.2301 | 0.2997 | 0.4081
SCINet | CORR | 0.9853 | 0.9739 | 0.9550 | 0.9112
Methods | Metric | Length 96 | Length 192 | Length 336 | Length 720 | Avg
MICN | MSE | 0.257 | 0.278 | 0.298 | 0.299 | 0.283
MICN | MAE | 0.325 | 0.354 | 0.375 | 0.379 | 0.358
TimesNet | MSE | 0.373 | 0.397 | 0.420 | 0.420 | 0.430
TimesNet | MAE | 0.358 | 0.376 | 0.380 | 0.381 | 0.374
MethodsMetricForecasting LengthsFLOPs
(Unit: M)
Params (Unit: K)Latency
(Unit: s)
Peak Mem
(Unit: Mb)
361224
DSANetRRSE0.18220.24500.32870.438991463770.832.5
CORR0.98420.97010.94440.8943
MTGNNRRSE0.17780.23480.31090.427010903480.59.9
CORR0.98520.97260.95090.9031
LightCTSRRSE0.17140.22020.29550.4129169380.28.6
CORR0.98640.97650.95680.9084
Metric | PV s432 GNN | PV s432 LinReg | PV s499 GNN | PV s499 LinReg | PV s353 GNN | PV s353 LinReg | PV s192 GNN | PV s192 LinReg
RMSE | 85.99 | 112.74 | 89.14 | 146.63 | 97.25 | 105.77 | 91.08 | 132.83
MAE | 57.59 | 78.04 | 65.93 | 104.82 | 69.14 | 75.55 | 72.54 | 98.54
Models | Metric | Horizon 3 | Horizon 6 | Horizon 12 | Horizon 24
RNN-GRU | RSE | 0.1932 | 0.2628 | 0.4163 | 0.4852
RNN-GRU | CORR | 0.9823 | 0.9675 | 0.9150 | 0.8823
LSTNet | RSE | 0.1843 | 0.2559 | 0.3254 | 0.4643
LSTNet | CORR | 0.9843 | 0.9690 | 0.9467 | 0.8870
MTGNN | RSE | 0.1778 | 0.2348 | 0.3109 | 0.4270
MTGNN | CORR | 0.9852 | 0.9726 | 0.9509 | 0.9031
TPGNN | RSE | 0.1850 | 0.2412 | 0.3059 | 0.3498
TPGNN | CORR | 0.9840 | 0.9716 | 0.9529 | 0.9710
ModelsMAE (KW)RMSE (KW) (KW)
LSTM0.360.602.480.90
GRU0.390.672.630.85
StemGNN0.350.592.920.88
Model | MAE | RMSE | MAPE (%)
N-BEATS | 0.09 | 0.15 | 23.53
LSTNet | 0.07 | 0.19 | 19.13
TCN | 0.06 | 0.06 | 21.1
DeepGLO | 0.09 | 0.14 | 21.6
StemGNN | 0.03 | 0.07 | 11.55
Model | MAE | RMSE | MAPE (%)
LSTNet | 0.148 | 0.200 | 132.95
TCN | 0.176 | 0.222 | 142.23
DeepGLO | 0.178 | 0.400 | 346.78
StemGNN | 0.176 | 0.222 | 128.39
MTGNN | 0.151 | 0.207 | 507.91
FourierGNN | 0.120 | 0.162 | 116.48
Forecasting Length | Metric | STGCN-Classified | STGCN-Unclassified | GCN | LSTM
15 min | RMSE | 2.242% | 2.297% | 2.921% | 3.323%
15 min | MAE | 1.282% | 1.909% | 1.645% | 2.146%
1 h | RMSE | 3.457% | 3.709% | 4.226% | 4.261%
1 h | MAE | 2.141% | 2.246% | 2.410% | 2.573%
2 h | RMSE | 4.870% | 5.028% | 5.497% | 5.309%
2 h | MAE | 2.712% | 2.986% | 3.071% | 3.031%
3 h | RMSE | 6.143% | 6.468% | 6.978% | 6.676%
3 h | MAE | 3.467% | 3.702% | 3.758% | 3.991%
4 h | RMSE | 7.342% | 7.402% | 8.395% | 8.535%
4 h | MAE | 4.075% | 4.342% | 4.427% | 5.291%

Share and Cite

Yu, J.; Li, X.; Yang, L.; Li, L.; Huang, Z.; Shen, K.; Yang, X.; Yang, X.; Xu, Z.; Zhang, D.; et al. Deep Learning Models for PV Power Forecasting: Review. Energies 2024 , 17 , 3973. https://doi.org/10.3390/en17163973




COMMENTS

  1. Systematic Review

    A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer. Example: Systematic review. In 2008, Dr. Robert Boyle and his colleagues published a systematic review in ...

  2. How to Write a Systematic Review: A Narrative Review

    Background: A systematic review, as its name suggests, is a systematic way of collecting, evaluating, integrating, and presenting findings from several studies on a specific question or topic. A systematic review is research that, by identifying and combining evidence, is tailored to and answers the research question, based on an assessment of all relevant studies. To identify, assess ...

  3. Chapter 1. Carrying Out a Systematic Review as a Master's Thesis

    by Angela Boland, M. Gemma Cherry and Rumona Dickson. Chapter 1. Carrying Out a Systematic Review as a Master's Thesis. Explore the wealth of resources available across the web. Here are some good places to start. Link to the Campbell Collaboration, an organization that prepares, maintains and disseminates systematic reviews in education, crime ...

  4. How to do a systematic review

    A systematic review aims to bring evidence together to answer a pre-defined research question. This involves the identification of all primary research relevant to the defined review question, the critical appraisal of this research, and the synthesis of the findings. Systematic reviews may combine data from different ...

  5. PDF The systematic literature review process: a simple guide for public

    A systematic review may not be appropriate or necessary. Systematic reviews rely on the availability of an adequate body of literature on the topic of interest. They require a comprehensive search for relevant studies, data extraction, quality assessment, and synthesis of findings. Therefore, systematic reviews are not suitable when there is ...

  6. How to write a systematic literature review [9 steps]

    Analyze the results. Interpret and present the results. 1. Decide on your team. When carrying out a systematic literature review, you should employ multiple reviewers in order to minimize bias and strengthen analysis. A minimum of two is a good rule of thumb, with a third to serve as a tiebreaker if needed.

  7. PDF Writing an Effective Literature Review

    Introduction: The first part of any literature review is a way of inviting your reader into the topic and orientating them. A good introduction tells the reader what the review is about (its scope) and what you are going to cover. It may also specifically tell you ...

  8. Chapter 1: Carrying Out a Systematic Review as a Master's Thesis

    Doing a Systematic Review: A Student's Guide. by Angela Boland, M. Gemma Cherry and Rumona Dickson. Chapter 1: Carrying Out a Systematic Review as a Master's Thesis. What Is The Difference Between A Systematic Review And A Meta-Analysis?

  9. Guidelines for writing a systematic review

    A preliminary review, which can often result in a full systematic review, is used to understand the available research literature and is usually time- or scope-limited. Compiles evidence from multiple reviews and does not search for primary studies. 3. Identifying a topic and developing inclusion/exclusion criteria.

  10. A Systematic Review and Meta-Analysis of the Effectiveness of Child

    The present study is a systematic review and meta-analysis that explores the effectiveness of child-parent interventions for childhood anxiety disorders. The research located during the literature search was coded for inclusionary criteria and resulted in eight qualifying individual randomized controlled trials (RCT) with a total of 710

  11. PDF Master's Thesis: A Systematic Literature Review on Agile Project ...

    The thesis uses a systematic literature review to examine agile project management (APM). Since managing projects in an agile way is a relatively new concept compared to the traditional waterfall model, the results of the review provide an overview of the research conducted in this area. The results are expected to ...

  12. How to Do a Systematic Review: A Best Practice Guide ...

    Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to ...

  13. How to Do a Systematic Review: A Best Practice Guide for ...

    Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question.

  14. An overview of methodological approaches in systematic reviews

    1. INTRODUCTION. Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the "gold standard" of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search ...

  15. How to Write a Systematic Review Dissertation: With Examples

    Step 10: Perform systematic review data extraction. The next step is to extract relevant data from your studies. Your data extraction approach depends on the research design of the studies you used. If you use qualitative studies, your data extraction can focus on individual studies' findings, particularly themes.

  16. PDF Conducting a Systematic Review: Methodology and Steps

    Systematic reviews have gained momentum as a key method of evidence synthesis in global development research in recent times. As defined in the Cochrane Handbook, "systematic reviews seek to collate evidence that fits pre-specified eligibility criteria" ...

  17. Chapter 1. Carrying Out a Systematic Review as a Master's Thesis

    by Angela Boland, M. Gemma Cherry and Rumona Dickson. Chapter 1: Carrying Out a Systematic Review as a Master's Thesis. Chapter 3: Defining My Review Question and Identifying Inclusion and Exclusion Criteria. Chapter 5: Applying Inclusion and Exclusion Criteria. Chapter 8: Understanding and Synthesizing Numerical Data from Intervention Studies.

  18. How to write the methods section of a systematic review

    Keep it brief. The methods section should be succinct but include all the noteworthy information. This can be a difficult balance to achieve. A useful strategy is to aim for a brief description that signposts the reader to a separate section or sections of supporting information. This could include datasets, a flowchart to show what happened to ...

  19. Examples of systematic reviews

    Please choose the tab below for your discipline to see relevant examples. For more information about how to conduct and write reviews, please see the Guidelines section of this guide. Vibration and bubbles: a systematic review of the effects of helicopter retrieval on injured divers. (2018). Nicotine effects on exercise performance and ...

  20. A Guide to Conducting a Standalone Systematic Literature Review

    Communications of the Association for Information Systems, 2015, 37. hal-01574600. Volume 37, Article 43, 11-2015. A Guide to Conducting a Standalone Systematic Literature Review. Chitu Okoli, Concordia University, [email protected].

  21. (PDF) Systematic Literature Review: Some Examples

    Example of a systematic literature review: reference 5 is an example of a paper that uses a systematic literature review (SLR), e.g., (Event-Driven Process Chain for Modeling and Verification of ...

  22. Chapter 1. Carrying Out a Systematic Review as a Master's Thesis

    by Angela Boland, M. Gemma Cherry and Rumona Dickson. Chapter 1: Carrying Out a Systematic Review as a Master's Thesis. Chapter 3: Defining My Review Question and Identifying Inclusion and Exclusion Criteria. Chapter 5: Applying Inclusion and Exclusion Criteria. Chapter 8: Understanding and Synthesizing Numerical Data from Intervention Studies.

  23. Deep Learning Models for PV Power Forecasting: Review

    Accurate forecasting of photovoltaic (PV) power is essential for grid scheduling and energy management. In recent years, deep learning technology has made significant progress in time-series forecasting, offering new solutions for PV power forecasting. This study provides a systematic review of deep learning models for PV power forecasting, concentrating on comparisons of the features ...