Monash University

Understanding the influence of different proxy perspectives in explaining the difference between self-rated and proxy-rated quality of life in people living with dementia: a systematic literature review and meta-analysis

  • SPHPM Health Economics Group
  • Epidemiology and Preventive Medicine Alfred Hospital

Research output: Contribution to journal › Review Article › Research › peer-review

Purpose: Proxy assessment can be elicited via the proxy-patient perspective (i.e., asking proxies to assess the patient’s quality of life (QoL) as they think the patient would respond) or proxy-proxy perspective (i.e., asking proxies to provide their own perspective on the patient’s QoL). This review aimed to identify the role of the proxy perspective in explaining the differences between self-rated and proxy-rated QoL in people living with dementia. Methods: A systematic literature review was conducted by sourcing articles from a previously published review, supplemented by an update of the review in four bibliographic databases. Peer-reviewed studies that reported both self-reported and proxy-reported mean QoL estimates using the same standardized QoL instrument, published in English, and focused on the QoL of people with dementia were included. A meta-analysis was conducted to synthesize the mean differences between self- and proxy-report across different proxy perspectives. Results: The review included 96 articles from which 635 observations were extracted. Most observations extracted used the proxy-proxy perspective (79%) compared with the proxy-patient perspective (10%); with 11% of the studies not stating the perspective. The QOL-AD was the most commonly used measure, followed by the EQ-5D and DEMQOL. The standardized mean difference (SMD) between the self- and proxy-report was lower for the proxy-patient perspective (SMD: 0.250; 95% CI 0.116; 0.384) compared to the proxy-proxy perspective (SMD: 0.532; 95% CI 0.456; 0.609). Conclusion: Different proxy perspectives affect the ratings of QoL, whereby adopting a proxy-proxy QoL perspective has a higher inter-rater gap in comparison with the proxy-patient perspective.

Original language: English
Pages (from-to): 2055–2066
Number of pages: 12
Journal: Quality of Life Research
Volume: 33
DOI: 10.1007/s11136-024-03660-w
Publication status: Published - 2024
  • Outcome measurement
  • Quality of Life


Access to Document

  • 10.1007/s11136-024-03660-w Licence: CC BY

Other files and links

  • Link to publication in Scopus

T1 - Understanding the influence of different proxy perspectives in explaining the difference between self-rated and proxy-rated quality of life in people living with dementia

T2 - a systematic literature review and meta-analysis

AU - Sokolova, Valeriia

AU - Bogatyreva, Ekaterina

AU - Leuenberger, Anna

A2 - Engel, Lidia

N1 - Publisher Copyright: © The Author(s) 2024.



KW - Agreement

KW - Dementia

KW - Outcome measurement

KW - Quality of Life

UR - http://www.scopus.com/inward/record.url?scp=85191174607&partnerID=8YFLogxK

U2 - 10.1007/s11136-024-03660-w

DO - 10.1007/s11136-024-03660-w

M3 - Review Article

C2 - 38656407

AN - SCOPUS:85191174607

SN - 0962-9343

JO - Quality of Life Research

JF - Quality of Life Research


  • Open access
  • Published: 29 August 2024

A meta-analysis of performance advantages on athletes in multiple object tracking tasks

  • Hui Juan Liu
  • Qi Zhang
  • Sen Chen
  • Yu Zhang

Scientific Reports, volume 14, Article number: 20086 (2024)


  • Neuroscience

This study compared the multiple object tracking (MOT) performance of athletes vs. non-athletes and expert athletes vs. novice athletes by systematically reviewing and meta-analyzing the literature. A systematic literature search was conducted using five databases for articles published until July 2024. Healthy people were included, specifically classified as athletes and non-athletes, or experts and novices. Potential sources of heterogeneity were explored using a random-effects model. Moderator analyses were also performed. A total of 23 studies were included in this review. Regarding the overall effect, athletes were significantly better at MOT tasks than non-athletes, and experts performed better than novices. Subgroup analyses showed that expert athletes had a significantly larger effect than novices, and that the type of sport significantly moderated the difference in MOT performance between the two groups. Meta-regression revealed that the number of targets and duration of tracking moderated the differences in performance between experts and novices, but did not affect the differences between athletes and non-athletes. This meta-analysis provides evidence of performance advantages for athletes compared with nonathletes, and experts compared with novices in MOT tasks. Moreover, the two effects were moderated by different factors; therefore, future studies should classify participants more specifically according to sports levels.

Introduction

Visual attention plays a crucial role in all tasks involving perception and action, particularly in sports. In highly dynamic and constantly changing scenarios, players need to flexibly adjust their visual attention while simultaneously performing various activities to act successfully, requiring continuous attention throughout the process 1 , 2 . Nakayama and Mackeben 3 first linked perceptual research to attention by dividing it into instantaneous and continuous attention. This study focused on continuous dynamic attention, involving multiple moving objects simultaneously over a period of a few seconds. Continuous attention may be static or dynamic, as the stimulus may remain stationary, or motion may occur during sustained attention to the target. The process of multiple object tracking (MOT) involves continuous attention. The core aspects of attention include selectivity, capacity limitations, and subjective effort 4 ; MOT serves as a visual illustration of these three components of attention 5 .

The MOT paradigm is a cognitive task originally developed to study visual attention 4 ; the paradigm was later used by researchers to evaluate and enhance the ability to track targets within a dynamic environment where all objects are in constant motion 6 , 7 . Performance on MOT tasks is defined as being able to successfully track several moving circles within a specified arena 8 . Typically, tasks require participants to track multiple targets. The general procedure is as follows: first, objects with all the same characteristics appear in the visual field (usually 6 to 10 objects); then, several of these objects are designated as targets (usually 2 to 5 objects), and the participant tracks the objects during the following time period. At the end of the object movement, the tracking performance for one or all targets is tested. Researchers can manipulate the number of targets 9 and distractors 10 , speed of movement 11 , and tracking duration 12 to explore and compare the dynamic visual attention of various populations. Differences in MOT performance among experts and novices in various fields have also been studied, such as drivers 13 , video game players 14 , and athletes 12 , 15 . Especially in sports, many studies have focused on the MOT performance advantages of expert athletes.
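The general procedure described above can be sketched as a trial generator. A minimal illustration in Python; parameter names and default values are chosen for this example (within the typical ranges of 6-10 objects and 2-5 targets mentioned in the text) rather than taken from any cited study:

```python
import random

def make_mot_trial(n_objects=8, n_targets=4, speed_deg_s=10.0, duration_s=8.0, seed=None):
    """Generate one MOT trial configuration: identical objects, a cued subset of targets.

    Names and defaults are illustrative only; real experiments also specify
    object positions, motion rules, and the response probe.
    """
    rng = random.Random(seed)
    objects = list(range(n_objects))
    targets = rng.sample(objects, n_targets)          # cued before motion begins
    distractors = [o for o in objects if o not in targets]
    return {"targets": targets, "distractors": distractors,
            "speed_deg_s": speed_deg_s, "duration_s": duration_s}

trial = make_mot_trial(seed=1)
print(len(trial["targets"]), len(trial["distractors"]))  # 4 4
```

Varying `n_targets`, the distractor count, `speed_deg_s`, or `duration_s` corresponds to the parametric manipulations of task difficulty discussed in the text.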

However, previous studies on the performance advantages of athletes in MOT have yielded controversial results. Some studies found that individuals with sports expertise perform better on MOT tasks, including tracking accuracy 16 , reaction time 17 , and tracking speed thresholds 6 . A previous study found that volleyball athletes had faster reaction times than nonathletes when detecting changes in targets in MOT 18 . Recent studies have also found that basketball and volleyball athletes have a performance advantage over nonathletes in MOT tasks 10 , 18 , 19 , 20 , 21 , 22 . Professional athletes in soccer, ice hockey, rugby, and other sports also outperformed high-level amateur athletes and nonathletes in MOT tasks, and showed higher learning efficiency 23 . These studies investigated players from team ball games (e.g., basketball 19 , 24 ) because dynamic visual attention plays a key role in these types of sports. Players need to simultaneously attend not only to the spatial position of the ball and the court but also to the movement and position of teammates and opponents 25 .

Nevertheless, not all empirical evidence is consistent with this conclusion; some studies have found no statistical difference between athletes and non-athletes or experts and novices. Memmert et al. 26 showed that team sports experts did not perform better than novices on visual attention tasks. Li et al. 27 also found that there was no significant difference in tracking accuracy between expert athletes and novices when the number of tracking targets was small. These findings demonstrated that there was no discernible difference in the performance of MOT tasks between athletes and nonathlete college students or athletes of different levels. In addition, one study found that basketball players had lower MOT scores than nonathletes 28 . This demonstrates that the MOT performance advantages of athletes are controversial.

Despite the inconsistent findings of prior studies concerning MOT, no meta-analysis has yet examined MOT performance advantages in athletes, making such an analysis worth undertaking. Following the cognitive skill transfer hypothesis, training in a cognitive task may enhance performance on related, but untrained, cognitive tasks 29 , 30 . The so-called broad transfer hypothesis argues that long-term experience in team sports leads to adaptations in basic cognitive abilities, causing performance differences between experts and novices even on tasks independent of the experts’ domain 31 , 32 . In this regard, the specific cognitive demands of open-skill sports, such as basketball or soccer, have been argued to produce superior cognitive abilities in elite athletes 33 .

The cognitive advantages of expert athletes in other areas have been explored in many meta-analyses 31 , 34 . A previous meta-analysis 31 that examined whether professional athletes remained ‘experts’ in the cognitive lab found that expert athletes performed better on measures of processing speed and a category of varied attentional paradigms (e.g., the Paced Auditory Serial Addition Task (PASAT) and the Eriksen arrow flankers task). They proposed further research with higher-level cognitive tasks, such as tasks of executive function and complex tasks involving attention (e.g., MOT) 31 . Previous systematic reviews have also shown that sporting experts are more successful than novices when reacting to an upcoming event, while recruiting fewer attentional resources and devoting more attention to the analysis of subsequent targets in unexpected situations 35 . For MOT tasks, attentional resource theory 36 would hold that there is a pool of resources required for tracking objects, and that the limit on tracking depends on the resource demands required to track each object. For example, if the tracking task were so difficult that tracking one target consumed all available tracking resources, then only a single item could be tracked. However, if each item only required one-quarter of the total available resources, then four objects could be tracked. Experts may require fewer attentional resources for each item due to their superior cognitive abilities 37 . Therefore, it is reasonable to speculate that this expert advantage may be revealed in MOT tasks.
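The resource-pool arithmetic in the example above can be made explicit. A toy sketch; the function name and defaults are illustrative, not part of the cited theory:

```python
def max_trackable(total_resources=1.0, per_item_demand=0.25):
    """Attentional resource theory: tracking capacity is limited by how much
    of the shared resource pool each tracked object consumes."""
    return int(total_resources // per_item_demand)

print(max_trackable(per_item_demand=1.0))   # one target consumes everything -> 1
print(max_trackable(per_item_demand=0.25))  # a quarter of the pool each -> 4
```

Under this framing, an expert advantage corresponds to a smaller `per_item_demand`, and hence a higher tracking capacity.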

Furthermore, the reasons for differences in the results of MOT tasks performed by athletes with different expertise levels in previous studies may vary; they may include task parameters, task presentation, and the type and experience of participants. Therefore, the moderating factors affecting the differential performance in MOT need to be examined. The difficulty level of the MOT task can be varied parametrically (e.g., number of targets 38 ; number of distractors 39 ; speed of the target 40 ; and duration of tracking 41 ), which may result in group differences (i.e., expertise effects 20 , 42 ). For example, Qiu et al. 10 conducted MOT tests with graded levels of the number of targets (two, three, or four); compared with nonathletes, athletes performed better in the three- and four-target conditions. Zhang et al. 43 found that as the speed of the movement increased, athletes showed a stronger tracking advantage than non-athletes; moreover, the higher the skill level of the athletes, the more obvious the effect. As different settings of MOT parameters affect the MOT performance, the number of targets and distractors, speed of the targets, and duration of tracking should be used as moderating variables in meta-analysis.

In recent years, MOT has not been limited to 2D displays; with the development of virtual reality technology, 3D-MOT in sports has attracted researchers’ attention. Compared with 2D-MOT, 3D-MOT offers advantages for studying motor performance, such as creating and controlling virtual motion scenes 44 , presenting stereoscopic vision 7 , and immersing the viewer in the visual scene; that is, athletes can experience the sports scene in person instead of watching a video from a third-person perspective 45 . Based on these advantages, some studies indicate that virtual reality technology can measure perceptual and motor performance more effectively than traditional methods 46 . Cooke et al. 47 found that tracking accuracy in 3D-MOT tasks was better than in 2D-MOT tasks; when object distance increased, individuals could track objects at a faster speed in the 3D-MOT task. This suggests that added depth information enhances tracking performance by increasing object differentiation, because objects that are confounded in a two-dimensional plane are likely to be distinguishable in three-dimensional space. However, the effect of 3D-MOT on the final score remains unclear. This difference comes down to whether the display is 3D or 2D, which we examined as a moderating variable termed display type.

Moreover, athletes’ sports level and sport type may influence MOT performance. A meta-analysis of cognitive function in expert and elite athletes showed that high-performance-level athletes have superior cognitive function compared to low-performance-level athletes 48 . This indicates that the competitive level plays a role in athletes’ MOT performance. However, the cognitive functions in Scharfen and Memmert’s 48 study only included executive functions, visual perceptual ability, and motor inhibition and did not focus on MOT. In addition, a previous meta-analysis found a significant moderating effect of sport type on the relationship between expertise level and perceptual cognitive skills, suggesting that the difference in performance between experts and novices may differ in various sports 34 . This suggests that the type of sport may have an effect on perceptual cognition; therefore, the effect of competitive level and sport type on MOT was also explored in this study.

Additionally, previous studies on the advantage of expert athletes included different group comparisons; some compared experts with novices 35 , while others compared athletes with non-athletes 16 , 49 . This difference in classification is mainly due to differences in the control group; that is, the classification and characteristics of novices and nonathletes are inconsistent. However, there is no unified definition of novice athletes. For example, one study considered novice athletes to be players with no more than two years of formal experience in any sport 26 , while others defined novice athletes as players with less than a few years of practice in particular sports; for example, novice athletes in martial arts had less than one year of practice 50 and those in badminton had less than 2–3 years of practice 51 . Some studies referred to novice athletes as people with no sports experience 52 , thereby conflating novice athletes with non-athletes.

Thus far, similar meta-analyses on differences in athletes’ advantages have compared experts with novices in areas such as cognitive function 48 , visual search 53 , and quiet eyes 54 . However, it is important to note that these studies differentiated only between experts and novices and mixed different athletic levels (non-athletes were not distinguished from novices), which may not be the clearest comparison, and the results may be different if further differentiation is made. Vague definitions of novices or experts directly affect performance 53 . One study 43 categorized participants into three groups: experts, novices, and controls or non-athletes, and found that the tracking accuracy of the expert group was significantly higher than that of the control group. Furthermore, the difference in tracking accuracy between the novice and control groups was marginally significant, whereas there was no significant difference between the expert and novice groups. This suggests that different categories produce differences in performances.

Based on previous studies, this meta-analysis addresses the differences between athletes and non-athletes. Athletes include both expert and novice athletes, who have different levels of sports experience, whereas non-athletes are those with no sports training experience. Therefore, we aimed to distinguish the performance advantage of athletes over nonathletes, as well as of expert over novice athletes. This approach is necessary to avoid potential confounding caused by unclear definitions.

In sum, the MOT performance advantage in athletes is controversial and is affected by various factors. However, no previous meta-analysis has reported the dominant performance of athletes in MOT tasks. Therefore, the present study aimed to conduct a systematic review and meta-analysis to obtain a clearer and stronger conclusion about the differences between athletes vs. nonathletes or experts vs. novices in dynamic visual tracking in sports. Potential moderators such as athletes’ competitive level, type of sport, parameters of the MOT task and type of display were considered. Based on the above discussion, we made the following hypotheses:

H1 (hypothesis related to overall effect sizes):

Athletes would have a significant MOT performance advantage over non-athletes (H1a), and likewise experts would have an MOT performance advantage over novices (H1b), although the extent of this advantage may vary.

H2 (hypothesis related to moderator analyses of athletes vs. non-athletes):

Type of sport (H2a), parameters of the MOT task (H2b), type of display (H2c), and athletes’ competitive level (H2d) would significantly moderate the difference in MOT performance between athletes and non-athletes.

H3 (hypothesis related to moderator analyses of experts vs. novices):

Type of sport (H3a), parameters of the MOT task (H3b), and type of display (H3c) would significantly moderate the difference in MOT performance between experts and novices.

H4 (hypothesis related to comparison of the two groups: athletes vs. non-athletes and experts vs. novices):

A comparison of effect sizes between athletes vs. nonathletes and experts vs. novices would reveal a significant difference.

Materials and methods

The review procedures followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement 55 . Following recent best practices to enhance transparency, replicability, and robustness of systematic reviews in sport and exercise psychology, this study was registered via the Open Science Framework. The systematic review, search strategy, and meta-analysis are detailed in a registration document available in the Open Science Framework, as are all supplemental files ( https://osf.io/ncq7v/ ).

Data search and selection criteria

The categories for comparison of MOT performance between athletes and other groups included athletes and non-athletes, as well as experts and novices. This review focused on published research that explored the differences in performance on MOT tasks between athletes and non-athletes or experts and novices.

Systematic database searches were performed up to July 2024 using Web of Science, PubMed, SPORTDiscus, ProQuest, and Scopus. As many related studies are published in Chinese, the Chinese National Knowledge Infrastructure (CNKI) database was also searched. For the searches, the terms ‘multiple object tracking*’ OR ‘MOT’ OR ‘2D-MOT’ OR ‘3D-MOT’ were combined (AND) with ‘sport*’ OR ‘athlete*’ OR ‘expert*’ OR ‘player*’ OR ‘elite*’ OR ‘novice*’ OR ‘nonathlete*’. Additionally, we conducted a manual search using Google Scholar: we searched the references of eligible articles and identified articles that cited them. Two experienced researchers conducted the literature search with the assistance of two other researchers.

After removing duplicates, an initial screening of titles and/or abstracts was conducted using the following inclusion criteria to identify relevant research reports (articles, dissertations, and theses): (1) the study involved healthy athletes, and (2) MOT tasks were performed and MOT performance was reported. Following the initial screening, studies were included if they (1) included athletes and non-athletes or experts and novices, (2) reported a comparison of MOT task performance between experts and novices or athletes and non-athletes, (3) reported participation outcomes, and (4) had sufficient data to calculate effect sizes. If relevant research or data were not available online, the authors were contacted personally.

Exclusion criteria related to insufficient data reporting included lack of data on athletes or nonathletes, no specific sport mentioned, and no MOT exposure. Literature reviews were excluded.

Data extraction

Data were extracted independently by two authors, and the remaining authors were responsible for verification. The extracted data included study characteristics (authors, publication year, and sport), sample characteristics (country, sample size, age, etc.), MOT parameters (number of targets and distractors, targets’ movement speed, and duration of tracking), type of MOT display, and outcomes. The coding was conducted separately by two authors according to a coding manual determined in advance and then cross-checked 56 . Disputed records were discussed in the group until consensus was reached to ensure coding accuracy. The final inter-coder consistency was 97.4% (ICC).

The basis for the comparative classification of the articles was as follows: first, according to the different definitions of nonathletes and novices, samples in previous difference comparison studies were divided into athlete and nonathlete comparison groups or expert and novice comparison groups. As the athletes included both experts and novices, they were divided into two groups (expert and novice athletes) for the moderating analysis. This is different from the comparison of experts and novices in the other group classification. This classification effectively distinguishes two different control groups: for the comparison of athletes vs. non-athletes, the subgroups were expert athletes vs. non-athletes and novice athletes vs. non-athletes, both using non-athletes as the control; for the comparison of experts vs. novices, the control group was novice athletes.

Different studies often used different classification criteria, and we attempted to harmonize them. The classification criteria for athletes and nonathletes, as well as experts and novices, were as follows: in previous studies, the highest level of competition and years of training experience were commonly used to distinguish expertise levels 57 . This study likewise considered the highest level of competition and years of experience as classification criteria. Based on the division of training years by Memmert et al. 26 , the study divided expert and novice athletes by years of practice (over vs. under 10 years). For the highest level of competition, those at the provincial level and above were classified as experts. In addition, most of the included Chinese articles listed the athletes’ grades, and sports-level certification was the main criterion for evaluating Chinese athletes. Chinese athletes are typically classified into three technical grades: master sportsman, national first-level athlete, and national second-level athlete. Among these, master sportsman is the highest title awarded by the State Sports Commission of China. Athletes with the master sportsman certificate are required to have participated in, and achieved results at, international competitions such as the Olympic Games, World Championships, World Cup, Asian Games, and Asian Championships. Athletes with the national first-level athlete certificate are required to place in the top three in non-team events, or fourth to eighth in team events, at the national championships. As the criteria for master sportsmen and national first-level athletes are comparable to the criteria for expert athletes in other countries, they were classified as expert athletes.
Meanwhile, athletes with the national second-level athlete certificate are those who have participated in the National A-League, B-League, Cup League, or National Youth Games, or who have won first to fourth place in provincial, autonomous-region, or municipality championships; they were classified as novice athletes 43 . In addition to certified athletes, some physical education students at sports colleges in China also receive systematic sports training, but their level is usually below that of Chinese national second-level athletes; they were also classified as novice athletes. Finally, if a study mixed two categories into one, the category with the higher level was used. Table 1 shows the criteria for defining experts, novices, and non-athletes in this study; participants were classified if they met one or more of these criteria. Table 2 shows the classification of the two comparison groups (athletes and non-athletes 10 , 12 , 16 , 17 , 19 , 20 , 21 , 22 , 24 , 28 , 43 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 ; experts and novices 10 , 21 , 26 , 27 , 43 , 51 , 60 , 66 ) of the studies included in this meta-analysis.
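The grouping rules described above can be illustrated as a small classifier. This is a simplified sketch only: the actual coding used the full criteria in Tables 1 and 2, and all value names and thresholds here are assumptions for illustration:

```python
def classify_participant(years_training=0.0, highest_level=None, cn_grade=None):
    """Simplified sketch of the grouping rules described in the text.

    `cn_grade` is the Chinese certification ('master', 'first', 'second');
    `highest_level` is the highest competition level reached. Both the value
    names and the 10-year threshold are illustrative simplifications.
    """
    if cn_grade in ("master", "first"):
        return "expert"          # master sportsman / national first-level
    if cn_grade == "second":
        return "novice"          # national second-level
    if highest_level in ("provincial", "national", "international"):
        return "expert"          # provincial level and above
    if years_training >= 10:
        return "expert"          # over 10 years of practice
    if years_training > 0:
        return "novice"          # trained, but below the expert criteria
    return "non-athlete"         # no sports training experience
```

For example, `classify_participant(cn_grade="second")` yields `"novice"`, while a participant with no training at all falls into the `"non-athlete"` control group.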

Assessment of study quality

As the studies were non-randomized by nature (i.e., experts were compared with novices) and the term ‘exposure’ was more appropriate than ‘intervention’, Cochrane’s RoBANS tool 67 was used to assess the risk of bias arising from (i) participant selection, (ii) confounding variables, (iii) measurement of exposure, (iv) blinding of outcome assessment, (v) incomplete outcome data, and (vi) selective outcome-reporting. Six categories each were assessed as ‘high risk’, ‘unclear risk’, or ‘low risk’. The risk of bias was considered similar for all primary outcomes, as the data were generated from the same MOT technology in each study. Therefore, only one risk of bias assessment was performed for each study. The RoBANS assessment was conducted independently by two authors (L and Z). Disagreements were resolved by consensus or consultation with a third assessor (C), when required.

Summary measures, synthesis of results, and publication bias

MOT data were analyzed using Comprehensive Meta-Analysis 3.3 (CMA 3.3) software (Biostat, Englewood, NJ, USA) for meta-analysis and meta-regression. The level of statistical significance was set at p < 0.05. The MOT outcomes were (1) MOT task accuracy (ACC), (2) MOT task reaction time (RT), and (3) tracking speed thresholds. The two groups were compared based on these outcomes. Following previous work on methods for handling multiple outcomes in meta-analysis 56 , 68 , we conducted a separate analysis for each category of MOT outcome, each followed by moderator analyses.

Hedges’ g (standardized mean difference effect size) between the two groups with the corresponding 95% confidence interval (95% CI) was calculated. A random-effects model was used to account for differences between studies 69 . The evaluation criteria for the effect size were as follows 70 : trivial: <0.2; small: 0.2–0.6; moderate: 0.6–1.2; large: 1.2–2.0; very large: 2.0–4.0; and extremely large: >4.0. The direction of Hedges’ g was manually adjusted to consider variable outcomes, with a positive effect size indicating that the athlete or expert performed better on the MOT task.
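Hedges' g for a two-group comparison can be computed directly from group summary statistics. The following Python sketch is an illustrative re-implementation, not the paper's code (the analyses were run in CMA 3.3); the function name and the normal-approximation confidence interval are our own:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) between two groups.

    Positive g means group 1 (athletes/experts) performed better,
    matching the sign convention used in the meta-analysis.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    # Approximate variance of g and a normal-theory 95% CI
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    se = math.sqrt(var_g)
    return g, (g - 1.96 * se, g + 1.96 * se)
```

With the thresholds quoted above, a g of 0.92 (experts vs. novices) falls in the "moderate" band and 0.56 (athletes vs. non-athletes) in the "small" band.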

The heterogeneity of the effect sizes was tested using the Q statistic, a measure of the total observed dispersion of the estimated effect sizes. Total heterogeneity ( Q T ) is used to determine whether the effect sizes of all studies are homogeneous 71 . A significant Q T value ( p < 0.05) indicates heterogeneous results and warrants a search for potential moderating variables 72 . In the moderator analyses, Q B and Q W statistics are computed to test for between- and within-group homogeneity, respectively: Q W quantifies the heterogeneity of studies within a moderator category, whereas Q B tests for a difference in the pooled effect sizes between moderator categories 71 . In addition to the Q statistics, the I 2 statistic was used; it expresses the percentage of total variability attributable to true heterogeneity rather than sampling error. Heterogeneity was classified as low, moderate, or high when I 2 was <25%, between 25 and 75%, or >75%, respectively 73 .
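The Q T and I 2 statistics described above can be reproduced from per-study effect sizes and variances. A minimal Python sketch (illustrative only; the fixed-effect pooled estimate is used inside Q, and the function name is ours):

```python
def q_and_i2(effects, variances):
    """Cochran's Q and the I^2 statistic for a set of study effect sizes."""
    w = [1 / v for v in variances]                # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Q: weighted squared deviations from the pooled estimate
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # I^2: percentage of total variability beyond what df would predict
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

For example, identical effects give Q = 0 and I 2 = 0%, while widely dispersed effects with small variances push I 2 toward the >75% "high" band reported in this review.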

Publication bias for the studies pooled for the meta-analysis was assessed by visually inspecting funnel plots and by computing Egger’s test results 74 . Statistical significance was defined as 2-sided p < 0.05, which was taken as a reflection of the presence of publication bias. In the absence of publication bias, the funnel plots should resemble an inverted funnel shape, with studies scattered symmetrically around the pooled effect size estimate.
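Egger's test regresses the standardized effect (effect/SE) on precision (1/SE) and asks whether the intercept departs from zero. A self-contained Python sketch (illustrative; the actual analysis was run in CMA 3.3, and this version returns the intercept and its t statistic on k − 2 degrees of freedom rather than a two-sided p value):

```python
import math

def egger_intercept(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    An intercept far from zero suggests small-study/publication bias.
    Returns (intercept, t statistic with k - 2 degrees of freedom).
    """
    x = [1 / s for s in ses]                       # precision
    y = [e / s for e, s in zip(effects, ses)]      # standardized effect
    k = len(x)
    xbar, ybar = sum(x) / k, sum(y) / k
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (k - 2)          # residual variance
    se_int = math.sqrt(s2 * (1 / k + xbar ** 2 / sxx))  # SE of the intercept
    return intercept, intercept / se_int
```

A |t| well below the critical value (as with the p = 0.993 and p = 0.088 results reported later) is consistent with a symmetric funnel plot.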

Moderator analyses

Using a random-effects model for analysis, potential sources of heterogeneity likely to influence MOT performance in athletes, non-athletes, experts, and novices were examined.

The potential moderating effects of participant and/or study characteristics (covariates) on the mean overall effect size were explored using meta-regression 75 and subgroup analysis. Continuous variables, namely (1) number of targets, (2) number of distractors, (3) speed of target movement, and (4) duration of tracking, were analyzed using meta-regression. Categorical variables, namely (5) type of sport, (6) competitive level, and (7) type of display, were examined in subgroup analyses. No moderator analysis was conducted for gender because an insufficient number of studies reported results for each sex separately.
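A meta-regression of effect sizes on a single continuous moderator reduces, in its simplest fixed-effect form, to an inverse-variance weighted least-squares fit. The sketch below is a simplification of the random-effects meta-regression performed in CMA (it omits the between-study variance component, and the function name is ours):

```python
def meta_regression_slope(effects, variances, moderator):
    """Inverse-variance weighted regression of effect size on one continuous
    moderator (fixed-effect simplification of CMA's meta-regression)."""
    w = [1 / v for v in variances]
    sw = sum(w)
    # Weighted means of moderator and effect size
    xbar = sum(wi * x for wi, x in zip(w, moderator)) / sw
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Weighted sums of squares and cross-products
    sxx = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, moderator))
    sxy = sum(wi * (x - xbar) * (y - ybar)
              for wi, x, y in zip(w, moderator, effects))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return slope, intercept
```

A positive slope for "number of targets", as reported later for the expert vs. novice comparison, means the expert advantage grows with tracking load.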

Coding of subgroup variables

For a subgroup (categorical) variable, each subgroup should include a minimum of four studies (k ≥ 4) 76 .

Competitive level

Based on the classification criteria in our data extraction section, we categorized athletes into expert and novice athletes as shown in Table 1 . However, the competitive level as a moderating variable was only applicable for comparisons between athletes and non-athletes. Due to insufficient available literature, competitive level could not be used as a moderating variable for expert vs. novice comparisons.

Type of sport

The included studies span various sports, including basketball, ice hockey, handball, soccer, volleyball, rugby, badminton, swimming, and mixed sports. Because the number of studies on non-ball sports (e.g., swimming) was small (k < 4), and because mixed sports comprised both ball and non-ball sports and could not be merged into a single coherent category, non-ball and mixed sports were not included in the subgroup analysis. Sports with more than four effect sizes, such as basketball, ice hockey, and handball, were grouped into individual categories, whereas ball sports with fewer than four effect sizes were combined into an “other ball sports” category.

Type of display

The classification was based on the display mode of the MOT task: 2D-MOT was conducted on a computer display, while 3D-MOT used a perceptual-cognitive training system based on a 3D virtual environment 77 . However, since only one included study used 3D-MOT, this display type could not be incorporated in the subgroup analysis.

Search results

The search process identified 1,571 studies (462 from Web of Science, 112 from PubMed, 160 from SPORTdiscus, 197 from ProQuest, 424 from Scopus, 211 from CNKI, and 5 additional records identified through other sources (Google Scholar)). A PRISMA flow diagram of the screening process and the numbers of retrieved and excluded articles is shown in Fig. 1 55 . Among these, 492 duplicate studies were excluded, and a further 1,031 studies were excluded after their titles and abstracts were read. After full-text screening, nine additional studies were excluded because: (1) there was no between-group comparison 78 , 79 , 80 ; (2) participants were grouped according to MOT scores rather than skill 81 ; (3) data were insufficient 82 ; (4) the participants were athletes with intellectual disabilities 83 ; (5) only one target was tracked 84 ; (6) participants did not include non-athletes or novices 85 (soccer experience: ≥6.4 years); (7) all participants were non-athletes 11 (they had only been involved in sports activities). For one study with insufficient data, the original data were obtained from the authors via email 12 .

Figure 1. PRISMA flowchart of article screening and selection.

After reviewing the full texts, we identified 27 articles that met the inclusion criteria. These articles included three outcome variables for MOT: accuracy (ACC), reaction time (RT), and speed thresholds. Ideally, a meta-analysis should be performed for each outcome variable. However, only one article used RT as the sole outcome variable, and three articles used speed threshold as the sole outcome variable. Additionally, although three papers included both RT and ACC, and one paper included both speed and ACC, the number of articles that included RT or speed, and could be classified into athlete vs. non-athlete or expert vs. novice groups, was fewer than three. Due to the limited number of articles reporting RT and speed variables, we decided to include only articles that reported ACC outcomes. Consequently, four articles that did not include ACC indicators were excluded 6 , 18 , 86 , 87 . Ultimately, we included 23 articles that utilized ACC as the outcome variable for meta-analysis and moderator analysis.

Accordingly, 23 studies were considered eligible for meta-analysis, yielding 25 comparisons: eighteen between athletes and non-athletes and seven between expert and novice athletes. The studies by Qiu et al. 10 and Zhang et al. 43 included experts, novices, and non-athletes; these two studies therefore contributed to both the athlete vs. non-athlete and the expert vs. novice comparisons. The classifications are listed in Table 2 .

Study characteristics

The sample characteristics and exposures used in the included studies are shown in the supplemental file (Table S1 ). The main characteristics were as follows. (1) Sample size: a total of 1,453 participants took part in the included studies; group sample sizes ranged from 8 to 44. (2) Age: all studies reported participants’ ages, which ranged from 16 to 29 years with a mean of 22.05 years. (3) Competitive level: all studies reported the participants’ competitive level. Of the included studies, 14 reported on expert athletes (61%), 11 reported on novice athletes (48%), and 2 included both experts and novices. (4) Type of sport: among the 23 articles, 9 concerned basketball (39%), 3 handball (13%), 3 soccer (13%), 2 ice hockey (9%), 1 rugby (4%), 1 badminton (4%), and 4 mixed sports including both ball and non-ball sports (17%). (5) Type of display: only one study used 3D-MOT (4%), while the other 22 studies used 2D-MOT (96%). (6) MOT parameters: 18 articles (78%) reported all MOT parameters (number of targets, number of distractors, speed of target movement, and duration of tracking). Of the remaining five articles, two did not report tracking duration and three did not report tracking speed.

Study quality

Regarding participant selection, a low risk of bias was identified in 96% of the studies; the remaining 4% were at high risk owing to a vague definition of novice inclusion. For confounding variables, a low risk of bias was found in 61% of the studies, as almost half of the trials used a single-blind design; in some studies, however, blinding was not mentioned or participants knew the purpose of the experiment, which may have influenced the results through the Hawthorne effect. Most studies had a low risk of bias for exposure measurement (83%), as most trials provided familiarization with the testing procedures and reported the MOT task procedures in detail. Regarding blinding of outcome assessment, testers were usually not blinded, but in some studies a second, independent tester provided inter-rater reliability calculations; where this did not occur, the study was judged to be at high risk. Overall, a low risk of bias was found in 91% of the studies for this domain. For incomplete outcome data, most studies had a low risk of bias (74%), often not only reporting the final sample but also clearly indicating whether the participants were part of a larger initially recruited group. Finally, regarding selective outcome reporting, only about half of the studies had a pre-registered or pre-published protocol against which the manuscript could be compared; consequently, it was unclear whether outcome reporting was complete or selective in 39% of the studies. The complete quality scores for each study are presented in the supplemental file (Table S2 ).

Effect sizes

Athletes vs. non-athletes

The results of this meta-analysis showed a significant small effect, with better MOT performance for athletes compared to non-athletes (g = 0.56; 95% CI = 0.29 to 0.84; p < 0.001; Fig.  2 ). There was high heterogeneity in the overall effect ( Q T = 130.478, p < 0.001, I 2 = 80.37%). No significant bias was detected with Egger’s test ( p = 0.993), and visual inspection of the funnel plots also indicated low publication bias (supplemental file: Fig.  S1 -A).

Figure 2. The effect sizes (Hedges’ g) of the MOT performance for athletes compared with non-athletes included in the meta-analysis. 95% CI = 95% confidence interval. The size of the plotted squares reflects the statistical weight of each study.

Expert athletes vs. novice athletes

The results of this meta-analysis showed a significant moderate effect, with better MOT performance for expert athletes compared to novice athletes (g = 0.92; 95% CI = 0.45 to 1.39; p < 0.001; Fig.  3 ). There was high heterogeneity in the overall effect ( Q T = 60.25, p < 0.001, I 2 = 83.40%). No significant bias was detected with Egger’s test ( p = 0.088), and visual inspection of the funnel plots also indicated low publication bias (supplemental file: Fig.  S1 -B).

Figure 3. The effect sizes (Hedges’ g) of the MOT performance for experts compared with novices included in the meta-analysis. 95% CI = 95% confidence interval. The size of the plotted squares reflects the statistical weight of each study.

Comparison of the two differences

A comparison of the effect sizes of the difference between athletes and nonathletes and between expert and novice athletes showed no significant difference in the effect sizes ( Q B = 2.14, p = 0.144).
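The Q B statistic used for such comparisons can be obtained as the total Q across all studies minus the within-subgroup Q summed over subgroups. A fixed-effect Python sketch (illustrative only; CMA performs the equivalent computation under the model used in the analyses, and both function names are ours):

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted squared deviations from the pooled estimate."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))

def q_between(groups):
    """Between-subgroup heterogeneity: Q_B = Q_total - sum of within-group Q.

    `groups` is a list of (effects, variances) pairs, one per subgroup.
    """
    all_e = [e for es, _ in groups for e in es]
    all_v = [v for _, vs in groups for v in vs]
    q_within = sum(cochran_q(es, vs) for es, vs in groups)
    return cochran_q(all_e, all_v) - q_within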

Moderating effect

The competitive level, type of sport, and type of display were analyzed for the subgroups. Based on previous studies, the athletes included in this study were further categorized into expert and novice athletes. Seventeen effect sizes were reported for expert athletes and ten for novice athletes. The competitive level of the sample had a significant effect on the effect size, Q B (1) = 10.21, p = 0.001. The effect size for expert athletes vs. non-athletes in the MOT task (g = 0.84, p < 0.001; moderate effect; Q W (16) = 80.15, p = 0.001) was significantly higher than that for novice athletes vs. non-athletes (g = 0.07, p = 0.673; trivial effect; Q W (9) = 22.49, p = 0.007). Regarding the type of sport, we divided the types of sports into basketball, ice-hockey, handball, and other ball sports. Sports type had no significant effect on effect size Q B (4) = 4.94, p = 0.177. For the display type, only one effect size was reported for 3D display while 74 were reported for 2D displays; therefore, it was not included in the moderating analysis.

The MOT parameters were analyzed using meta-regression. The number of tracked targets ranged from 2 to 6, and had no effect on the effect size (coefficient: -0.01, p = 0.970). The number of distractors ranged from 2 to 12, and had no effect on the effect size (coefficient: 0.04, p = 0.228). The speed of the targets ranged from 3 to 25.5°/s, and had no effect on the effect size (coefficient: -0.01, p = 0.689). The tracking duration ranged from 4s to 12s, and had no effect on the effect size (coefficient: 0.05, p = 0.094). Results of the moderating analysis are summarized, with the full values presented in Table 3 .

The type of sport was analyzed in subgroups. Regarding type of sport, the sports were divided into basketball, ice hockey, and other ball sports as moderating variables. Sport type had a significant effect on the effect size, Q B (2) = 12.26, p = 0.002, indicating that sport type significantly moderated the difference in MOT performance between experts and novices. Further analysis showed that the effect size for basketball (g = 1.46, p < 0.001; large effect; Q W (22) = 107.23, p < 0.001) was significant, whereas those for the ice hockey (g = 0.41, p = 0.079; small effect; Q W (3) = 6.64, p = 0.084) and other ball sports (g = 0.92, p = 0.081; moderate effect; Q W (3) = 35.78, p < 0.001) subgroups were not. For display type, no effect size was reported for 3D displays; therefore, this variable was not included in the moderator analysis.

The MOT parameters were analyzed using meta-regression. The number of tracked targets ranged from 2 to 8, and had a positive and significant effect on the effect size (coefficient: 0.23, p = 0.004). As the number of targets increased, the effect size also increased. The duration of tracking ranged from 1 to 8s, and had a significant negative effect on effect size (coefficient: -0.12, p = 0.040). The longer the tracking duration, the smaller the effect size. The number of distractors ranged from 4 to 8, and the speed of targets ranged from 3°/s to 10°/s. However, there was no significant effect size for either the number of distractors (coefficient: 0.18, p = 0.350) or speed of targets (coefficient: −0.01, p = 0.980). Results of the moderating analysis are summarized, with the full values presented in Table 4 .

This meta-analysis compared the performance of different populations (athletes, nonathletes, experts, and novices) on MOT tasks. The results highlighted performance differences in MOT tasks among different populations and, importantly, that the magnitudes of these effects depended on population characteristics. The effect size between nonathletes and athletes was not affected by the MOT parameters, whereas the effect size between experts and novices was affected. Although there was great heterogeneity in the methods of exposure stimulation, athletes, particularly experts, displayed superior performances on MOT tasks.

Overall difference analysis

Consistent with Hypothesis 1a, our study demonstrated that athletes have an MOT performance advantage over non-athletes. Compared with nonathletes, athletes showed a significant advantage in MOT task performance (g = 0.56), indicating that sports experience may transfer from sport-specific tasks to MOT tasks. This finding is consistent with the majority of the included studies, which showed that athletes exhibit superior dynamic visual attention compared to non-athletic university students. Expertise in the sports domain, characterized by dynamically changing, high-paced, and unpredictable scenarios, may transfer to a more general perceptual-cognitive domain (i.e., MOT) 23 . A previous review suggested that the benefits of exercise for executive function stem from either the aerobic hypothesis or the cognitive demands of the exercise itself, both of which have been explored in different ways 88 . In team ball sports, players not only monitor the position and movement of the ball but also track the positions of opponents and teammates on the field, demands comparable to those of the MOT paradigm 89 . Therefore, it is reasonable to suggest that professional ball-sport players may exhibit superior MOT performance owing to their high demand for broad attention 90 . Harris et al. 11 found that individuals who regularly engaged in object-tracking sports displayed improved tracking performance relative to those engaging in non-tracking sports but reported no differences in gaze strategy; training on an adaptive MOT task improved working memory capacity but did not significantly change gaze strategy. Consequently, MOT expertise appears more closely linked to processing capacity limits than to perceptual-cognitive strategies 11 . Future studies should examine whether expertise depends on overt visual attention or on capacity limitations.

Some studies showed that basketball players do not exhibit superior MOT performance 10 , 28 . Nevertheless, Gong et al. 28 found that expert athletes had a wider field of vision, gazed longer at blank areas between the tracking targets, and predicted the changing locations of the targets. This indicates that players may focus not only on the targets but also on the areas relevant to the targets’ future direction, a tracking strategy that may be consistent with real-world sports scenarios. Therefore, researchers should pay attention not only to differences in task performance but also to the tracking strategies that underpin such behavioral differences 62 . However, the studies in this review mainly focused on strategic sports, and the moderating effect of sports type was significant. As only basketball played a significant role in this analysis, further research is required to determine whether the effect holds in non-team ball sports.

Some studies have shown that superior tracking performance in MOT tasks is significantly related to players’ training experience 10 , 23 . Consistent with H1b, the results supported that experts performed better than novices in MOT tasks: compared with novice athletes, there was a moderate effect for the performance advantage of expert athletes (g = 0.92). Experts tend to adopt a chunking strategy during cognitive processing, which, coupled with long-term training, enables them to better adapt to the fast-changing dynamics of sports 27 . The adjustable view of the focus of attention posits that the scope of attention can be controlled to either narrow or expand, allowing the focus of attention to range from 1 to 3–5 information blocks 91 . When the number of targets exceeds the capacity of the attentional focus, experts tend to use a chunking strategy, integrating several targets into a single information unit (block) and grouping the remaining targets into further units. Consequently, experts demonstrate better tracking performance under high-workload conditions. Another plausible explanation is that individuals skilled in tracking multiple objects are more easily drawn to ball sports, continue to play them, and sports training then strengthens their ability to track multiple objects 20 , 92 . For team sports, many of the required sport skills may translate into general cognitive domains 93 , which may also account for MOT performance differences between team sports experts and novices. Overall, our results extend previous studies and confirm that expert athletes have better visual tracking abilities than novices. Consequently, H1a and H1b were fully supported.

A comparison of effect sizes between athletes vs. nonathletes and experts vs. novices revealed no significant differences. Thus, H4 was not supported. However, this should not be interpreted as rendering the categorical comparisons meaningless. Imprecision in the criteria used to define athletes as experts or elite threatens the validity of sports expertise research. Recently, a study 94 reinforced the importance of a clear definition of athletes’ level, highlighting that athletic success may be explained by different attributes, with athletic level influencing the intervention results. Therefore, it is crucial to consider the effects of comparing different classes of athletes, such as experts, novices, and non-athletes, on outcomes. One possible explanation for the lack of significant differences between the two categories in this study is that experts (i.e., all national first-level athletes or individuals with 10+ years of exercise experience) have a higher athletic level than athletes overall (comprising expert and novice athletes), and novices exhibit a higher athletic level than non-athletes, thereby resulting in similar disparities in athletic levels between these two groups. Thus, experts vs. novices produced an effect size similar to that of athletes vs. non-athletes. This adds to the evidence that competitive levels play an important role in MOT tasks. Interestingly, our results showed that the two classifications are affected by different moderating variables, suggesting that moderating variables may also influence the differences between different classifications. Taken together, future studies should aim to establish more consistent conditions to investigate nonathletes, along with expert and novice players.

For athletes vs. non-athletes, competitive level was considered a potential moderator in MOT task performance in the subgroup analysis. Our findings indicated a notable difference in effect sizes between studies involving expert athletes and non-athletes (g = 0.84; p < 0.001; moderate effect) compared to those involving novice athletes and non-athletes (g = 0.07; p = 0.673; trivial effect). This result supported H2d. Specifically, the higher the competitive level of the athletes, the more obvious the MOT difference between athletes and non-athletes. These results support previous research on the MOT that has consistently shown that expert athletes tend to perform better than intermediate or novice athletes 10 . Overall, our study provides further evidence for the differences in expertise observed between experts and novices.

The present study found a significant moderating effect of sport type (basketball, ice hockey, and other ball sports) on the difference between expert and novice athletes. Specifically, the effect size of the difference in MOT performance between basketball experts and novices was significant (g = 1.46, p < 0.001; large effect), suggesting that this difference was even more pronounced in basketball than in other sports. For basketball players, MOT is thus an important indicator that separates novices from experts. Basketball is a fast-paced team sport that requires players to attend not only to the movements and positions of teammates and opponents but also to the spatial position of the ball and the court 25 . Basketball players with good tracking abilities can predict and evaluate their athletic performance 15 . In addition, the actual motion scenario of basketball may be better suited to the MOT process than other sports. However, in our investigation of differences in MOT between athletes and nonathletes, the results were not entirely consistent with our hypothesis: H3a was supported, while H2a was not. We did not observe that sport type moderated these differences, which may also be associated with variations in competitive level (due to a mix of experts and novices). Nevertheless, further studies covering various sports are necessary to confirm these findings.

Regarding the MOT task parameters, the difference in effect size between experts and novices was more strongly moderated by the parameter settings of the MOT task (e.g., number of targets and duration of tracking) than the difference between athletes and nonathletes. This does not fully match our assumption that the parameters would significantly moderate both comparisons, so H3b was only partially supported. Specifically, for the difference between experts and novices, we found that the larger the number of tracked targets, the greater the effect size, indicating that experts can distribute their attention across more targets than novices. Previous research has shown that the transfer of team ball sports expertise to visual attention tracking tasks occurs only in elite athletes with extensive training under higher attentional load 10 . According to the flexible resource theory 95 , tracking is mediated by a finite attentional resource distributed among the targets; the number of objects that can be tracked is inversely related to the resource demand of each individual object 37 . When tracking more targets, expert athletes allocated less attentional resource to each target than novice athletes, so experts could track more targets with a limited pool of resources. In contrast, when tracking fewer targets, both experts and novices had sufficient resources to process each target precisely. In addition, we found that as the duration of tracking increased, the difference in effect size between expert and novice athletes decreased significantly, perhaps because the situation on the sports field changes frequently and tracking time is usually short and phased.
Expert athletes need to keep tracking within a particular situation and switch tracking targets when the situation changes; therefore, their advantage is reflected more in relatively short tracking periods. However, the effects of the MOT parameters were significant only between experts and novices, not between athletes and non-athletes, so H2b was not supported. This could be because the difference in competitive level between athletes and nonathletes was not very large (as athletes included both experts and novices), while novices and nonathletes differed little in MOT performance (g = 0.07, p = 0.673; trivial effect), resulting in a small overall effect size in which a moderating effect was difficult to detect.

Unfortunately, due to the insufficient number of articles on 3D-MOT included in our study, a subgroup analysis of display type was not conducted; therefore, H2c and H3c were not tested. However, exploring this factor remains meaningful. A comparative study of 2D-MOT and 3D-MOT found that soccer players were more efficient at visual tracking in 3D virtual-reality dynamic tracking tasks and that the longer their professional sports experience, the better their tracking performance in 3D dynamic tracking tasks 86 . This suggests that such differences may also be influenced by sports level. The movement of objects in real three-dimensional space is continuous across different depth positions rather than confined to separate depth planes. Moreover, current 3D-MOT tests typically use a staircase procedure 7 , in which the speed increases if the participant successfully tracks all targets and decreases if at least one target is missed; speed thresholds are determined using a one-up one-down method 7 . In contrast, 2D-MOT usually employs a fixed speed and uses ACC as the outcome measure. Future research could examine additional moderating factors, not only display type but also staircase procedures and field of view in 3D-MOT. Future studies should also include 3D-MOT interventions in athletes to gain deeper insight. Romeas et al. 7 demonstrated that 3D-MOT training could improve the accuracy of soccer-passing decisions; however, one review questioned the effectiveness of 3D-MOT training 96 .
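The one-up one-down staircase described for 3D-MOT can be sketched as a simple simulation. This is a hypothetical illustration, not code from any cited study; `trial_correct` stands in for a participant's response at a given speed, and the threshold is estimated here as the mean of the reversal speeds:

```python
def staircase_threshold(trial_correct, start_speed=10.0, step=1.0, n_trials=20):
    """One-up one-down staircase: speed rises after a fully correct trial
    and falls after a miss; the threshold is the mean of reversal speeds.

    `trial_correct(speed)` is a hypothetical callback returning True when
    the participant tracked all targets at that speed.
    """
    speed = start_speed
    reversals = []
    last_dir = None
    for _ in range(n_trials):
        direction = 1 if trial_correct(speed) else -1
        if last_dir is not None and direction != last_dir:
            reversals.append(speed)  # record speed at each direction change
        last_dir = direction
        speed = max(step, speed + direction * step)  # keep speed positive
    return sum(reversals) / len(reversals) if reversals else speed
```

With a deterministic observer who succeeds below some true limit, the staircase oscillates around that limit and the reversal mean converges toward it.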

Recently, to make 3D-MOT evaluation and training more effective, 3D-MOT has become more ecological through virtual reality. Ehmann et al. 85 , 97 proposed a 360-degree alternative to the classical tracking task, with humanoid avatars running on a curved screen surrounding the observer; they validated the tool for assessing visuospatial performance in a visual tracking task 97 and used it to discriminate the effect of expertise in young soccer players 85 . Vu et al. 63 used a multiple-soccer-player tracking task in virtual reality to compare the visual tracking performance of soccer players and non-soccer players viewing virtual players moving along trajectories taken from real games or generated pseudo-randomly. The results indicate, however, that the use of soccer-specific trajectories may not be sufficient to replicate the representativeness of field conditions in studies of visual tracking performance. Future research should focus on the ecological application of 3D-MOT in various sports.

Limitations

Our results showed no evidence of publication bias when comparing the overall effects of athletes vs. nonathletes or experts vs. novices. Therefore, within the limits of existing statistical and methodological tools, it seems reasonable to assume that publication bias is not an issue; a more definite answer could be provided by recent initiatives for better scientific practices, including mandatory use of open data repositories for all published studies 98 . Although the narrow inclusion criteria (e.g., the inclusion of studies comparing athletes with nonathletes and experts with novices) helped reduce the risk of bias, they also produced certain limitations. Numerous studies on various team sports with large sample sizes and comprehensive data were excluded because they examined athletes exclusively or did not report sufficient data. This reduced the data in our meta-analysis, thereby weakening the evidence base, especially for team sports other than basketball. Among the 23 studies selected, 10 included basketball players as participants; basketball therefore dominated the sample and results of our study. This may have affected the moderating variable of sport type and could potentially bias conclusions for team sports in general, highlighting the pressing need for further research in other sports.

Another limitation was the heterogeneity of the estimates. The heterogeneity of the main effects in athletes vs. non-athletes ( I 2 = 80.37%) and experts vs. novices ( I 2 = 83.40%) both exceeded 75%, indicating substantial dispersion in the results. Furthermore, although several moderators were tested for their potential influence, other influencing factors may not have been captured in our meta-regression because of a lack of information. For example, Ehmann et al. 85 demonstrated different MOT task performance levels among different age groups, and Legault et al. 6 reported differences in MOT performance patterns between male and female athletes. Previous studies have found that visuospatial working memory significantly predicts MOT performance 99 and that 3D-MOT is also significantly correlated with visuospatial working memory 97 . Future studies should consider the moderating effects of age, gender, and visuospatial working memory. In addition, our study included few individual sports, and future studies should consider both individual and team sports.

This meta-analysis provides evidence of superior MOT-task performance in athletes, especially in higher-level athletes. Our results support the superior perceptual-cognitive ability of athletes vs. non-athletes and experts vs. novices. Moreover, the difference in effect size between athletes and non-athletes was moderated by the athletes’ competitive level, while the difference in MOT performance between experts and novices was influenced by the type of sport. Adjusting the parameters of the MOT task, such as the number of targets and the tracking duration, affected the difference in effect size between experts and novices but did not influence the difference between athletes and non-athletes. Future studies should include a wider variety of sports; classify participants’ sport levels more clearly (e.g., experts, novices, and non-athletes); and compare MOT performance between different age groups (such as high school athletes and their peers) and genders.

Data availability

All relevant data are within the manuscript and its supplemental files.

Memmert, D., Klemp, M., Schwab, S. & Low, B. Individual attention capacity enhances in-field group performances in soccer. Int. J. Sport Exerc. Psychol. https://doi.org/10.1080/1612197X.2023.2204364 (2023).

Memmert, D. The Handbook of Attention 643–662 (Cambridge MIT Press, 2015).

Nakayama, K. & Mackeben, M. Sustained and transient components of focal visual attention. Vis. Res. 29 , 1631–1647 (1989).

Pylyshyn, Z. W. & Storm, R. W. Tracking multiple independent targets: Evidence for a parallel tracking mechanism. Spat. Vis. 3 , 179–197 (1988).

Scholl, B. J. What have we learned about attention from multiple-object tracking (and vice versa)? In Computation, Cognition, and Pylyshyn (eds Dedrick, Don & Trick, Lana) (The MIT Press, 2009).

Legault, I., Sutterlin-Guindon, D. & Faubert, J. Perceptual cognitive abilities in young athletes: A gender comparison. PLoS ONE 17 , e0273607. https://doi.org/10.1371/journal.pone.0273607 (2022).

Romeas, T., Guldner, A. & Faubert, J. 3D-multiple object tracking training task improves passing decision-making accuracy in soccer players. Psychol. Sport Exerc. 22 , 1–9. https://doi.org/10.1016/j.psychsport.2015.06.002 (2016).

Děchtěrenko, F., Jakubková, D., Lukavský, J. & Howard, C. J. Tracking multiple fish. PeerJ 10 , e13031 (2022).

Mäki-Marttunen, V., Hagen, T., Laeng, B. & Espeseth, T. Distinct neural mechanisms meet challenges in dynamic visual attention due to either load or object spacing. J. Cogn. Neurosci. 32 , 65–84 (2020).

Qiu, F. et al. Influence of sports expertise level on attention in multiple object tracking. PeerJ 6 , e5732. https://doi.org/10.7717/peerj.5732 (2018).

Harris, D. J., Wilson, M. R., Crowe, E. M. & Vine, S. J. Examining the roles of working memory and visual attention in multiple object tracking expertise. Cogn. Process. 21 , 209–222. https://doi.org/10.1007/s10339-020-00954-y (2020).

Martin, A., Sfer, A. M., D’Urso Villar, M. A. & Barraza, J. F. Position affects performance in multiple-object tracking in rugby union players. Front. Psychol. 8 , 1494. https://doi.org/10.3389/fpsyg.2017.01494 (2017).

Woods, C. T., Raynor, A. J., Bruce, L. & McDonald, Z. Discriminating talent-identified junior Australian football players using a video decision-making task. J. Sports Sci. 34 , 342–347 (2016).

Dobrowolski, P., Hanusz, K., Sobczyk, B., Skorko, M. & Wiatrow, A. Cognitive enhancement in video game players: The role of video game genre. Comput. Hum. Behavior 44 , 59–63 (2015).

Mangine, G. T. et al. Visual tracking speed is related to basketball-specific measures of performance in NBA players. J. Strength Cond. Res. 28 , 2406–2414 (2014).

Zwierko, T., Lesiakowski, P., Redondo, B. & Vera, J. Examining the ability to track multiple moving targets as a function of postural stability: A comparison between team sports players and sedentary individuals. PeerJ 10 , e13964. https://doi.org/10.7717/peerj.13964 (2022).

Wierzbicki, M., Rupaszewski, K. & Styrkowiec, P. Comparing highly trained handball players’ and non-athletes’ performance in a multi-object tracking task. J. General Psychol. 151 , 173–185 (2024).

Zhang, X. M., Yan, M. & Liao, Y. G. Differential performance of Chinese volleyball athletes and nonathletes on a multiple-object tracking task. Percept. Motor Skills 109 , 747–756. https://doi.org/10.2466/pms.109.3.747-756 (2009).

Jin, P., Ge, Z. & Fan, T. Team ball sport experience minimizes sex difference in visual attention. Front. Psychol. 13 , 987672. https://doi.org/10.3389/fpsyg.2022.987672 (2022).

Jin, P. et al. Dynamic visual attention characteristics and their relationship to match performance in skilled basketball players. PeerJ 8 , e9803. https://doi.org/10.7717/peerj.9803 (2020).

Zhang, Y., Lu, Y., Wang, D., Zhou, C. & Xu, C. Relationship between individual alpha peak frequency and attentional performance in a multiple object tracking task among ice-hockey players. PLoS ONE 16 , e0251443. https://doi.org/10.1371/journal.pone.0251443 (2021).

Zhu, H. et al. Visual and action-control expressway associated with efficient information transmission in elite athletes. Neuroscience 404 , 353–370. https://doi.org/10.1016/j.neuroscience.2019.02.006 (2019).

Faubert, J. Professional athletes have extraordinary skills for rapidly learning complex and neutral dynamic visual scenes. Sci. Rep. 3 , 1154. https://doi.org/10.1038/srep01154 (2013).

Gou, Q. & Li, S. Study on the correlation between basketball players’ multiple-object tracking ability and sports decision-making. PLoS ONE 18 , e0283965. https://doi.org/10.1371/journal.pone.0283965 (2023).

Zarić, I., Dopsaj, M. & Marković, M. Match performance in young female basketball players: Relationship with laboratory and field tests. Int. J. Perform. Anal. Sport 18 , 90–103. https://doi.org/10.1080/24748668.2018.1452109 (2018).

Memmert, D., Simons, D. J. & Grimme, T. The relationship between visual attention and expertise in sports. Psychol. Sport Exerc. 10 , 146–151 (2009).

Li, J. Influence of the number of objects on basketball players’ performance in visual tracking task. J. TUS 27 , 133–137. https://doi.org/10.13297/j.cnki.issn1005-0000.2012.02.011 (2012).

Gong, R., Chen, T., Yue, X. Q., Xiao, Y. R. & Zhang, Y. Effects of different exercise intensity on multiple object tracking of basketball players and analysis of eye movements. J. TUS 31 , 358–363. https://doi.org/10.13297/j.cnki.issn1005-0000.2016.04.015 (2016).

Jacobson, J. & Matthaeus, L. Athletics and executive functioning: How athletic participation and sport type correlate with cognitive performance. Psychol. Sport Exerc. 15 , 521–527 (2014).

Taatgen, N. A. The nature and transfer of cognitive skills. Psychol. Rev. 120 , 439 (2013).

Voss, M. W., Kramer, A. F., Basak, C., Prakash, R. S. & Roberts, B. Are expert athletes ‘expert’ in the cognitive laboratory? A meta-analytic review of cognition and sport expertise. Appl. Cogn. Psychol. 24 , 812–826 (2010).

Krenn, B., Finkenzeller, T., Würth, S. & Amesberger, G. Sport type determines differences in executive functions in elite athletes. Psychol. Sport Exerc. 38 , 72–79 (2018).

Wang, C.-H. et al. Open vs. closed skill sports and the modulation of inhibitory control. PloS ONE 8 , e55773 (2013).

Mann, D. T., Williams, A. M., Ward, P. & Janelle, C. M. Perceptual-cognitive expertise in sport: A meta-analysis. J. Sport Exerc. Psychol. 29 , 457–478 (2007).

Li, L. & Smith, D. M. Neural efficiency in athletes: A systematic review. Front. Behav. Neurosci. 15 , 698555. https://doi.org/10.3389/fnbeh.2021.698555 (2021).

Allen, R., Mcgeorge, P., Pearson, D. & Milne, A. B. Attention and expertise in multiple target tracking. Appl. Cogn. Psychol. Off. J. Soc. Appl. Res. Memory Cogn. 18 , 337–347 (2004).

Alvarez, G. A. & Franconeri, S. L. How many objects can you track?: Evidence for a resource-limited attentive tracking mechanism. J. Vis. 7 , 14–14 (2007).

Drew, T., Horowitz, T. S. & Vogel, E. K. Swapping or dropping? Electrophysiological measures of difficulty during multiple object tracking. Cognition 126 , 213–223 (2013).

Bettencourt, K. C. & Somers, D. C. Effects of target enhancement and distractor suppression on multiple object tracking capacity. J. Vis. 9 , 9–9 (2009).

Meyerhoff, H., Papenmeier, F., Jahn, G. & Huff, M. Exploring the temporal dynamics of attentional reallocations with the multiple object tracking paradigm. J. Vis. 16 , 1262–1262 (2016).

Oksama, L. & Hyönä, J. Is multiple object tracking carried out automatically by an early vision mechanism independent of higher-order cognition? An individual difference approach. Visual Cogn. 11 , 631–671 (2004).

Meyerhoff, H. S., Papenmeier, F. & Huff, M. Studying visual attention using the multiple object tracking paradigm: A tutorial review. Atten. Percept. Psychophys. 79 , 1255–1274. https://doi.org/10.3758/s13414-017-1338-1 (2017).

Zhang, Y. H. et al. The effect of sports speed on multiple object tracking performance of ice hockey players: An EEG study. J. Shanghai Univ. Sport 45 , 71–80. https://doi.org/10.16099/j.sus.2021.05.009 (2021).

Bideau, B. et al. Using virtual reality to analyze links between handball thrower kinematics and goalkeeper’s reactions. Neurosci. Lett. 372 , 119–122 (2004).

Petit, J.-P. & Ripoll, H. Scene perception and decision making in sport simulation: A masked priming investigation. Int. J. Sport Psychol. 39 (1), 1–19 (2008).

Vignais, N., Kulpa, R., Brault, S., Presse, D. & Bideau, B. Which technology to investigate visual perception in sport: Video vs. virtual reality. Hum. Mov. Sci. 39 , 12–26 (2015).

Cooke, J. R., Ter Horst, A. C., van Beers, R. J. & Medendorp, W. P. Effect of depth information on multiple-object tracking in three dimensions: A probabilistic perspective. PLoS Comput. Biol. 13 , e1005554 (2017).

Scharfen, H. E. & Memmert, D. Measurement of cognitive functions in experts and elite athletes: A meta-analytic review. Appl. Cogn. Psychol. 33 , 843–860. https://doi.org/10.1002/acp.3526 (2019).

Vu, A., Sorel, A., Limballe, A., Bideau, B. & Kulpa, R. Multiple players tracking in virtual reality: Influence of soccer specific trajectories and relationship with gaze activity. Front. Psychol. 13 , 901438 (2022).

Sanchez-Lopez, J., Fernandez, T., Silva-Pereyra, J., Martinez Mesa, J. A. & Di Russo, F. Differences in visuo-motor control in skilled vs. novice martial arts athletes during sustained and transient attention tasks: A motor-related cortical potential study. PloS ONE 9 , e91112 (2014).

Ji, C. X. & Liu, J. Y. Non target inhibition mechanism in multiple object tracking task of badminton player. J. TUS 30 , 152–156. https://doi.org/10.13297/j.cnki.issn1005-0000.2015.02.012 (2015).

Gou, Q. & Li, S. Study on the correlation between basketball players’ multiple-object tracking ability and sports decision-making. PloS ONE 18 , e0283965 (2023).

Silva, A. F. et al. Differences in visual search behavior between expert and novice team sports athletes: A systematic review with meta-analysis. Front. Psychol. 13 , 1001066. https://doi.org/10.3389/fpsyg.2022.1001066 (2022).

Lebeau, J. C. et al. Quiet eye and performance in sport: A meta-analysis. J. Sport Exerc. Psychol. 38 , 441–457. https://doi.org/10.1123/jsep.2015-0123 (2016).

Page, M. J. et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Int. J. Surg. 88 , 105906. https://doi.org/10.1016/j.ijsu.2021.105906 (2021).

Lipsey, M. W. & Wilson, D. B. Practical Meta-Analysis (SAGE Publications Inc, 2001).

Swann, C., Moran, A. & Piggott, D. Defining elite athletes: Issues in the study of expert performance in sport psychology. Psychol. Sport Exerc. 16 , 3–14. https://doi.org/10.1016/j.psychsport.2014.07.004 (2015).

Zhang, X. M., Liao, Y. G. & Ge, Cl. Performances of general college students and athletes in multiple-object tracking task. J. Beijing Sport Univ. 31 , 504–506 (2008).

Qiu, F. et al. Neural efficiency in basketball players is associated with bidirectional reductions in cortical activation and deactivation during multiple-object tracking task performance. Biol. Psychol. 144 , 28–36. https://doi.org/10.1016/j.biopsycho.2019.03.008 (2019).

Jin, P., Zhao, Z. Q. & Zhu, X. F. The relationship between sport types, sex and visual attention as assessed in a multiple object tracking task. Front. Psychol. 14 , 1099254. https://doi.org/10.3389/fpsyg.2023.1099254 (2023).

Mackenzie, A. K., Baker, J., Daly, R. C. & Howard, C. J. Peak occipital alpha frequency mediates the relationship between sporting expertise and multiple object tracking performance. Brain Behavior 14 , e3434 (2024).

Styrkowiec, P. et al. Gaze behavior and cognitive performance on tasks of multiple object tracking and multiple identity tracking by handball players and non-athletes. Percept. Motor Skills 131 , 818–842 (2024).

Vu, A., Sorel, A., Limballe, A., Bideau, B. & Kulpa, R. Multiple players tracking in virtual reality: Influence of soccer specific trajectories and relationship with gaze activity. Front. Psychol. 13 , 901438. https://doi.org/10.3389/fpsyg.2022.901438 (2022).

Zhang, Y., Wei, Y. F., Tao, J. T. & Li, J. Multiple object tracking and motion trajectory prediction of basketball players. Chin. J. Sports Med. 40 , 800–809. https://doi.org/10.16038/j.1000-6710.2021.10.007 (2021).

Su, Q., Wang, F., Li, J., Dai, Q. & Li, B. Applying the multiple object juggling task to measure the attention of athletes: Evidence from female soccer. Medicine 103 , e37113 (2024).

Jin, P., Ji, Z., Wang, T. & Zhu, X. Association between sports expertise and visual attention in male and female soccer players. PeerJ 11 , e16286 (2023).

Kim, S. Y. et al. Testing a tool for assessing the risk of bias for nonrandomized studies showed moderate reliability and promising validity. J. Clin. Epidemiol. 66 , 408–414. https://doi.org/10.1016/j.jclinepi.2012.09.016 (2013).

Moeyaert, M. et al. Methods for dealing with multiple outcomes in meta-analysis: A comparison between averaging effect sizes, robust variance estimation and multilevel meta-analysis. Int. J. Soc. Res. Methodol. 20 , 559–572 (2017).

Borenstein, M., Hedges, L. V., Higgins, J. P. & Rothstein, H. R. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res. Synth. Methods 1 , 97–111 (2010).

Hopkins, W., Marshall, S., Batterham, A. & Hanin, J. Progressive statistics for studies in sports medicine and exercise science. Med. Sci. Sports Exerc. 41 , 3 (2009).

Smith, S. M. & Vela, E. Environmental context-dependent memory: A review and meta-analysis. Psychon. Bulletin Rev. 8 , 203–220 (2001).

Borenstein, M., Cooper, H., Hedges, L. & Valentine, J. Effect sizes for continuous data. Handb. Res. Synth. Meta Anal. 2 , 221–235 (2009).

Higgins, J. P., Thompson, S. G., Deeks, J. J. & Altman, D. G. Measuring inconsistency in meta-analyses. BMJ 327 , 557–560 (2003).

Egger, M., Smith, D. G., Schneider, M. & Minder, C. Bias in meta-analysis detected by a simple, graphical test. BMJ 315 , 629–634 (1997).

Thompson, S. G. & Higgins, J. P. How should meta-regression analyses be undertaken and interpreted?. Stat. Med. 21 , 1559–1573. https://doi.org/10.1002/sim.1187 (2002).

Fu, R. et al. Conducting quantitative synthesis when comparing medical interventions: AHRQ and the Effective Health Care Program. J. Clin. Epidemiol. 64 , 1187–1197 (2011).

Parsons, B. et al. Enhancing cognitive function using perceptual-cognitive training. Clin. EEG Neurosci. 47 , 37–47 (2016).

Chermann, J. F., Romeas, T., Marty, F. & Faubert, J. Perceptual-cognitive three-dimensional multiple-object tracking task can help the monitoring of sport-related concussion. BMJ Open Sport Exerc. Med. 4 , e000384. https://doi.org/10.1136/bmjsem-2018-000384 (2018).

Ren, Y., Wang, G., Zhang, L., Lu, A. & Wang, C. Perceptual-cognitive tasks affect landing performance of soccer players at different levels of fatigue. Appl. Bionics Biomech. 2022 , 4282648. https://doi.org/10.1155/2022/4282648 (2022).

Romeas, T., Chaumillon, R., Labbe, D. & Faubert, J. Combining 3D-MOT with sport decision-making for perceptual-cognitive training in virtual reality. Percept. Mot. Skills 126 , 922–948. https://doi.org/10.1177/0031512519860286 (2019).

Furukado, R. & Isogai, H. Differences in visual search strategies based on multiple object tracking skill level of soccer athletes. J. Japan Soc. Sports Ind. 29 , 2_91-2_107 (2019).

Liao, Y. G., Zhang, X. M. & Ge, C. L. Performance of athletes in visual multiple object tracking task. J. Xi'an Inst. Phys. Educ. 23 , 124–127 (2006).

Van Biesen, D., Jacobs, L., McCulloch, K., Janssens, L. & Vanlandewijck, Y. C. Cognitive-motor dual-task ability of athletes with and without intellectual impairment. J. Sports Sci. 36 , 513–521. https://doi.org/10.1080/02640414.2017.1322215 (2018).

Zhang, X. M., Liao, Y. G. & Ge, C. L. Comparative research on visual selection attention between college students and athletes. China Sport Sci. 25 , 22–24 (2005).

Ehmann, P. et al. Perceptual-cognitive performance of youth soccer players in a 360°-environment—differences between age groups and performance levels. Psychol. Sport Exerc. https://doi.org/10.1016/j.psychsport.2021.102120 (2022).

Wang, J., Zhang, Y. & Li, J. Advantages of football athletes in multiple object tracking task based on virtual reality. J. Beijing Sport Univ. 46 , 4. https://doi.org/10.19582/j.cnki.11-3785/g8.2023.04.012 (2023).

Tan, Q. & Baek, S.-S. Analysis and research on the timeliness of virtual reality sports actions in football scenes. Wirel. Commun. Mob. Comput. 2021 , 8687378 (2021).

Furley, P., Schütz, L.-M. & Wood, G. A critical review of research on executive functions in sport and exercise. Int. Rev. Sport Exerc. Psychol. https://doi.org/10.1080/1750984X.2023.2217437 (2023).

Howard, C. J., Uttley, J. & Andrews, S. Team ball sport participation is associated with performance in two sustained visual attention tasks: Position monitoring and target identification in rapid serial visual presentation streams. Prog. Brain Res. 240 , 53–69 (2018).

Alves, H. et al. Perceptual-cognitive expertise in elite volleyball players. Front. Psychol. 4 , 36. https://doi.org/10.3389/fpsyg.2013.00036 (2013).

Cowan, N. Working Memory Capacity (Psychology Press, 2012).

Howard, C. J., Uttley, J. & Andrews, S. Team ball sport participation is associated with performance in two sustained visual attention tasks: Position monitoring and target identification in rapid serial visual presentation streams. Prog. Brain Res. 240 , 53–69. https://doi.org/10.1016/bs.pbr.2018.09.001 (2018).

Vestberg, T., Gustafson, R., Maurex, L., Ingvar, M. & Petrovic, P. Executive functions predict the success of top-soccer players. PloS ONE 7 , e34731 (2012).

McKay, A. K. A. et al. Defining training and performance caliber: A participant classification framework. Int. J. Sports Physiol. Perform. 17 , 317–331. https://doi.org/10.1123/ijspp.2021-0451 (2022).

Tullo, D., Faubert, J. & Bertone, A. The characterization of attention resource capacity and its relationship with fluid reasoning intelligence: A multiple object tracking study. Intelligence 69 , 158–168 (2018).

Vater, C., Gray, R. & Holcombe, A. O. A critical systematic review of the Neurotracker perceptual-cognitive training tool. Psychol. Sport Exerc. https://doi.org/10.3758/s13423-021-01892-2 (2021).

Ehmann, P. et al. 360°-multiple object tracking in team sport athletes: Reliability and relationship to visuospatial cognitive functions. Psychol. Sport Exerc. https://doi.org/10.1016/j.psychsport.2021.101952 (2021).

Voyer, D., Voyer, S. D. & Saint-Aubin, J. Sex differences in visual-spatial working memory: A meta-analysis. Psychon. Bulletin Rev. 24 , 307–334 (2017).

Trick, L. M., Mutreja, R. & Hunt, K. Spatial and visuospatial working memory tests predict performance in classic multiple-object tracking in young adults, but nonspatial measures of the executive do not. Atten. Percept. Psychophys. 74 , 300–311 (2012).

Funding

National Natural Science Foundation of China (No. 32071087) and Fundamental Research Funds for the Central Universities (Beijing Sport University) (No. 2018PT016, 2017YB029, 2016RB025) supported this work through the corresponding authors (JL and YZ). The funders were not directly linked to this project and in no way influenced study design, collection, analysis, interpretation of data, writing of the report, or the decision to submit the paper for publication.

Author information

Authors and Affiliations

School of Psychology, Beijing Sport University, Beijing, 100084, China

Hui Juan Liu, Qi Zhang, Sen Chen & Yu Zhang

Center for Cognition and Brain Disorders, School of Clinical Medicine, Hangzhou Normal University, Hangzhou, 311121, China

Contributions

H.J.L. did the study conception and design, data collection, statistics, and writing; Q.Z. helped with data collection, statistics, and writing; J.L. and Y.Z. participated in study conception and design and writing; S.C. helped with data collection and re-checked the data. J.L. and Y.Z. contributed equally to this work and should be considered co-corresponding authors. All authors have read and approved the final version of the manuscript and agree with the order of presentation of the authors.

Corresponding authors

Correspondence to Yu Zhang or Jie Li .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Table 1.
Supplementary Table 2.
Supplementary Information.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article

Liu, H.J., Zhang, Q., Chen, S. et al. A meta-analysis of performance advantages on athletes in multiple object tracking tasks. Sci Rep 14 , 20086 (2024). https://doi.org/10.1038/s41598-024-70793-w

Received: 10 June 2024

Accepted: 21 August 2024

Published: 29 August 2024

DOI: https://doi.org/10.1038/s41598-024-70793-w

Keywords

  • Multiple object tracking
  • Performance

Research on K-12 maker education in the early 2020s – a systematic literature review

  • Open access
  • Published: 27 August 2024

  • Sini Davies   ORCID: orcid.org/0000-0003-3689-7967 1 &
  • Pirita Seitamaa-Hakkarainen   ORCID: orcid.org/0000-0001-7493-7435 1  

This systematic literature review focuses on research published on K-12 maker education in the early 2020s, providing a current picture of the field. Maker education is a hands-on approach to learning that encourages students to engage in collaborative and innovative activities, using a combination of traditional design and fabrication tools and digital technologies to explore real-life phenomena and create tangible artifacts. The review examines the included studies from three perspectives: their characteristics; their research interests and findings; and the previously identified research gaps they filled, together with the further gaps they identified. The review concludes by discussing the overall picture of the research on maker education in the early 2020s and suggesting directions for further studies. Overall, this review provides a valuable resource for researchers, educators, and policymakers seeking to understand the current state of K-12 maker education research.

Introduction

Maker culture developed through the pioneering efforts of Papert (1980) and his followers, such as Blikstein (2013), Kafai and Peppler (2011), and Resnick (2017). It has gained popularity worldwide as an educational approach to encourage student engagement in learning science, technology, engineering, arts, and mathematics (STEAM) (Martin, 2015; Papavlasopoulou et al., 2017; Vossoughi & Bevan, 2014). Maker education involves engaging students to collaborate and innovate together by turning their ideas into tangible creations through the use of conceptual ideas (whether spoken or written), visual representations such as drawings and sketches, and material objects like prototypes and models (Kangas et al., 2013; Koh et al., 2015). Another core aspect of maker education is combining traditional design and fabrication tools and methods with digital technologies, such as 3D CAD and 3D printing, electronics, robotics, and programming, which enables students to create multifaceted artifacts and hybrid solutions to their design problems that include both digital and virtual features (e.g., Blikstein, 2013; Davies et al., 2023; Riikonen, Seitamaa-Hakkarainen, et al., 2020). The educational value of such multi-dimensional, concrete making has become widely recognized (e.g., Blikstein, 2013; Kafai, 1996; Kafai et al., 2014; Martin, 2015).

Maker education has been studied intensively, as indicated by several previous literature reviews (Iivari et al., 2016; Lin et al., 2020; Papavlasopoulou et al., 2017; Rouse & Rouse, 2022; Schad & Jones, 2020; Vossoughi & Bevan, 2014; Yulis San Juan & Murai, 2022). These reviews have revealed how the field has been evolving and provided a valuable overall picture of the research on maker education before the 2020s, including only a few studies published in 2020 or 2021. However, the early years of the 2020s have been an extraordinary period in many ways. The world was hit by the COVID-19 pandemic, followed by global economic crises, increasing geopolitical tensions, and wars that have had a major impact on societies, education, our everyday lives, and, inevitably, on academic research as well. Furthermore, 2023 was a landmark year in the development of artificial intelligence (AI). In late 2022, OpenAI announced the release of ChatGPT 3.5, a major update to their large language model that is able to generate human-like text. Since then, sophisticated AI systems have rushed into our lives at an accelerating speed and are now becoming integrated with other technologies and applications, irreversibly shaping how we live and work, our cultures, and our environments (see, e.g., World Economic Forum, 2023). Thus, it can be argued that towards the end of 2023, the world had transitioned into the era of AI. It is essential that researchers, educators, and policymakers have a fresh overall understanding and a current picture of research on K-12 maker education in order to develop new, research-based approaches to technology and design education in the present rapidly evolving technological landscape of AI. This is especially important in order to avoid falling back towards shallow epistemic and educational practices of repetition and reproduction.
The present systematic review was conducted to provide a ‘big picture’ of the research on K-12 maker education published in the extraordinary times of the early 2020s and to act as a landmark between the research on the field before and after the transition to the AI era. The review was driven by one main research question: How has the research on maker education developed in the early 2020s? To answer this question, three specific research questions were set:

What were the characteristics of the studies in terms of geographical regions, quantity of publications, research settings, and research methods?

What were the research interests and findings of the reviewed studies?

How did the reviewed studies fill the research gaps identified in previous literature reviews, and what further research gaps did they identify?

The following will outline the theoretical background of the systematic literature review by examining previous literature reviews on maker culture and maker education. This will be followed by an explanation of the methodologies used and findings. Finally, the review will conclude by discussing the overall picture of the research on maker education in the early 2020s and suggesting directions for further studies.

Previous literature reviews on maker culture and maker education

Several literature reviews have been conducted on maker education over the past ten years. The first one by Vossoughi and Bevan (2014) concentrated on the impact of tinkering and making on children’s learning, design principles and pedagogical approaches in maker programs, and specific tensions and possibilities within the maker movement for equity-oriented teaching and learning. They approached the maker movement in the context of out-of-school time STEM from three perspectives: (1) entrepreneurship and community creativity, (2) STEM pipeline and workforce development, and (3) inquiry-based education. At the time of their review, the research on maker education was just emerging, and therefore, their review included only a few studies. The review findings highlighted how STEM practices were developed through tinkering and striving for equity and intellectual safety (Vossoughi & Bevan, 2014). Furthermore, they also revealed how making activities support new ways of learning and collaboration in STEM. Their findings also pointed out some tensions and gaps in the literature, especially regarding a focus that is too narrow on STEM, tools, and techniques, as well as a lack of maker projects conducted within early childhood education or families.

In subsequent literature reviews (Iivari et al., 2016 ; Lin et al., 2020 ; Papavlasopoulou et al., 2017 ; Rouse & Rouse, 2022 ; Schad & Jones, 2020 ; Yulis San Juan & Murai, 2022 ), the scope of research interests expanded. Iivari and colleagues ( 2016 ) reviewed the potential of digital fabrication and making for empowering children and helping them see themselves as future digital innovators. They analyzed the studies based on five conditions: conditions for convergence, entry, social support, competence, and reflection, which were initially developed to help with project planning (Chawla & Heft, 2002 ). Their findings revealed that most of the studies included in their review emphasized the conditions for convergence, entry, and competence. However, only a few studies addressed the conditions for social support and reflection (Iivari et al., 2016 ). The reviewed studies emphasized children’s own interests and their voluntary participation in the projects. Furthermore, the studies highlighted projects leading to both material and learning-related outcomes and the development of children’s competencies in decision-making, design, engineering, technology, and innovation.

Papavlasopoulou and colleagues ( 2017 ) took a broader scope in their systematic literature review, characterizing the overall development and stage of research on maker education by analyzing research settings, interests, and methods, synthesizing findings, and identifying research gaps. They were specifically interested in the technology used, subject areas that implement making activities, and evaluation methods of making instruction across all levels of education and in both formal and informal settings. Their data comprised 43 peer-reviewed empirical studies on maker-centered teaching and learning that included children in their samples and provided the participants with some form of making experience. In Papavlasopoulou and colleagues’ ( 2017 ) review, the included studies were published between 2011 and November 2015 as journal articles, conference papers, or book chapters. Most of the studies were conducted with fewer than 50 participants ( n  = 34), the most prominent age group being children from the beginning of primary school up to 14 years old ( n  = 22). The analyzed studies usually utilized more than one data collection method, mainly focusing on qualitative ( n  = 22) or mixed method ( n  = 11) approaches. Most included studies focused on programming skills and computational thinking ( n  = 32) or STEM subjects ( n  = 6). The studies reported a wide range of positive effects of maker education on learning, the development of participants’ self-efficacy, perceptions, and engagement (Papavlasopoulou et al., 2017 ). There were hardly any studies reporting adverse effects.

Schad and Jones ( 2020 ) focused their literature review on empirical studies of the maker movement’s impacts on formal K12 educational environments, published between 2000 and 2018. Their Boolean search (maker movement AND education) in three major academic research databases resulted in 599 studies, of which 20 were included in the review. Fourteen of these studies focused on K12 students, and six on K12 teachers. All but three of the studies were published between 2014 and 2018. Similarly to the studies reported in the previous literature reviews (Iivari et al., 2016 ; Papavlasopoulou et al., 2017 ; Vossoughi & Bevan, 2014 ), the vast majority were qualitative studies that reported positive opportunities for maker-centered approaches in STEM learning and the promotion of excitement and motivation. On the other hand, the studies on K12 in- and preservice teacher education mainly focused on the importance of offering opportunities for teachers to engage in making activities. In both the studies focusing on students and those focusing on teachers, promoting equity and offering equally motivating learning experiences regardless of participants’ gender or background was emphasized.

Lin and colleagues’ ( 2020 ) review focused on the assessment of maker-centered learning activities. After applying inclusion and exclusion criteria, their review consisted of 60 peer-reviewed empirical studies on making activities that included making tangible artifacts and assessments to measure learning outcomes. The studies were published between 2006 and 2019. Lin and colleagues ( 2020 ) also covered all age groups and activities in both formal and informal settings. Most of the included studies applied STEM as their main subject domain and utilized a technology-based platform, such as the LilyPad Arduino microcontroller, Scratch, or laser cutting. The results of the review revealed that learning outcomes were usually measured through the assessment of artifacts, tests, surveys, interviews, and observations. The learning outcomes measured were most often cognitive skills on STEM-related content knowledge or students’ feelings and attitudes towards STEM or computing.

The two latest systematic reviews, published in 2022, also focused on specific research interests in maker education (Rouse & Rouse, 2022 ; Yulis San Juan & Murai, 2022 ). Rouse and Rouse ( 2022 ) reviewed studies that specifically investigated learning in preK-12 maker education in formal school-based settings. Their analysis included 22 papers from seven countries, all but two published between 2017 and 2019. Only two of the studies focused on early childhood education, and three involved participants from the elementary level. Like previous reviews, most studies were conducted with qualitative methods ( n  = 17). In contrast to the earlier reviews (Lin et al., 2020 ; Papavlasopoulou et al., 2017 ; Schad & Jones, 2020 ), however, the included studies did not concentrate on content-related outcomes in STEM or computing. Instead, a wide range of learning outcomes was investigated, such as 21st-century skills, agency, and materialized knowledge. On the other hand, Rouse and Rouse ( 2022 ) found that equity and inclusivity were not ubiquitously considered when researchers designed makerspace interventions. Yulis San Juan and Murai’s ( 2022 ) literature review focused on frustration in maker-centered learning activities. Their analysis consisted of 28 studies published between 2013 and 2021. It identified six factors that are most often recognized as the causes of frustration in makerspace activities: ‘unfamiliar pedagogical approach, time constraints, collaboration, outcome expectations, lack of skills and knowledge, and tool affordances and availability’ (Yulis San Juan & Murai, 2022 , p. 4).

From these previous literature reviews, five significant research gaps emerged that required further investigation and attention:

Teacher training, pedagogies, and orchestration of learning activities in maker education (Papavlasopoulou et al., 2017 ; Rouse & Rouse, 2022 ; Schad & Jones, 2020 ; Vossoughi & Bevan, 2014 ).

Wide variety of learning outcomes that potentially emerge from making activities, as well as the development of assessment methods and especially systematic ways to measure student learning (Lin et al., 2020 ; Rouse & Rouse, 2022 ; Schad & Jones, 2020 ).

Equity and inclusivity in maker education (Rouse & Rouse, 2022 ; Vossoughi & Bevan, 2014 ).

Practices, tools, and technologies used in makerspaces and digital fabrication (Iivari et al., 2016 ; Papavlasopoulou et al., 2017 ).

Implementation and effects of maker education in formal, school-based settings and specific age groups, especially early childhood education (Papavlasopoulou et al., 2017 ; Rouse & Rouse, 2022 ).

Methodology

This review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, adapting them to educational settings where studies are conducted with qualitative, quantitative, and mixed methods (Page et al., 2021 ; Tong et al., 2012 ). Review protocols were defined for data collection; inclusion, exclusion, and quality criteria; and data analysis. In the following, the method used at each stage of the review process will be described in detail.

Data collection

To gather high-quality and comprehensive data, a search for peer-reviewed articles was conducted in three international online bibliographic databases: Scopus, Education Resources Information Center (ERIC), and Academic Search Complete (EBSCO). Scopus and EBSCO are extensive multi-disciplinary databases for research literature, covering research published in over 200 disciplines, including education, from over 6000 publishers. ERIC concentrates exclusively on education-related literature, covering publications from over 1900 full-text journals. These three databases were considered to offer a broad scope to capture comprehensive new literature on K-12 maker education. The search aimed to capture peer-reviewed literature on maker education and related processes conducted in both formal and informal K-12 educational settings. The search was limited to articles published in English between 2020 and 2023. Major search terms and their variations were identified, and a Boolean search string was formed from them. The search was implemented in October 2023 with the following search string, applied to titles, abstracts, and keywords:

(“maker education” OR “maker pedagogy” OR “maker-centered learning” OR “maker centered learning” OR “maker-centred learning” OR “maker centred learning” OR “maker learning” OR “maker space*” OR makerspace* OR “maker culture” OR “design learning” OR “maker practices” OR “collaborative invention*” OR co-invention*) AND (“knowledge-creation” OR “knowledge creation” OR “knowledgecreation” OR maker* OR epistemic OR “technology education” OR “design-based learning” OR “design based learning” OR “designbased learning” OR “design learning” OR “design thinking” OR “codesign” OR “co-design” OR “co design” OR craft* OR tinker* OR “collaborative learning” OR inquiry* OR “STEAM” OR “project-based learning” OR “project based learning” OR “projectbased learning” OR “learning project*” OR “knowledge building” OR “making” OR creati* OR innovat* OR process*) AND (school* OR pedago* OR “secondary education” OR “pre-primary education” OR “primary education” OR “special education” OR “early childhood education” OR “elementary education” OR primary-age* OR elementary-age* OR “k-12” OR “youth” OR teen* OR adolescen* OR child* OR “tween”) .
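The structure of this search string — three AND-connected groups of OR-connected terms, with `*` acting as a truncation wildcard — can be illustrated with a small sketch. This is illustrative only: matching in practice is performed by each database's own query engine, and only a small subset of the actual terms is shown here.

```python
import re

# Illustrative subset of the search string's three AND-ed groups of OR-ed
# terms; '*' is treated as a word-prefix (truncation) wildcard.
GROUPS = [
    ["maker education", "maker pedagogy", "makerspace*", "maker space*",
     "maker culture", "design learning", "co-invention*"],
    ["knowledge creation", "maker*", "tinker*", "design thinking",
     "STEAM", "project-based learning", "creati*", "innovat*"],
    ["school*", "pedago*", "k-12", "child*", "adolescen*", "teen*"],
]

def term_to_pattern(term: str) -> re.Pattern:
    # Escape the term literally, then turn a '*' into a word-character run.
    core = re.escape(term).replace(r"\*", r"\w*")
    return re.compile(r"\b" + core + r"\b", re.IGNORECASE)

def matches(record_text: str) -> bool:
    # AND across groups, OR within each group: every group must
    # contribute at least one matching term for the record to be kept.
    return all(
        any(term_to_pattern(t).search(record_text) for t in group)
        for group in GROUPS
    )

title_abstract = ("Makerspaces in primary school: children's collaborative "
                  "invention and knowledge creation")
print(matches(title_abstract))  # True: each group has at least one hit
```

A record failing even one of the three groups is excluded, which is why each group bundles many near-synonymous variants of the same concept.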

Inclusion and exclusion criteria

The search provided 700 articles in total, 335 from Scopus, 345 from EBSCO, and 20 from ERIC, which were aggregated in Rayyan (Ouzzani et al., 2016 ), a web and mobile app for systematic reviews, for further processing and analysis. After eliminating duplicates, 513 studies remained. At the next stage, the titles and abstracts of these studies were screened independently by two researchers to identify papers within the scope of this review. Any conference papers, posters, work-in-progress studies, non-peer-reviewed papers, review articles, and papers focusing on teacher education or teachers’ professional development were excluded from the review. To be included, a study had to meet all four of the following inclusion criteria. It had to:

show empirical evidence.

describe any making experience or testing process conducted by the participants.

include participants from the K-12 age group in their sample.

have an educational purpose.

For example, studies that relied purely on statistical data collected outside a maker educational setting, or studies that described a makerspace design process but did not include any research data from an actual making experience conducted by participants from the K-12 age group, were excluded. Studies conducted in both formal and informal settings were included in the review, regardless of whether they used qualitative, quantitative, or mixed methods. After the independent screening process, the results were combined, and any conflicting assessments were discussed and settled. Finally, 149 studies were selected for retrieval and further eligibility evaluation, of which five were not available. Thus, the screening resulted in 144 studies whose full texts were retrieved for the application of the quality criteria and further analysis.

Quality criteria

The quality of each of the remaining 144 studies was assessed against the Critical Appraisal Skills Programme’s ( 2023 ) qualitative study checklist, which was slightly adjusted for the context of this review. The checklist consisted of ten questions that each address one quality criterion:

Was there a clear statement of the aims of the research?

Are the methodologies used appropriate?

Was the research design appropriate to address the research aims?

Was the recruitment strategy appropriate to the aims of the research?

Was the data collected in a way that addressed the research issue?

Has the relationship between the researcher and participants been adequately considered?

Have ethical issues been taken into consideration?

Was the data analysis sufficiently rigorous?

Is there a clear statement of findings?

How valuable is the research?

The first author assessed the quality by reading each study’s full text. To be included in the final analysis, a study had to meet both the inclusion-exclusion and the quality criteria. In this phase, the final assessment of eligibility, 50 studies were excluded for not meeting the initial inclusion and exclusion criteria, and 32 for not meeting the quality criteria. A total of 62 studies were included in the final analysis of this literature review. The PRISMA flow chart (Haddaway et al., 2022 ; see also Page et al., 2021 ) of the study selection process is presented in Fig.  1 .

figure 1

PRISMA study selection flow chart (Haddaway et al., 2022 )
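As a consistency check, the selection counts reported in this section can be reproduced with simple arithmetic; the numbers below are taken directly from the text.

```python
# Sketch of the study selection flow, using the counts reported above.
identified = {"Scopus": 335, "EBSCO": 345, "ERIC": 20}
total = sum(identified.values())                      # 700 records identified
after_dedup = 513                                     # 187 duplicates removed
screened_in = 149                                     # kept after title/abstract screening
not_retrievable = 5
full_text_assessed = screened_in - not_retrievable    # 144 full texts assessed
excluded_criteria = 50                                # failed inclusion/exclusion on full text
excluded_quality = 32                                 # failed quality appraisal
final_included = full_text_assessed - excluded_criteria - excluded_quality

print(total, full_text_assessed, final_included)      # 700 144 62
```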

Qualitative content analysis of the reviewed studies

The analysis of the studies included in the review was conducted through careful reading of the full texts of the articles by the first author. To answer the first research question (What were the characteristics of the studies in terms of geographical regions, quantity of publications, research settings, and methods?), a deductive coding framework was applied that covered the characterizing factors of each study, its research setting, and the data collection and analysis methods applied. The predetermined categories of the study characteristics and the codes associated with each category are presented in Table  1 . The educational level of the participants was determined by following the International Standard Classification of Education (ISCED) (UNESCO Institute for Statistics, 2012 ). Educational level was chosen instead of age group as a coding category because, during the first abstract and title screening of the articles, it became evident that the studies describe their participants more often by educational level than by age. The educational levels were converted from national educational systems following the ISCED diagrams (UNESCO Institute for Statistics, 2021 ).

In addition to the deductive coding, the following analysis categories were gathered from the articles through inductive analysis: journal, duration of the project, number of participants, types of research data collected, and specific data analysis methods. Furthermore, the following characteristics of the studies were marked in the data when applicable: whether the research was conducted as a case study, the use of control groups, and any specific focus on minority groups, gifted students, special needs students, or inclusion. Inductive coding and thematic analysis were applied to answer the second research question: What were the research interests and findings of the reviewed studies? The categorization of research interests was then combined with some aspects of the first part of the analysis to reveal further interesting characteristics of the latest developments in research on maker education.
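The combined deductive and inductive coding frame described above might be represented as a simple record structure. The field names follow the categories mentioned in the text, but the concrete values below are purely illustrative; the actual code lists are given in Table 1.

```python
from dataclasses import dataclass, field

@dataclass
class CodedStudy:
    # Deductive categories (predetermined; see Table 1):
    country: str
    year: int
    method: str                  # "qualitative" | "quantitative" | "mixed"
    setting: str                 # "formal" | "informal" | "both"
    isced_levels: list[str] = field(default_factory=list)  # e.g. ["ISCED 1"]
    case_study: bool = False
    control_group: bool = False
    # Inductively gathered categories:
    journal: str = ""
    duration: str = ""
    n_participants: int = 0
    data_types: list[str] = field(default_factory=list)

# Hypothetical example record, not an actual reviewed study:
example = CodedStudy(
    country="Finland", year=2021, method="qualitative", setting="formal",
    isced_levels=["ISCED 2"], case_study=True, n_participants=45,
    data_types=["video recordings", "interviews", "design artifacts"],
)
print(example.method, example.isced_levels)
```

Representing each coded study as one record of this kind makes the cross-tabulations reported in the findings (e.g., methods by participant counts, or duration by method) straightforward to compute.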

In the following, the findings of this systematic literature review will be presented for each research question separately.

Characteristics of research in K-12 maker education in the 2020s

Of the studies included in the review, presented in Table  2 , 20 were published in 2020, 17 in 2021, 12 in 2022, and 13 in 2023. The slight decline in publications does not necessarily indicate a decline in interest towards maker education but is more likely due to the COVID-19 pandemic, which heavily limited hands-on activities and in situ data collection. Compared to the latest wide-scope review on maker education (Papavlasopoulou et al., 2017 ), the number of high-quality studies published yearly appears to have remained at a similar level. The studies included in the present review were published in 34 different peer-reviewed academic journals, of which 13 published two or more articles.

Regarding the geographic distribution of the studies conducted on maker education, the field seems to be becoming more internationally spread. In 2020, the published studies mainly reported research conducted in either Finland ( n  = 12) or the USA ( n  = 6), whereas in the subsequent years, the studies were distributed more evenly around the world. However, North America and Scandinavia remained the epicenters of research on maker education, accounting for over half of the studies published each year.

Most of the reviewed studies used qualitative methods ( n  = 42). Mixed methods were utilized in 13 studies, and quantitative methods in seven. Forty-four studies were described as case studies by their authors, while a control group was used in four quantitative and two mixed methods studies. The analysis indicated an interesting shift in research towards making activities becoming part of formal educational settings rather than informal, extracurricular activities. Of the studies included in this review, 82% ( n  = 51) were conducted exclusively in formal educational settings. This contrasts significantly with the previous literature review by Papavlasopoulou and colleagues ( 2017 ), where most studies were conducted in informal settings. Furthermore, Schad and Jones ( 2020 ) identified only 20 studies between 2000 and 2018 conducted in formal K12 educational settings, and Rouse and Rouse ( 2022 ) identified 22 studies in similar settings from 2014 to early 2020. In these reviews, nearly all studies conducted in formal educational settings were published in the final years of the 2010s. Thus, this finding suggests that the change in learning settings started to emerge in the latter half of the 2010s, and in the 2020s, maker education in formal settings has become the prominent focus of research. The need for further research in formal settings was one of the main research gaps identified in previous literature reviews (Papavlasopoulou et al., 2017 ; Rouse & Rouse, 2022 ).

In addition to the shift from informal to formal educational settings, the projects studied in the reviewed articles were conducted nearly as often in school and classroom environments ( n  = 26) as in designated makerspaces ( n  = 28). Only seven of the studied projects took place in other locations, such as youth clubs, libraries, or summer camps. One project was conducted entirely in an online learning environment. Most of the studied projects involved children exclusively from primary ( n  = 27) or lower secondary ( n  = 26) education levels. Only three studies were done with students in upper secondary education. As in the previous literature reviews, only a few studies concentrated on children in early childhood education (Papavlasopoulou et al., 2017 ; Rouse & Rouse, 2022 ). Three articles reported projects conducted exclusively with early childhood education age groups, and three studies had participants from early childhood education together with children from primary ( n  = 2) or lower secondary education ( n  = 1).

The number of child participants in the studies varied between 1 and 576, and 14 studies also included teachers or other adults in their sample. The number of participating children in relation to the methods used is presented in Fig.  2 . Most of the qualitative studies had fewer than 100 children in their sample. However, there were three qualitative studies with 100 to 199 child participants (Friend & Mills, 2021 ; Leskinen et al., 2021 ; Riikonen, Kangas, et al., 2020 ) and one study with 576 participating children (Forbes et al., 2021 ). Studies utilizing mixed methods were either conducted with a very large number of child participants or with fewer than 100, ranging from 4 to 99. Studies using quantitative methods, on the other hand, in most cases had 50–199 participants ( n  = 6). One quantitative study was conducted with 35 child participants (Yin et al., 2020 ). Many studies included participants from non-dominant backgrounds or with special educational needs. However, only two studies focused specifically on youth from non-dominant backgrounds (Brownell, 2020 ; Hsu et al., 2022 ), and three studies focused exclusively on inclusion and students with special needs (Giusti & Bombieri, 2020 ; Martin et al., 2020 ; Sormunen et al., 2020 ). In addition, one study specifically chose gifted students as their sample (Andersen et al., 2022 ).

figure 2

Child participants in the reviewed studies in relation to the methods used

Slightly over half of the studied projects had only collaborative tasks ( n  = 36), 11 projects involved both collaborative and individual tasks, and in 11 projects, the participants worked on their own individual tasks. Four studies did not specify whether the project was built around collaborative or individual tasks. In most cases, the projects involved both traditional tangible tools and materials as well as digital devices and fabrication technologies ( n  = 54). In five projects, the students worked entirely with digital design and making methods, and in three cases, only with traditional tangible materials. Similarly, the outcomes of the project tasks were mainly focused on designing and building artifacts that included both digital and material elements ( n  = 31), or the project included multiple activities and the building of several artifacts that were either digital, material, or had both elements ( n  = 17). Eleven projects included digital exploration without an aim to build a design artifact as a preparatory activity, whereas one project was based solely on digital exploration as the making activity. Material artifacts without digital elements were made in seven of the studied projects, and six concentrated solely on digital artifact making.

The duration of the projects varied between two hours (Tisza & Markopoulos, 2021 ) and five years (Keune et al., 2022 ). The number of studies in each categorized project duration range, in relation to the methods used, is presented in Fig.  3 . Over half of the projects lasted between one month and one year ( n  = 35), nine were longer, lasting between one and five years, and 14 were short projects lasting less than one month. Three qualitative studies and one quantitative study did not give any indication of the duration of the project. Most of the projects in qualitative studies took at least one month ( n  = 32), whereas projects in mixed method studies were usually shorter than three months ( n  = 10). On the other hand, quantitative studies usually investigated projects that were either shorter than three months ( n  = 4) or longer than one year ( n  = 2).

figure 3

Duration of the studied projects in relation to the methods used

A multitude of different types of data was used in the reviewed studies. The data collection methods utilized by at least three reviewed studies are presented in Table  3 . Qualitative studies usually utilized several (2 to 6) different data gathering methods ( n  = 31), and all mixed method studies used more than one type of data (2 to 6). The most common data collection methods in qualitative studies were video data, interviews, and ethnographic observations combined with other data, such as design artifacts, photographs, and student portfolios. In addition to the data types specified in Table  3 , some studies used more unusual data collection methods such as lesson plans (Herro et al., 2021b ), the think-aloud protocol (Friend & Mills, 2021 ; Impedovo & Cederqvist, 2023 ), and social networks (Tenhovirta et al., 2022 ). Eleven qualitative studies used only one type of data, mainly video recordings ( n  = 9). Mixed method studies, on the other hand, often relied on interviews, pre-post measurements, surveys, and video data. In addition to the data types in Table  3 , mixed-method studies utilized biometric measurements (Hsu et al., 2022 ; Lee, 2021 ), lesson plans (Falloon et al., 2022 ), and teacher assessments (Doss & Bloom, 2023 ). In contrast to the qualitative and mixed method studies, all quantitative studies, apart from one (Yin et al., 2020 ), used only one form of research data, either pre-post measurements or surveys.

The findings on data collection methods are similar to those of the previous literature review by Papavlasopoulou and colleagues ( 2017 ) regarding the wide variety of data types used in qualitative and mixed-method studies. However, compared to their findings on the specific types of research data used, video recordings have become the most popular way of collecting data in recent years, replacing interviews and ethnographic observations.

Research interests and findings of the reviewed studies

Seven categories of research interests emerged from the inductive coding of the reviewed studies. The categories are presented in Table  4 in relation to the research methods and educational levels of the participating children. Five qualitative studies, four mixed methods studies, and two quantitative studies had research interests from more than one category. Processes, activity, and practices, as well as sociomateriality in maker education, were studied exclusively with qualitative methods, whereas nearly all studies on student motivation, interests, attitudes, engagement, and mindset were conducted with mixed or quantitative methods. In the two largest categories, most of the studies utilized qualitative methods. Studies conducted with mixed or quantitative methods mainly concentrated on two categories: student learning and learning opportunities, and student motivation, interests, attitudes, engagement, and mindset. In the following section, the research interests and findings for each category will be presented in detail.

Nearly half of the reviewed studies ( n  = 30) had a research interest in either student learning through making activities in general or the learning opportunities provided by such activities. Five qualitative case studies (Giusti & Bombieri, 2020 ; Hachey et al., 2022 ; Hagerman et al., 2022 ; Hartikainen et al., 2023 ; Morado et al., 2021 ) and two mixed method studies (Martin et al., 2020 ; Vuopala et al., 2020 ) investigated the overall educational value of maker education. One of these studies was conducted in early childhood education (Hachey et al., 2022 ), and two in the context of inclusion in primary and lower secondary education (Giusti & Bombieri, 2020 ; Martin et al., 2020 ). They all reported positive findings on children’s identity formation and the development of skills beyond subject-specific competencies, such as creativity, innovation, cultural literacy, and learning skills. The studies conducted in the context of inclusion especially emphasized the potential of maker education in pushing students with special needs to achieve goals exceeding their supposed cognitive abilities (Giusti & Bombieri, 2020 ; Martin et al., 2020 ). Three studies (Forbes et al., 2021 ; Kumpulainen et al., 2020 ; Xiang et al., 2023 ) investigated student learning through the Maker Literacies Framework (Marsh et al., 2018 ). They also reported positive findings on student learning and skill development in early childhood and primary education, especially on the operational dimension of the framework, as well as on the cultural and critical dimensions.
These positive results were further confirmed by the reviewed studies that investigated more specific learning opportunities provided by maker education on developing young people’s creativity, innovation skills, design thinking and entrepreneurship (Liu & Li, 2023 ; Timotheou & Ioannou, 2021 ; Weng et al., 2022a , b ), as well as their 21st-century skills (Iwata et al., 2020 ; Tan et al., 2021 ), and critical data literacies and critical thinking (Stornaiuolo, 2020 ; Weng et al., 2022a ).

Studies that investigated subject-specific learning most often focused on STEM subjects or programming and computational thinking. Based on the findings of these studies, maker-centered learning activities are effective but underused (Mørch et al., 2023 ). Furthermore, in early childhood education, such activities may support children taking on the role of a STEM practitioner (Hachey et al., 2022 ) and, on the other hand, provide them access to learning about STEM subjects beyond their grade level, even in upper secondary education (Tofel-Grehl et al., 2021 ; Winters et al., 2022 ). However, two studies (Falloon et al., 2022 ; Forbes et al., 2021 ) highlighted that it cannot be assumed that students naturally learn science and mathematics conceptual knowledge through making. To achieve learning in STEM subjects, especially science and mathematics, teachers need to specifically identify, design, and focus the making tasks on these areas. One study also looked at the effects of the COVID-19 pandemic on STEM disciplines and found the restrictions on the use of common makerspaces and the changes in the technologies used to have been detrimental to students’ learning in these areas (Dúo-Terrón et al., 2022 ).

Only positive findings emerged from the reviewed studies on how digital making activities promote the development of programming and computational thinking skills and practices (Iwata et al., 2020 ; Liu & Li, 2023 ; Yin et al., 2020 ) and understanding of programming methods used in AI and machine learning (Ng et al., 2023 ). Experiences of fun provided by the making activities were also found to enhance further student learning about programming (Tisza & Markopoulos, 2021 ). One study also reported positive results on student learning of academic writing skills (Stewart et al., 2023 ). There were also three studies (Brownell, 2020 ; Greenberg et al., 2020 ; Wargo, 2021 ) that investigated the potential of maker education to promote equity and learning about social justice and injustice, as well as one study that examined learning opportunities on sustainability (Impedovo & Cederqvist, 2023 ). All these studies found making activities and makerspaces to be fertile ground for learning as well as identity and community building around these topics.

The studies with research interests in the second largest category, facilitation and teaching practices ( n  = 13), investigated a multitude of different aspects of this area. The studies on assessment methods highlighted the educational value of process-based portfolios (Fields et al., 2021 ; Riikonen, Kangas et al., 2020 ) and connected portfolios, that is, digital portfolios aligned with a connected learning framework (Keune et al., 2022 ). On the other hand, Walan and Brink ( 2023 ) concentrated on developing and analyzing the outcomes of a self-assessment tool for maker-centered learning activities designed to promote 21st-century skills. Several research interests emerged from the review related to scaffolding and implementation of maker education in schools. Riikonen, Kangas, and colleagues ( 2020 ) investigated the pedagogical infrastructures of knowledge-creating, maker-centered learning. Their study emphasized longstanding, iterative, socio-material projects, where real-time support and embedded scaffolding are provided to the participants by a multi-disciplinary teacher team and, ideally, also by peer tutors. Multi-disciplinary collaboration was also emphasized by Pitkänen and colleagues ( 2020 ) in their study on the role of facilitators as educators in Fab Labs. Cross-age peer tutoring was investigated by five studies and found to be highly effective in promoting learning in maker education (Kumpulainen et al., 2020 ; Riikonen, Kangas, et al., 2020 ; Tenhovirta et al., 2022 ; Weng et al., 2022a ; Winters et al., 2022 ). Kajamaa and colleagues ( 2020 ) further highlighted the importance of team teaching and emphasized moving from authoritative interaction with students to collaboration.
Sormunen and colleagues' (2020) findings on teacher support in an inclusive setting demonstrated how teacher-directed scaffolding and the facilitation of student cooperation and reflective discussions are essential in promoting inclusion-related participation, collaboration skills, and student competence building. One study (Andersen et al., 2022) took a different approach and investigated the possibilities of automatically scaffolding making activities through AI. The authors concluded that automated scaffolding has excellent potential in maker education and went as far as to suggest transitioning to it. One study also recognized the potential of combining making activities with drama education (Walan, 2021).

A variety of processes, activities, and practices in maker-centered learning projects were studied by 11 qualitative studies included in this review. Two interlinked studies (Davies et al., 2023; Riikonen, Seitamaa-Hakkarainen et al., 2020) investigated practices and processes related to collaborative invention, making, and knowledge creation in lower secondary education. Their findings highlighted the multifaceted and iterative nature of such processes as well as the potential of maker education to offer students authentic opportunities for knowledge creation. Sinervo and colleagues (2021) also investigated the nature of co-invention processes, focusing on how children themselves describe and reflect on their own processes. Their findings showed that children could recognize the different external constraints involved in their designs as well as the importance of iterative ideation and of testing ideas through prototyping. Two further studies examined innovation and invention practices in both formal and informal settings with children at the primary level of education (Leskinen et al., 2023; Skåland et al., 2020). Skåland and colleagues' (2020) findings suggest that narrative framing, that is, storytelling with the children, is an especially fruitful approach in a library setting and helps children understand their process of inventing. Similar findings emerged from a study on the role of play in early childhood maker education (Fleer, 2022), in which play enhanced design cognition and related processes and helped young children make sense of design. Leskinen and colleagues (2023), in turn, showed how innovations are jointly practiced in the interaction between students and teachers. They also emphasized the importance of drawing on manifold information sources and material elements in creative innovation processes.

One study (Kajamaa & Kumpulainen, 2020) investigated collaborative knowledge practices and how they are mediated in school makerspaces. It identified four types of knowledge practices involved in maker-centered learning activities: orienting, interpreting, concretizing, and expanding knowledge, and showed how discourse, materials, embodied actions, and the physical space mediate these practices. The findings also indicated that, due to the complexity of these practices, students might find maker-centered learning activities difficult. The sophisticated epistemic practices involved in collaborative invention processes were also demonstrated by the findings of Mehto, Riikonen, Hakkarainen, and colleagues (2020a). Other investigators examined how art-based (Lindberg et al., 2020), touch-related (Friend & Mills, 2021), and information (Li, 2021) practices affect making and can be incorporated into it. All three studies reported positive findings both on the effects of these practices on student learning and on the further development of the practices themselves.

Student motivation, interests, attitudes, engagement, and mindset were studied by eight of the reviewed articles, all using either mixed (n = 6) or quantitative (n = 2) methods. The studies that investigated student motivation and engagement in making activities (Lee, 2021; Martin et al., 2020; Ng et al., 2023; Nikou, 2023) highlighted social interactions and collaboration as highly influential factors in these areas. Conversely, positive attitudes toward collaboration also developed through these activities (Nguyen et al., 2023). Making activities conducted in the context of equity-oriented pedagogy were found to have great potential in sustaining non-dominant youths', especially girls', positive attitudes toward science (Hsu et al., 2022), although a similar potential was not found for the development of interest in STEM subjects among autistic students (Martin et al., 2020). Two studies investigated student mindsets in maker-centered learning activities (Doss & Bloom, 2023; Vongkulluksn et al., 2021). Doss and Bloom (2023) identified seven different student mindset profiles present in making activities. Over half (56.67%) of the students in their study shared the same profile, characterized as 'Flexible, Goal-Oriented, Persistent, Optimistic, Humorous, Realistic about Final Product' (Doss & Bloom, 2023, p. 4). Vongkulluksn and colleagues (2021), in turn, investigated growth mindset trends among students who participated in an elementary school makerspace program for two years. Their findings revealed how makerspace environments can potentially improve students' growth mindset.

Six studies included in this review analyzed collaboration within making activities. Students were found to be supportive and respectful toward each other and to recognize and draw on each other's expertise (Giusti & Bombieri, 2020; Herro et al., 2021a, b). The making activities and their outcomes were found to act as mediators in promoting mutual recognition between students with varying cognitive capabilities and special needs in inclusive settings (Herro et al., 2021a). Furthermore, the community of interest that emerges through collaborative making activities was found to be effective in supporting interest development and sustainability (Tan et al., 2021). Students were observed to divide work and share roles during their team projects, usually based on their interests, expertise, and skills (Herro et al., 2021a, b). The findings of Stewart et al. (2023) suggested that when teachers preassign roles to team members, student stress in maker activities decreases. However, when dominating leadership roles emerged in a team, this led to less advanced forms of collaboration than shared leadership within the team (Leskinen et al., 2021).

Sociomaterial aspects of making activities were the focus of three reviewed studies (Kumpulainen & Kajamaa, 2020; Mehto et al., 2020a, b). Materials were shown to play an active role in knowledge creation and ideation in open-ended maker-centered learning (Mehto et al., 2020a), which allows for thinking together with the materials (Mehto et al., 2020b). Task-related physical materials act as a focal point for team collaboration and invite participation (Mehto et al., 2020b). Furthermore, a study by Kumpulainen and Kajamaa (2020) emphasized the sociomaterial dynamics of agency, where agency flows in any combination between students, teachers, and materials. However, the singularity or multiplicity of the materials potentially affects opportunities for access to and control of the process (Mehto et al., 2020b).

In addition to empirical research interests, five studies focused on developing research methods for measuring and analyzing different aspects of maker education. Biometric measurements were investigated as a potential data source for detecting engagement in making activities (Lee, 2021). Yin and colleagues (2020) focused on developing instruments for the quantitative measurement of computational thinking skills. Timotheou and Ioannou (2021), in turn, designed and tested an analytic framework and coding scheme for analyzing learning and innovation skills from qualitative interviews and video data. Artificial intelligence as a potential, partially automated tool for analyzing computer-supported collaborative learning (CSCL) artifacts was also investigated by one study (Andersen et al., 2022). Finally, Riikonen, Seitamaa-Hakkarainen, and colleagues (2020) developed visual video data analysis methods for investigating collaborative design and making activities.

Slightly over half of the reviewed studies (n = 33) made clear suggestions for future research. As expected, these studies suggested further investigation of their own research interests. Across the studies, however, five themes of recommendations for future research interests and designs emerged from the data:

1. Studies conducted with a diverse range of participants, pedagogical designs, and contexts (Hartikainen et al., 2023; Kumpulainen & Kajamaa, 2020; Leskinen et al., 2023; Lindberg et al., 2020; Liu & Li, 2023; Martin et al., 2020; Mehto et al., 2020b; Nguyen et al., 2023; Sormunen et al., 2020; Tan et al., 2021; Weng et al., 2022a, b; Yin et al., 2020).

2. Longitudinal studies to confirm existing research findings, to further develop pedagogical approaches to making, and to better understand the effects of maker education on students later in their lives (Davies et al., 2023; Fields et al., 2021; Kumpulainen et al., 2020; Kumpulainen & Kajamaa, 2020; Stornaiuolo, 2020; Tisza & Markopoulos, 2021; Walan & Brink, 2023; Weng et al., 2022a).

3. Development of new methods and application of existing methods under different conditions (Doss & Bloom, 2023; Kumpulainen et al., 2020; Leskinen et al., 2021; Mehto et al., 2020b; Mørch et al., 2023; Tan et al., 2021; Timotheou & Ioannou, 2021; Tisza & Markopoulos, 2021).

4. Identifying optimal conditions and practices for learning, skill, and identity development through making (Davies et al., 2023; Fields et al., 2021; Hartikainen et al., 2023; Tofel-Grehl et al., 2021).

5. Collaboration, from the perspectives of how it affects the processes and outcomes of making activities and, conversely, how such activities affect collaboration (Pitkänen et al., 2020; Tisza & Markopoulos, 2021; Weng et al., 2022a).

Discussion and conclusions

This systematic literature review was conducted to describe the development of research on maker education in the early 2020s. Of the 700 studies initially identified from three major educational research databases, 62 were included in the review. The qualitative analysis of the reviewed studies revealed some interesting developments in the field. Overall, research on maker education appears to be active and is attracting interest from researchers around the globe. Two epicenters of research, North America and Scandinavia (particularly Finland), nevertheless play an especially active role.

Most studies relied on rich qualitative data, often collected using several methods. Video recordings have become a popular way to collect data in maker education research. Although qualitative methods remained the dominant methodological approach in the field (Papavlasopoulou et al., 2017; Rouse & Rouse, 2022; Schad & Jones, 2020), mixed and quantitative methods were used in nearly a third of the reviewed studies, mainly to measure learning outcomes or participants' motivation, interests, attitudes, engagement, and mindsets. The duration of the maker projects and the number of participants varied greatly: projects lasted from less than a day up to five years, and the number of participants ranged from one to nearly six hundred. Methodological development was also among the research interests of several of the reviewed studies, with advances in both qualitative and quantitative methodologies. Such methodological development addresses one of the research gaps identified in previous literature reviews (e.g., Schad & Jones, 2020).

The analysis of the reviewed studies revealed an interesting shift in research on maker education from informal settings to formal education. Most studies were conducted exclusively in formal education, often as part of curricular activity, a development called for in previous literature reviews (Papavlasopoulou et al., 2017; Rouse & Rouse, 2022). However, only a handful of studies were conducted in early childhood education. Winters and colleagues' (2022) study used an interesting setting in which children from early childhood education worked together and were mentored by students from lower secondary education. This type of research setting could hold great potential for future research in maker education.

Another research gap identified in the previous literature reviews was the need to study and measure a wide variety of potential learning opportunities and outcomes of maker education (Lin et al., 2020; Rouse & Rouse, 2022; Schad & Jones, 2020). The analysis revealed that new research in the field is actively filling this gap. Skills that go beyond subject-specific content, and the development of participants' identities through making activities, were studied especially actively and from various perspectives. The findings of these studies were distinctly positive, corresponding with the conclusions of previous literature reviews (e.g., Papavlasopoulou et al., 2017; Schad & Jones, 2020; Vossoughi & Bevan, 2014). This potential of maker education should be recognized by educators and policymakers, especially as advances in AI technologies foreground the need for the humane skills of working creatively with knowledge and different ways of knowing, empathic engagement, and collaboration (e.g., Liu et al., 2024; Markauskaite et al., 2022; Qadir, 2023; World Economic Forum, 2023). Some of these studies also addressed the promotion of equity through maker education, which was called for in previous literature reviews (Rouse & Rouse, 2022; Vossoughi & Bevan, 2014). However, considering the small number of these studies, more research is still needed.

The two other popular research interest categories that emerged from the analysis were facilitation and teaching practices, and the processes, activities, and practices involved in making, both identified as research gaps in previous literature reviews (Iivari et al., 2016; Papavlasopoulou et al., 2017; Rouse & Rouse, 2022; Schad & Jones, 2020; Vossoughi & Bevan, 2014). Teaching practices and the scaffolding of making activities were investigated from several angles, such as assessment methods, the implementation of maker education in schools, and cross-age peer tutoring. The results of these studies highlighted the positive effects of multi-disciplinary collaboration and peer tutoring. Such pedagogical approaches should be more widely promoted as integral parts of the pedagogical infrastructure in schools. However, this calls for measures from policymakers and school authorities to enable collaborative ways of teaching that extend beyond the traditional structures of school organizations. Furthermore, although research in this area has been active and multi-faceted, the facilitation of maker education in inclusive settings especially calls for further investigation. In terms of the processes, practices, and activities involved in making, the reviewed studies investigated a variety of aspects, revealing the sophisticated epistemic practices involved and the importance of concrete making, prototyping, and iterative ideation in maker-centered learning activities. These studies further highlighted the potential of maker education to offer students authentic opportunities for knowledge creation. Studies also examined the collaboration and sociomateriality involved in maker education; sociomateriality in particular is a relatively new, emerging area of research.

The reviewed studies identified five research gaps that require further investigation: (1) conducting studies with a diverse range of participants, pedagogical designs, and contexts; (2) carrying out longitudinal studies; (3) developing new methods and applying existing methods in different settings; (4) identifying the most effective conditions and practices for learning, skill development, and identity formation in maker education; and (5) understanding how collaboration affects the processes and outcomes of making activities and vice versa. The analysis also revealed gaps beyond those identified by the reviewed studies themselves. Studies conducted in early childhood education and in inclusive settings remain especially under-represented, although maker pedagogies have been found to hold great potential in these areas. Similarly, many researchers have recognized the potential of maker education to promote equality between children of different backgrounds and genders, yet only a handful of studies investigated these issues. Thus, more research is needed, especially on best practices and pedagogical approaches in this area. Furthermore, the processes involved in and affecting maker-centered learning call for further investigation.

Based on the analysis of the reviewed studies, the field has matured. It is moving from striving to understand what can be achieved toward investigating the underlying conditions of learning through making: how desired outcomes can best be achieved, how the processes involved in making unfold, what the long-term effects are, and how best to understand and measure the different phenomena related to making. Furthermore, researchers are exploring more and more ways to expand the learning opportunities of maker education by combining it with other creative pedagogies and applying it to projects that introduce subject-specific content beyond STEM.

This systematic literature review has several limitations. The limitations typical of most review studies, namely the potential loss of relevant results due to the limited search terms and databases used, also apply to this review. For example, more culturally diverse search results might have been obtained with additional databases and search terms. However, the search string was carefully designed and tested to include as many of the terms commonly used in maker education research as possible, including their variations. Furthermore, the three databases used in the search, Scopus, ERIC, and EBSCO, are regarded as the most comprehensive databases of educational research available. Thus, although some studies might have been missed because of these limitations, this review can be assumed to give a sufficiently comprehensive snapshot of research on maker education in the early 2020s.

Andersen, R., Mørch, A. I., & Litherland, K. T. (2022). Collaborative learning with block-based programming: Investigating human-centered artificial intelligence in education. Behaviour & Information Technology , 41 (9), 1830–1847. https://doi.org/10.1080/0144929X.2022.2083981


Blikstein, P. (2013). Digital fabrication and ‘making’ in education: The democratization of invention. In C. Büching & J. Walter-Herrmann (Eds.), FabLabs: Of machines, makers and inventors (pp. 203–222). Transcript Publishers. https://doi.org/10.1515/transcript.9783839423820.203

Brownell, C. J. (2020). Keep walls down instead of up: Interrogating writing/making as a vehicle for black girls’ literacies. Education Sciences , 10 (6), 159. https://doi.org/10.3390/educsci10060159

Chawla, L., & Heft, H. (2002). Children’s competence and the ecology of communities: A functional approach to the evaluation of participation. Journal of Environmental Psychology , 22 (1–2), 201–216. https://doi.org/10.1006/jevp.2002.0244

Critical Appraisal Skills Programme (2023). CASP Qualitative Studies Checklist . https://casp-uk.net/checklists/casp-qualitative-studies-checklist-fillable.pdf

Davies, S., Seitamaa-Hakkarainen, P., & Hakkarainen, K. (2023). Idea generation and knowledge creation through maker practices in an artifact-mediated collaborative invention project. Learning, Culture and Social Interaction, 39 , 100692. https://doi.org/10.1016/j.lcsi.2023.100692

Doss, K., & Bloom, L. (2023). Mindset and the desire for feedback during creative tasks. Journal of Creativity , 33 (1), 100047. https://doi.org/10.1016/j.yjoc.2023.100047

Dúo-Terrón, P., Hinojo-Lucena, F. J., Moreno-Guerrero, A. J., & López-Belmonte, J. (2022). Impact of the pandemic on STEAM disciplines in the sixth grade of primary education. European Journal of Investigation in Health Psychology and Education , 12 (8), 989–1005. https://doi.org/10.3390/ejihpe12080071

Falloon, G., Forbes, A., Stevenson, M., Bower, M., & Hatzigianni, M. (2022). STEM in the making? Investigating STEM learning in junior school makerspaces. Research in Science Education , 52 (2), 511–537. https://doi.org/10.1007/s11165-020-09949-3

Fields, D. A., Lui, D., Kafai, Y. B., Jayathirtha, G., Walker, J., & Shaw, M. (2021). Communicating about computational thinking: Understanding affordances of portfolios for assessing high school students’ computational thinking and participation practices. Computer Science Education , 31 (2), 224–258. https://doi.org/10.1080/08993408.2020.1866933

Fleer, M. (2022). The genesis of design: Learning about design, learning through design to learning design in play. International Journal of Technology and Design Education , 32 (3), 1441–1468. https://doi.org/10.1007/s10798-021-09670-w

Forbes, A., Falloon, G., Stevenson, M., Hatzigianni, M., & Bower, M. (2021). An analysis of the nature of young students’ STEM learning in 3D technology-enhanced makerspaces. Early Education and Development , 32 (1), 172–187. https://doi.org/10.1080/10409289.2020.1781325

Friend, L., & Mills, K. A. (2021). Towards a typology of touch in multisensory makerspaces. Learning Media and Technology , 46 (4), 465–482. https://doi.org/10.1080/17439884.2021.1928695

Giusti, T., & Bombieri, L. (2020). Learning inclusion through makerspace: A curriculum approach in Italy to share powerful ideas in a meaningful context. The International Journal of Information and Learning Technology , 37 (3), 73–86. https://doi.org/10.1108/IJILT-10-2019-0095

Greenberg, D., Calabrese Barton, A., Tan, E., & Archer, L. (2020). Redefining entrepreneurialism in the maker movement: A critical youth approach. Journal of the Learning Sciences , 29 (4–5), 471–510. https://doi.org/10.1080/10508406.2020.1749633

Hachey, A. C., An, S. A., & Golding, D. E. (2022). Nurturing kindergarteners’ early STEM academic identity through makerspace pedagogy. Early Childhood Education Journal , 50 (3), 469–479. https://doi.org/10.1007/s10643-021-01154-9

Haddaway, N. R., Page, M. J., Pritchard, C. C., & McGuinness, L. A. (2022). PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and open synthesis. Campbell Systematic Reviews , 18 (2). https://doi.org/10.1002/cl2.1230

Hagerman, M. S., Cotnam-Kappel, M., Turner, J. A., & Hughes, J. M. (2022). Literacies in the making: Exploring elementary students’ digital-physical meaning-making practices while crafting musical instruments from recycled materials. Technology Pedagogy and Education , 31 (1), 63–84. https://doi.org/10.1080/1475939X.2021.1997794

Hartikainen, H., Ventä-Olkkonen, L., Kinnula, M., & Iivari, N. (2023). We were proud of our idea: How teens and teachers gained value in an entrepreneurship and making project. International Journal of Child-Computer Interaction , 35 , 100552. https://doi.org/10.1016/j.ijcci.2022.100552

Herro, D., Quigley, C., & Abimbade, O. (2021a). Assessing elementary students’ collaborative problem-solving in makerspace activities. Information and Learning Sciences , 122 (11/12), 774–794. https://doi.org/10.1108/ILS-08-2020-0176

Herro, D., Quigley, C., Plank, H., & Abimbade, O. (2021b). Understanding students’ social interactions during making activities designed to promote computational thinking. The Journal of Educational Research , 114 (2), 183–195. https://doi.org/10.1080/00220671.2021.1884824

Hsu, P. S., Lee, E. M., & Smith, T. J. (2022). Exploring the influence of equity-oriented pedagogy on non-dominant youths’ attitudes toward science through making. RMLE Online , 45 (8), 1–16. https://doi.org/10.1080/19404476.2022.2116668

Iivari, N., Molin-Juustila, T., & Kinnula, M. (2016). The future digital innovators: Empowering the young generation with digital fabrication and making (completed research paper). Proceedings of the 37th International Conference on Information Systems (ICIS 2016) .

Impedovo, M., & Cederqvist, A. M. (2023). Socio-(im)material-making activities in minecraft: Retracing digital literacy applied to ESD. Research in Science & Technological Education , 1–21. https://doi.org/10.1080/02635143.2023.2245355

International Organization for Standardization (2020). ISO 3166-2:2020 - Codes for the representation of names of countries and their subdivisions — Part 2: Country subdivision code . https://www.iso.org/standard/72483.html

Iwata, M., Pitkänen, K., Laru, J., & Mäkitalo, K. (2020). Exploring potentials and challenges to develop twenty-first century skills and computational thinking in K-12 maker education. Frontiers in Education , 5 . https://doi.org/10.3389/feduc.2020.00087

Kafai, Y. B. (1996). Learning through artifacts: Communities of practice in classrooms. AI and Society , 10 (1), 89–100. https://doi.org/10.1007/BF02716758

Kafai, Y. B., & Peppler, K. A. (2011). Youth, technology, and DIY: Developing participatory competencies in creative media production. Review of Research in Education , 35 (1), 89–119. https://doi.org/10.3102/0091732X10383211

Kafai, Y., Fields, D. A., & Searle, K. (2014). Electronic textiles as disruptive designs: Supporting and challenging maker activities in schools. Harvard Educational Review , 84 (4), 532–556. https://doi.org/10.17763/haer.84.4.46m7372370214783

Kajamaa, A., & Kumpulainen, K. (2020). Students’ multimodal knowledge practices in a makerspace learning environment. International Journal of Computer-Supported Collaborative Learning , 15 (4), 411–444. https://doi.org/10.1007/s11412-020-09337-z

Kajamaa, A., Kumpulainen, K., & Olkinuora, H. (2020). Teacher interventions in students’ collaborative work in a technology-rich educational makerspace. British Journal of Educational Technology , 51 (2), 371–386. https://doi.org/10.1111/bjet.12837

Kangas, K., Seitamaa-Hakkarainen, P., & Hakkarainen, K. (2013). Figuring the world of designing: Expert participation in elementary classroom. International Journal of Technology and Design Education, 23 (2), 425–442. https://doi.org/10.1007/s10798-011-9187-z

Keune, A., Peppler, K., & Dahn, M. (2022). Connected portfolios: Open assessment practices for maker communities. Information and Learning Sciences , 123 (7/8), 462–481. https://doi.org/10.1108/ILS-03-2022-0029

Koh, J. H. L., Chai, C. S., Wong, B., & Hong, H. Y. (2015). Design thinking for education: Conceptions and applications in teaching and learning . Springer. https://doi.org/10.1007/978-981-287-444-3

Kumpulainen, K., & Kajamaa, A. (2020). Sociomaterial movements of students’ engagement in a school’s makerspace. British Journal of Educational Technology , 51 (4), 1292–1307. https://doi.org/10.1111/bjet.12932

Kumpulainen, K., Kajamaa, A., Leskinen, J., Byman, J., & Renlund, J. (2020). Mapping digital competence: Students’ maker literacies in a school’s makerspace. Frontiers in Education , 5 . https://doi.org/10.3389/feduc.2020.00069

Lee, V. R. (2021). Youth engagement during making: Using electrodermal activity data and first-person video to generate evidence-based conjectures. Information and Learning Sciences , 122 (3/4), 270–291. https://doi.org/10.1108/ILS-08-2020-0178

Leskinen, J., Kajamaa, A., & Kumpulainen, K. (2023). Learning to innovate: Students and teachers constructing collective innovation practices in a primary school’s makerspace. Frontiers in Education , 7 . https://doi.org/10.3389/feduc.2022.936724

Leskinen, J., Kumpulainen, K., Kajamaa, A., & Rajala, A. (2021). The emergence of leadership in students’ group interaction in a school-based makerspace. European Journal of Psychology of Education , 36 (4), 1033–1053. https://doi.org/10.1007/s10212-020-00509-x

Lindberg, L., Fields, D. A., & Kafai, Y. B. (2020). STEAM maker education: Conceal/reveal of personal, artistic and computational dimensions in high school student projects. Frontiers in Education , 5 . https://doi.org/10.3389/feduc.2020.00051

Lin, Q., Yin, Y., Tang, X., Hadad, R., & Zhai, X. (2020). Assessing learning in technology-rich maker activities: A systematic review of empirical research. Computers and Education , 157 . https://doi.org/10.1016/j.compedu.2020.103944

Liu, S., & Li, C. (2023). Promoting design thinking and creativity by making: A quasi-experiment in the information technology course. Thinking Skills and Creativity , 49 , 101335. https://doi.org/10.1016/j.tsc.2023.101335

Liu, W., Fu, Z., Zhu, Y., Li, Y., Sun, Y., Hong, X., Li, Y., & Liu, M. (2024). Co-making the future: Crafting tomorrow with insights and perspectives from the China-U.S. young maker competition. International Journal of Technology and Design Education . https://doi.org/10.1007/s10798-024-09887-5

Li, X. (2021). Young people’s information practices in library makerspaces. Journal of the Association for Information Science and Technology , 72 (6), 744–758. https://doi.org/10.1002/asi.24442


Open Access funding provided by University of Helsinki (including Helsinki University Central Hospital). This work has been funded by the Strategic Research Council (SRC) established within the Research Council of Finland, grants #312527, #352859, and #352971.


Author information

Authors and affiliations.

Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland

Sini Davies & Pirita Seitamaa-Hakkarainen


Corresponding author

Correspondence to Sini Davies .

Ethics declarations

Conflict of interest.

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Davies, S., Seitamaa-Hakkarainen, P. Research on K-12 maker education in the early 2020s – a systematic literature review. Int J Technol Des Educ (2024). https://doi.org/10.1007/s10798-024-09921-6


Accepted: 02 July 2024

Published: 27 August 2024

DOI: https://doi.org/10.1007/s10798-024-09921-6


  • Maker education
  • K-12 education
  • Systematic literature review
  • Maker-centered learning
  • Maker culture
  • Design and making


Cost-effectiveness and health economics for ureteral and kidney stone disease: a systematic review of literature.

To systematically review costs associated with endourological procedures (ureteroscopy, URS; shockwave lithotripsy, SWL; and percutaneous nephrolithotomy, PCNL) for kidney stone disease (KSD), providing an overview of cost-effectiveness and health economics strategies.

A systematic review of the literature was performed, retrieving 83 English-language full-text studies for inclusion. Papers were labelled according to their area of interest: 'costs of different procedures: SWL, URS, PCNL'; 'costs of endourological devices and new technologies: reusable and disposable scopes, lasers, other devices'; and 'costs of KSD treatment in the emergency setting: emergency stenting versus primary URS'. Forty-three papers reported on the costs associated with the different procedures, revealing URS to be the most cost-effective. PCNL follows with higher hospitalization costs, while SWL appears to be the least cost-effective owing to the frequent need for additional procedures. The role of disposable versus reusable scopes was investigated by 15 articles, while another 16 reported on different lasers, devices, and techniques. The remaining nine studies discussed the most cost-effective treatment for acute stone presentation, with promising results for primary URS versus emergency stenting and delayed URS.

Cost-effectiveness and cost-consciousness must be weighed alongside clinical efficacy for endourological procedures. When a choice of SWL, URS, or PCNL is offered to a patient, the expected outcomes must be balanced against a deeper understanding of the additional cost burden of retreatment, reimbursement, repeated interventions, and recurrence. In today's practice, investment in endourological devices for KSD management must carefully consider the direct and hidden costs of reusable and disposable technology. Cost-control measures should not in any way compromise the quality of life or safety of the patient.

Current Opinion in Urology. 2024 Aug 20 [Epub ahead of print]

Carlotta Nedbal, Pietro Tramanzoli, Daniele Castellani, Vineet Gauhar, Andrea Gregori, Bhaskar Somani

Urology Unit, ASST Fatebenefratelli Sacco, Milano; Urology Unit, Azienda Ospedaliero-Universitaria delle Marche, Polytechnic University of Le Marche, Ancona, Italy; Department of Urology, Ng Teng Fong General Hospital, NUHS, Singapore; Department of Urology, University Hospitals Southampton, NHS Trust, Southampton, UK.

PubMed http://www.ncbi.nlm.nih.gov/pubmed/39162117



Open Access

Peer-reviewed

Research Article

The impact of comprehensive geriatric assessment on postoperative outcomes in elderly surgery: A systematic review and meta-analysis

Roles: Conceptualization, Data curation, Project administration, Software, Writing – original draft

* E-mail: [email protected]

Affiliation: Anesthesia and Surgery Department, Chengdu Second People’s Hospital, Chengdu, Sichuan, China


Roles: Data curation, Formal analysis, Writing – review & editing

Affiliation: Department of Critical Care Medicine, First Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu, China

Roles: Data curation, Formal analysis, Software

Affiliation: Endocrinology and Metabolism Department, Changsha People’s Hospital, Changsha, Hunan, China

Roles: Project administration, Supervision, Validation

  • Lin Chen, 
  • Wei Zong, 
  • Manyue Luo, 

  • Published: August 28, 2024
  • https://doi.org/10.1371/journal.pone.0306308

Introduction

The elderly population experiences more postoperative complications. A comprehensive geriatric assessment, which is multidimensional and coordinated, could help reduce these unfavorable outcomes. However, its effectiveness is still uncertain.

We searched multiple online databases, including Medline, PubMed, Web of Science, Cochrane Library, Embase, CINAHL, ProQuest, and Wiley, for relevant literature from their inception to October 2023. We included randomized trials of individuals aged 65 and older undergoing surgery. These trials compared comprehensive geriatric assessment with usual surgical care and reported on postoperative outcomes. Two researchers independently screened the literature, extracted data, and assessed the certainty of evidence from the identified articles. We conducted a meta-analysis using RevMan 5.3 to calculate the Odds Ratio (OR) and Mean Difference (MD) of the pooled data.

The study included 1325 individuals from seven randomized trials. Comprehensive geriatric assessment reduced the rate of postoperative delirium (28.5% vs. 37.0%; OR: 0.63; 95% CI: 0.47–0.85; I²: 54%; P = 0.003) based on pooled data. However, it did not significantly improve other outcomes such as length of stay (MD: -0.36; 95% CI: -0.376, 3.05; I²: 96%; P = 0.84), readmission rate (18.6% vs. 15.4%; OR: 1.26; 95% CI: 0.86–1.84; I²: 0%; P = 0.24), and ADL function (MD: -0.24; 95% CI: -1.27, 0.19; I²: 0%; P = 0.64).

Conclusions

Apart from reducing delirium, it is still unclear whether comprehensive geriatric assessment improves other postoperative outcomes. More evidence from higher-quality randomized trials is needed.

Citation: Chen L, Zong W, Luo M, Yu H (2024) The impact of comprehensive geriatric assessment on postoperative outcomes in elderly surgery: A systematic review and meta-analysis. PLoS ONE 19(8): e0306308. https://doi.org/10.1371/journal.pone.0306308

Editor: Barry Kweh, National Trauma Research Institute, AUSTRALIA

Received: April 6, 2024; Accepted: June 15, 2024; Published: August 28, 2024

Copyright: © 2024 Chen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript and its Supporting Information files.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

The World Health Organization estimates that 1 billion people worldwide were 60 years of age or older in 2019. This figure is projected to increase to 1.4 billion by 2030 [ 1 ]. As the population ages, a growing number of elderly individuals will need surgical care [ 2 ]. However, most older individuals have many chronic illnesses that hinder their ability to heal and function [ 3 ]. Additionally, non-disease-related issues can complicate surgery and the healing process. These issues include frailty, multiple medications, degenerative organ changes, poor nutrition, and cognitive decline [ 4 – 6 ].

Elderly individuals face higher surgical mortality, longer hospital stays, more frequent in-hospital adverse events (such as delirium, pressure ulcers, urinary incontinence, and functional decline), and more readmissions after discharge compared to younger patients [ 7 – 10 ]. Therefore, it is essential for surgical practitioners to identify factors that could lead to unfavorable outcomes in senior patients through timely assessments [ 11 ]. However, the current standard evaluations have significant limitations. They cannot measure the body’s reserve and capacity for compensation because they mainly focus on specific organ systems or are highly subjective [ 12 ].

Comprehensive geriatric assessment (CGA) involves a multidisciplinary team of geriatric physicians, nurses, anesthesiologists, surgeons, physiotherapists, occupational therapists, and nutritionists [ 13 ]. This approach uses a multidimensional perspective to evaluate the physical condition, functional status, mental well-being, and social environment of older adults. Based on this evaluation, a comprehensive and coordinated plan is created to improve the quality of life for elderly patients [ 14 , 15 ].

It is logical that older individuals undergoing elective surgery may benefit from a CGA. This study aimed to compare the impact of CGA with conventional treatment on unfavorable outcomes in older patients having elective surgery. A methodical search and meta-analysis of relevant literature were conducted for this investigation.

The study was registered in the International Prospective Register of Systematic Reviews (PROSPERO) (registration number CRD42023478608) and conducted in compliance with the PRISMA standards [ 16 ] for systematic reviews and meta-analyses.

Search strategy

The databases searched included Medline, PubMed, Web of Science, Cochrane Library, Embase, CINAHL, ProQuest, and Wiley Online Library. Both published and unpublished publications were included. A combination of free text and Medical Subject Headings (MeSH) was used to find relevant material, with modifications made for specific databases. The search criteria were based on PICOS and included terms such as Elderly, Comprehensive geriatric evaluation, Postoperative, and Study types. The search period was up to October 2023. Detailed electronic search strategies are provided in the supplemental material.

Literature inclusion criteria

(i) Patients over 60 years old undergoing elective non-cardiac high-risk surgery served as research participants; (ii) The study employed a randomized controlled trial design. The control group received standard care (standard preoperative evaluation), while the intervention group used CGA as a fundamental component of geriatric care; (iii) The study reported at least one of the following postoperative outcomes: length of stay, 30-day readmission rate, 30-day mortality, and any other postoperative complications (such as delirium, pressure ulcers, urinary incontinence, and functional decline). Exclusion criteria: (i) Studies that used CGA solely as a risk assessment tool for postoperative unfavorable outcomes; (ii) Studies that did not use a comprehensive multidomain evaluation and optimization plan, but only evaluated one CGA field, such as nutritional status; (iii) Non-English publications; (iv) Studies focused on outpatient or emergency surgery or non-elderly populations; (v) Study registration protocols.

Data extraction

Two researchers independently screened the literature based on the title, abstract, and full text. They retrieved data and evaluated the risk of bias. All gathered information was documented uniformly according to the content of the literature. In cases of disagreement, the two researchers consulted a third researcher or discussed the issue together to decide whether to include the content.

Quality assessment

The Cochrane Handbook [ 17 ] was used to evaluate the quality of the listed randomized controlled trials, focusing on: (i) Randomization sequence generation; (ii) Allocation concealment; (iii) Blinding of outcome assessment; (iv) Selection reporting bias; (v) Completeness of outcome data; (vi) Other sources of bias.

The research group decided to exclude blinding of researchers and participants from the quality assessment, as it was deemed impractical during the CGA intervention. The risk categories were rated as 'insufficient', 'sufficient', or 'don't know'. Studies that did not meet the quality standards were excluded.

Data analysis

RevMan 5.3 was used for the meta-analysis. A weighted odds ratio (OR) was used for count data, and a weighted mean difference (WMD) for measurement data if the same instruments were used; otherwise, a standardized mean difference (SMD) was used. All analyses were performed with 95% confidence intervals (CI). Heterogeneity between studies was assessed using the I² statistic and the chi-square test.

A fixed-effects model was used if there was no statistical heterogeneity (P > 0.1, I² < 50%). A random-effects model was used if there was statistical heterogeneity but no clinical heterogeneity (P < 0.1, I² ≥ 50%). If I² was inconsistent with P (P < 0.1, I² < 50% or P > 0.1, I² ≥ 50%), the model was chosen with I² as the reference. If the cause of heterogeneity could not be identified, a descriptive analysis was employed.
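The model-selection rule described above can be sketched numerically. The 2x2 counts below are invented for illustration (they are not data from the included trials); the sketch pools log odds ratios by inverse variance, computes Cochran's Q and I², estimates the DerSimonian-Laird tau² for the random-effects model, and applies the I² ≥ 50% threshold.

```python
import math

# Hypothetical per-study 2x2 counts: (events_cga, n_cga, events_ctrl, n_ctrl).
# These numbers are illustrative only, not data from the included trials.
studies = [
    (20, 90, 30, 95),
    (15, 60, 22, 65),
    (40, 160, 55, 170),
]

# Per-study log odds ratios and variances (Woolf's method).
log_ors, variances = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_ors.append(math.log((a * d) / (b * c)))
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)

# Fixed-effect pooled log OR (inverse-variance weights).
w = [1 / v for v in variances]
pooled_fe = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)

# Cochran's Q and I^2, the share of variability beyond chance.
q = sum(wi * (y - pooled_fe) ** 2 for wi, y in zip(w, log_ors))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird tau^2 and the random-effects pooled estimate.
c_term = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_term)
w_re = [1 / (v + tau2) for v in variances]
pooled_re = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)

# The review's rule: random effects when I^2 >= 50%, fixed effects otherwise.
pooled = pooled_re if i2 >= 50 else pooled_fe
print(f"I^2 = {i2:.1f}%, pooled OR = {math.exp(pooled):.2f}")
```

Note that RevMan's default Mantel-Haenszel weighting for dichotomous outcomes would give slightly different numbers; the inverse-variance version above is simply the easiest to compute by hand.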

Egger’s test was used to evaluate publication bias. The impact of each study on heterogeneity was investigated by removing studies one at a time and recalculating heterogeneity.
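The leave-one-out procedure in the last sentence can be illustrated directly: drop each study in turn and recompute I² to see which one drives the heterogeneity. The effect sizes and variances below are made up, and the fourth "study" is deliberately an outlier so the effect of removing it is visible.

```python
# Illustrative log odds ratios and variances for four hypothetical studies;
# the fourth is a deliberate outlier, not data from the review.
log_ors = [-0.48, -0.43, -0.36, 0.60]
variances = [0.11, 0.16, 0.06, 0.04]

def i_squared(ys, vs):
    """I^2 (%) from Cochran's Q for inverse-variance weighted effects."""
    w = [1 / v for v in vs]
    pooled = sum(wi * y for wi, y in zip(w, ys)) / sum(w)
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, ys))
    df = len(ys) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

full = i_squared(log_ors, variances)
for i in range(len(log_ors)):
    # Recompute heterogeneity with study i removed.
    loo = i_squared(log_ors[:i] + log_ors[i + 1:],
                    variances[:i] + variances[i + 1:])
    print(f"without study {i + 1}: I^2 {full:.0f}% -> {loo:.0f}%")
```

Here dropping the outlying fourth study collapses I² from roughly 79% to 0%, which is exactly the pattern a sensitivity analysis is looking for.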

Search results

A total of 6249 items were found. After removing 1152 duplicates using EndNote, titles and abstracts were screened; reviews, clinical record comparisons, case reports, and other literature that did not meet the inclusion criteria were removed, leaving 24 articles for initial inclusion. After a closer examination of the full text, one low-quality article and sixteen that did not meet the inclusion criteria were excluded. Ultimately, 7 publications [ 18 – 24 ] were included in the meta-analysis ( Fig 1 ).

(Fig 1: https://doi.org/10.1371/journal.pone.0306308.g001)

Inter-rater reliability was assessed using Cohen’s kappa when selecting studies. The findings showed satisfactory agreement between the evaluators (Kappa = 0.76; 95% CI: 0.56, 0.96; p < 0.001). An overview of patient characteristics from the seven trials is compiled in Table 1 , and the quality evaluation results are shown in Table 2 .
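Cohen's kappa for two screeners can be computed from a simple 2x2 agreement table. The counts below are hypothetical, chosen only to illustrate the calculation; they are not the review's actual screening tallies.

```python
# Hypothetical 2x2 screening-agreement table for two reviewers; the counts
# are illustrative, not the review's actual screening data.
both_include, both_exclude = 18, 160   # reviewers agree
only_a, only_b = 6, 8                  # one includes, the other excludes

n = both_include + both_exclude + only_a + only_b
observed = (both_include + both_exclude) / n

# Chance agreement from each reviewer's marginal inclusion rate.
a_inc = (both_include + only_a) / n
b_inc = (both_include + only_b) / n
expected = a_inc * b_inc + (1 - a_inc) * (1 - b_inc)

# Cohen's kappa: agreement beyond chance, scaled by its maximum.
kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")
```

With heavily imbalanced include/exclude decisions, as in study screening, raw percentage agreement can look high while kappa stays modest, which is why kappa is the conventional statistic here.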

(Table 1: https://doi.org/10.1371/journal.pone.0306308.t001; Table 2: https://doi.org/10.1371/journal.pone.0306308.t002)

Study characteristics

Seven studies, published between 2012 and 2022, comprise this qualitative analysis. The studies were conducted in the Netherlands, China, Sweden, Norway, and the United Kingdom. The average age of the participants was between 75.5 and 85 years. The surgeries studied included hip fracture (three studies), solid tumors (two studies), and colorectal cancer and vascular surgery. Sample sizes ranged from 64 to 329 participants.

The outcome variables examined included cognitive function, delirium, length of stay, mortality, mobility, activities of daily living (ADL), weight changes, readmissions, new comorbid diagnoses, reoperation within 30 days, survival at 30 days and 3 months, care dependency, quality of life, return to an independent preoperative living situation, operation rate within 48 hours, preoperative waiting time, and postoperative complications such as pneumonia and wound infections ( Table 1 ).

Effect sizes

Postoperative delirium prevalence was studied in five trials [ 18 , 19 , 21 – 23 ] involving 473 elderly patients in the intervention group and 511 elderly patients in the control group. The analysis found that the rate of delirium was significantly lower in the CGA group compared to the control group (28.5% vs. 37.0%; OR: 0.63; CI: 0.47–0.85; I²: 54%; P = 0.003) ( Fig 2 ).
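As a rough sanity check on the pooled delirium result, one can back out approximate event counts from the reported aggregate rates and form a crude (unweighted) odds ratio. This is not the review's method: the reported OR of 0.63 comes from weighting within each trial, so the crude figure should land near, but not exactly on, that value.

```python
# Back out approximate event counts from the reported aggregate rates
# (28.5% of 473 intervention patients, 37.0% of 511 controls) and form a
# crude odds ratio. Illustrative sanity check only.
events_cga = round(0.285 * 473)        # ~135 delirium cases (intervention)
events_ctl = round(0.370 * 511)        # ~189 delirium cases (control)

odds_cga = events_cga / (473 - events_cga)
odds_ctl = events_ctl / (511 - events_ctl)
crude_or = odds_cga / odds_ctl
print(f"crude OR = {crude_or:.2f}")    # near, but not equal to, 0.63
```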

(Fig 2: https://doi.org/10.1371/journal.pone.0306308.g002)

Length of stay.

Six trials examined the impact of CGA on the length of hospital stay following surgery. However, Ommundsen et al.’s study [ 20 ] was excluded because it only provided the median and lacked additional data, making meta-analysis impossible. Ultimately, five studies [ 18 , 19 , 21 – 23 ] with 473 patients in the intervention group and 511 patients in the control group were included. The findings indicated that although the intervention group’s average hospital stay was shorter than the control group’s, the difference was not statistically significant (mean difference: -0.36; 95% CI: -0.376, 3.05; I²: 96%; p = 0.84). A sensitivity analysis suggested that the pooled estimate failed to reach significance owing to excessive heterogeneity ( Fig 3 ).

(Fig 3: https://doi.org/10.1371/journal.pone.0306308.g003)

Re-admissions.

Four studies [ 18 – 20 , 24 ] reported the rate of readmissions within 30 days following surgery, involving 396 patients in the control group and 365 in the intervention group. The short-term readmission rate was slightly higher in the intervention group (18.6%) than in the control group (15.4%), but the pooled data showed no statistically significant difference between the two groups (OR: 1.26; CI: 0.86–1.84; I²: 0%; P = 0.24) ( Fig 4 ).

(Fig 4: https://doi.org/10.1371/journal.pone.0306308.g004)

Activities of daily living (ADL) functioning.

Four studies provided data on ADL function, all using the Nursing Dependency Scale [ 25 ] for assessment. A meta-analysis was conducted on three trials [ 18 , 22 , 23 ], including 215 patients in the intervention group and 236 in the control group, since Hempenius et al.’s study [ 24 ] only provided baseline ADL scores. The analysis did not reveal any significant differences in ADL between the two groups (mean difference: -0.24; 95% CI: -1.27, 0.19; I²: 0%; p = 0.64) ( Fig 5 ).

(Fig 5: https://doi.org/10.1371/journal.pone.0306308.g005)

Other postoperative outcomes.

Although not statistically significant, other outcome measures discussed included quality of life [ 21 , 24 ], care reliance [ 21 , 24 ], mortality [ 18 , 21 , 24 ], mobility [ 18 ], reoperation within 30 days [ 20 ], survival at 30 days and 3 months [ 20 ], and cognitive function [ 18 , 21 , 24 ]. Significant improvements were observed in the operation rate within 48 hours [ 22 ] (p < 0.001), preoperative waiting time [ 22 ] (p < 0.001), wound infection [ 19 ] (p = 0.032), cardiac complications [ 19 ] (p = 0.001), bowel and bladder problems [ 19 ] (p = 0.003), and tract infections [ 23 ] (p = 0.001). However, the intervention group experienced a significantly greater number of new comorbid diagnoses [ 19 ], including cognitive impairment, chronic renal illness stage 3 or above, and chronic obstructive pulmonary disease (p < 0.001). Additionally, two studies [ 21 , 24 ] on postoperative recovery were conducted; one [ 21 ] found that a higher proportion of patients in the control group reverted to their preoperative way of life (OR: 1.84; 95% CI: 1.01–3.37).

Publication bias.

Egger’s test was used to assess publication bias in delirium studies, and the results suggest that publication bias may be a cause for concern (z = 2.85, p = 0.004).

1. This article reviewed randomized controlled studies on the impact of Comprehensive Geriatric Assessment (CGA) on the postoperative prognosis of elderly patients undergoing elective surgery. Seven studies met the inclusion criteria. They all optimized perioperative care by managing cognitive impairment, frailty, and comorbidities. This was achieved through preoperative assessments and targeted interventions, such as medication reviews, nutritional support, and consultations. These studies demonstrate the importance of CGA as a preventive measure to improve postoperative outcomes.

Unfortunately, among the many outcome indicators considered, only the incidence of postoperative delirium showed that the intervention was effective, which differed from the expected conclusion. This review conducted a meta-analysis only on postoperative delirium, length of stay, ADL function, and readmission rate because few studies reported additional indicators, limiting the pooled results.

2. A more comprehensive assessment of perioperative risk may have led more patients in the intervention group to choose conservative treatment, which could limit the effectiveness of CGA. For example, two studies [ 18 , 21 ] reported no significant differences in length of stay or care dependence, but found that more intervention patients were transferred to nursing homes for better rehabilitation care. The lack of follow-up observations may result in an inaccurate estimate of the intervention’s effect.

Because CGA is time-consuming and relies on specialist geriatric knowledge, patient risk factors may be inadequately recognized, especially in patients who are cognitively impaired, take multiple medications, have multiple chronic conditions, or lack social support. Additionally, without a longitudinal relationship between the evaluator and the patient, undiagnosed cognitive impairments or psychological problems may go unaddressed. These issues can reduce the accuracy of assessing intervention benefits for outcomes influenced by multiple factors, such as readmission rates and postoperative complications.

3. Our review of the literature found that most studies focused only on the assessment component of CGA and did not include specific care models or management plans targeting modifiable risk factors to improve postoperative outcomes. These studies therefore did not deliver a complete CGA.

More than half of the interventions in this review were multidisciplinary care pathways [ 18 , 20 , 22 , 23 ]. The other two were the Proactive Care of Older People Having Surgery (POPS) [ 19 ] and the Liaison Intervention in Frail Elderly (LIFE) pathways [ 21 , 24 ]. Additional reported models included the Perioperative Optimization of Senior Health (POSH) [ 26 ], the Hospital Elder Life Program (HELP) pathway [ 27 ], and the Person-Centered Care (PCC) pathway [ 28 ]. However, no relevant randomized controlled trials were identified.

4. In surgical care for elderly patients, many preoperative risk stratification methods are similar to CGA. All aim to evaluate surgical risk in older adults so that clinicians can tailor preventive measures more effectively ( Table 3 ). Common methods include the Charlson Comorbidity Index (CCI) [ 29 ], the Modified Frailty Index (MFI) [ 30 ], the ASA Physical Status Classification [ 31 ], the Surgical Apgar Score (SAS) [ 32 ], and the Elixhauser Comorbidity Index (ECI) [ 33 ].


https://doi.org/10.1371/journal.pone.0306308.t003

However, unlike CGA, these methods focus on clinical and physiological parameters that directly affect surgical risk, helping to decide whether patients should proceed with surgery and what level of care they need afterward. They provide faster evaluations but may miss subtle differences that a complete CGA can capture. CGA has a broader scope and is more widely used to assess the overall health status of patients. These aspects may not directly affect surgery but are crucial for patients’ overall health and postoperative recovery.
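The contrast in scope is easy to see in code: an index such as the CCI reduces to a weighted checklist computable in a few lines, whereas CGA requires structured multidomain assessment. Below is a minimal sketch using the classic 1987 Charlson weights for a subset of conditions (condition list abbreviated for illustration; verify weights against the original index before any clinical use):

```python
# Classic (1987) Charlson weights for a subset of conditions; illustrative only.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "copd": 1,
    "diabetes": 1,
    "dementia": 1,
    "hemiplegia": 2,
    "moderate_severe_renal_disease": 2,
    "any_malignancy": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
}

def charlson_score(conditions, age=None):
    """Sum the weights of the comorbidities present.

    The age-adjusted variant adds one point per decade from age 50
    (+1 for 50-59, +2 for 60-69, ...), capped here at +4.
    """
    score = sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)
    if age is not None and age >= 50:
        score += min((age - 40) // 10, 4)
    return score

print(charlson_score({"copd", "diabetes", "moderate_severe_renal_disease"}, age=78))
```

A full CGA, by contrast, cannot be collapsed into a lookup table: it spans cognition, function, nutrition, polypharmacy, and social support, which is exactly why it captures differences these indices miss.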

5. Bias is inevitable. First, the potential for bias increased because the included studies were single-center and could not blind investigators and participants during the CGA intervention, and the same researchers were often involved with both the intervention and control groups. Second, a time gap between pre- and post-operative evaluations could introduce extraneous variables [ 34 ]. Some studies addressed this by shortening the observation period, which led researchers to favor outcomes that can be assessed quickly. This, however, may produce publication bias: for example, studies of immediate postoperative delirium may be published earlier than those of long-term cognitive effects [ 35 ]. Insufficient follow-up time can also produce false-negative results.

Third, this review is limited to English-language research, which may introduce bias. Despite following guidelines for meta-analyses of epidemiological observational studies, there remains some subjectivity in the consensus protocol for assessing the quality of included studies.

6. Even so, the value of preoperative CGA and collaborative geriatric management in helping medical professionals address the complexities of elderly care is undeniable [ 36 – 39 ]. However, evaluating preoperative CGA is challenging, as current evidence is inconclusive; definitive trials are needed that comply with the CONSORT guidelines [ 40 ] and use rigorous randomized controlled designs and outcome measures.

The research should fully implement CGA interventions, including optimization and assessment, select a specific strategy model, and standardize the evaluation process and instruments. Additionally, researchers should identify the specific elements needing intervention and the subtle observational variables affected by CGA. It is crucial to find suitable individuals who are likely to benefit from CGA.

Current evidence suggests that few studies have thoroughly applied CGA to surgical patients. It is unclear how CGA affects postoperative outcomes, other than delirium, in older patients undergoing elective non-cardiac surgery. More research through higher-quality randomized controlled trials is needed.

Supporting information

S1 File. PRISMA checklist.

https://doi.org/10.1371/journal.pone.0306308.s001

S2 File. Detailed search criteria in PubMed.

https://doi.org/10.1371/journal.pone.0306308.s002



Methods for Overcoming Chemoresistance in Head and Neck Squamous Cell Carcinoma: Keeping the Focus on Cancer Stem Cells, a Systematic Review


3.1. Natural Products

Refs | Mechanism of Action on CSCs | Effect on Conventional Chemotherapy
[ , ] | Fungal immunomodulatory proteins block IL-6/Stat3 signaling. | Decreased resistance to CDDP.
[ , ] | Stimulation of the caspase-dependent apoptotic pathway, inhibition of the Sonic Hedgehog (SHH) pathway, and decreased expression of SOX2 and OCT4. | Enhances the effectiveness of CDDP and 5-fluorouracil (5-FU) chemotherapy.
[ , , , ] | Lowers CD44 expression and ALDH1 activity in CSCs; suppresses the phosphorylation of STAT3, possibly through the decrease of IL-6, a growth factor that stimulates STAT3. | Increases effects of CDDP; additive effect with 5-FU.
[ ] | Reduces miR-21 expression in a dose-dependent manner; decreases CSC marker expression. | Increases effects of CDDP.
[ , ] | Downregulates GRP78 and decreases CSC marker expression; decreases ABCG2 expression. | Increases effects of CDDP.
[ ] | Downregulates HSP27; disrupts the p38 MAPK-Hsp27 axis. | Increases effects of CDDP.
[ ] | Dysregulates the JAK2/STAT3 signaling pathway. | Increases effects of CDDP.
[ ] | Spindle Pole body Component 25 (SPC25) inhibitor. | Increases effects of CDDP.
[ ] | Decreases the expression of stemness markers and reverses the CD44+ and side population (SP) cell ratios by inhibiting RXRα. | Increases effects of CDDP.
[ , , ] | Suppresses mitochondrial respiration at complex I of the electron transport chain; inhibits the IL-6/Stat3 pathway. | Increases effects of CDDP.
[ , ] | Upregulates miR-181c-5p expression, which decreases DERL-1 expression. | Enhances cisplatin-induced cell death in mesenchymal-like CD44 cells.

3.2. Adjuvant Molecules to Traditional Chemotherapy

Refs | Mechanism of Action on CSCs | Effect on Conventional Chemotherapy
[ ] | Inhibitor of NRF2; reduces CD44 expression. | Increases effects of CDDP.
[ ] | Reduces CD44 expression. | Increases effects of CDDP.
[ , , ] | Pharmaceutical inhibitor of HDAC6. | Increases effects of CDDP.
[ ] | Reduces the expression of NF-κB p50 and p65 subunits. | Potential for increased effects of CDDP.
[ , , ] | Inhibits the WNT pathway in vitro and lowers tumorigenicity in vivo. | Unknown
[ , ] | Downregulates the expression of β-catenin. | Unknown
[ , , ] | Pan-HDAC inhibitor; reduces Nanog expression in HPV-positive and -negative HNSCC. | Increases effects of CDDP.
[ ] | WNT antagonist. | Unknown
[ , , , ] | Synthetic analog of curcumin. | Unknown
[ , , , ] | Inhibits HDACs. | Increases effects of CDDP.
[ ] | Humanized anti-IL-6R monoclonal antibody; inhibits Bmi-1 function. | Increases effects of CDDP.
[ , ] | PI3K/mTOR signaling inhibitors; decreases Bmi-1 expression. | Increases effects of CDDP.
[ ] | Decreases Bmi-1 expression. | Increases effects of CDDP.
3.3. CSCs Targeting from Patient’s Fresh Biopsies for Functional Precision Therapy


  • Williams, S.A.; Anderson, W.C.; Santaguida, M.T.; Dylla, S.J. Patient-derived xenografts, the cancer stem cell paradigm, and cancer pathobiology in the 21st century. Lab. Investig. 2013 , 93 , 970–982. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Reya, T.; Morrison, S.J.; Clarke, M.F.; Weissman, I.L. Stem cells, cancer, and cancer stem cells. Nature 2001 , 414 , 105–111. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Rastogi, P. Emergence of cancer stem cells in head and neck squamous cell carcinoma: A therapeutic insight with literature review. Dent. Res. J. 2012 , 9 , 239–244. [ Google Scholar ]
  • O’Brien, C.A.; Kreso, A.; Jamieson, C.H. Cancer stem cells and self-renewal. Clin. Cancer Res. 2010 , 16 , 3113–3120. [ Google Scholar ] [ CrossRef ]
  • Singh, P.; Augustine, D.; Rao, R.S.; Patil, S.; Awan, K.H.; Sowmya, S.V.; Haragannavar, V.C.; Prasad, K. Role of cancer stem cells in head-and-neck squamous cell carcinoma-A systematic review. J. Carcinog. 2021 , 20 , 12. [ Google Scholar ] [ CrossRef ]
  • Guo, K.; Xiao, W.; Chen, X.; Zhao, Z.; Lin, Y.; Chen, G. Epidemiological Trends of Head and Neck Cancer: A Population-Based Study. Biomed. Res. Int. 2021 , 2021 , 1738932. [ Google Scholar ] [ CrossRef ]
  • Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2019. CA Cancer J Clin 2019 , 69 , 7–34. [ Google Scholar ] [ CrossRef ]
  • Hashim, D.; Sartori, S.; La Vecchia, C.; Serraino, D.; Maso, L.D.; Negri, E.; Smith, E.; Levi, F.; Boccia, S.; Cadoni, G.; et al. Hormone factors play a favorable role in female head and neck cancer risk. Cancer Med. 2017 , 6 , 1998–2007. [ Google Scholar ] [ CrossRef ]
  • Wang, S.; Liu, Y.; Feng, Y.; Zhang, J.; Swinnen, J.; Li, Y.; Ni, Y. A Review on Curability of Cancers: More Efforts for Novel Therapeutic Options Are Needed. Cancers 2019 , 11 , 1782. [ Google Scholar ] [ CrossRef ]
  • Dorna, D.; Paluszczak, J. Targeting cancer stem cells as a strategy for reducing chemotherapy resistance in head and neck cancers. J. Cancer Res. Clin. Oncol. 2023 , 149 , 13417–13435. [ Google Scholar ] [ CrossRef ]
  • Ciardiello, F.; Tortora, G. A novel approach in the treatment of cancer: Targeting the epidermal growth factor receptor. Clin. Cancer Res. 2001 , 7 , 2958–2970. [ Google Scholar ] [ PubMed ]
  • Kumar, H.A.; Desai, A.; Mohiddin, G.; Mishra, P.; Bhattacharyya, A.; Nishat, R. Cancer Stem Cells in Head and Neck Squamous Cell Carcinoma. J. Pharm. Bioallied Sci. 2023 , 15 , S826–S830. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • de Camargo, M.R.; Frazon, T.F.; Inacio, K.K.; Smiderle, F.R.; Amôr, N.G.; Dionísio, T.J.; Santos, C.F.; Rodini, C.O.; Lara, V.S. Ganoderma lucidum polysaccharides inhibit in vitro tumorigenesis, cancer stem cell properties and epithelial-mesenchymal transition in oral squamous cell carcinoma. J. Ethnopharmacol. 2022 , 286 , 114891. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Sola, A.M.; Johnson, D.E.; Grandis, J.R. Investigational multitargeted kinase inhibitors in development for head and neck neoplasms. Expert. Opin. Investig. Drugs 2019 , 28 , 351–363. [ Google Scholar ] [ CrossRef ]
  • Pattabiraman, D.R.; Weinberg, R.A. Tackling the cancer stem cells—What challenges do they pose? Nat. Rev. Drug Discov. 2014 , 13 , 497–512. [ Google Scholar ] [ CrossRef ]
  • Loaiza, B.; Rojas, E.; Valverde, M. The New Model of Carcinogenesis: The Cancer Stem Cell Hypothesis ; Pesheva, M., Dimitrov, M., Stoycheva, T.S., Eds.; InTech: London, UK, 2012; Volume 1. [ Google Scholar ]
  • Aimola, P.; Desiderio, V.; Graziano, A.; Claudio, P.P. Stem cells in cancer therapy: From their role in pathogenesis to their use as therapeutic agents. Drug News Perspect. 2010 , 23 , 175–183. [ Google Scholar ] [ CrossRef ]
  • McCullough, M.J.; Prasad, G.; Farah, C.S. Oral mucosal malignancy and potentially malignant lesions: An update on the epidemiology, risk factors, diagnosis and management. Aust. Dent. J. 2010 , 55 (Suppl. S1), 61–65. [ Google Scholar ] [ CrossRef ]
  • Yin, W.; Wang, J.; Jiang, L.; James Kang, Y. Cancer and stem cells. Exp. Biol. Med. 2021 , 246 , 1791–1801. [ Google Scholar ] [ CrossRef ]
  • Yang, L.; Shi, P.; Zhao, G.; Xu, J.; Peng, W.; Zhang, J.; Zhang, G.; Wang, X.; Dong, Z.; Chen, F.; et al. Targeting cancer stem cell pathways for cancer therapy. Signal Transduct. Target. Ther. 2020 , 5 , 8. [ Google Scholar ] [ CrossRef ]
  • Walcher, L.; Kistenmacher, A.K.; Suo, H.; Kitte, R.; Dluczek, S.; Strauss, A.; Blaudszun, A.R.; Yevsa, T.; Fricke, S.; Kossatz-Boehlert, U. Cancer Stem Cells-Origins and Biomarkers: Perspectives for Targeted Personalized Therapies. Front. Immunol. 2020 , 11 , 1280. [ Google Scholar ] [ CrossRef ]
  • Cirillo, N.; Wu, C.; Prime, S.S. Heterogeneity of cancer stem cells in tumorigenesis, metastasis, and resistance to antineoplastic treatment of head and neck tumours. Cells 2021 , 10 , 3068. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Zhou, P.; Li, B.; Liu, F.; Zhang, M.; Wang, Q.; Liu, Y.; Yao, Y.; Li, D. The epithelial to mesenchymal transition (EMT) and cancer stem cells: Implication for treatment resistance in pancreatic cancer. Mol. Cancer 2017 , 16 , 52. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Attramadal, C.G.; Kumar, S.; Boysen, M.E.; Dhakal, H.P.; Nesland, J.M.; Bryne, M. Tumor Budding, EMT and Cancer Stem Cells in T1-2/N0 Oral Squamous Cell Carcinomas. Anticancer. Res. 2015 , 35 , 6111–6120. [ Google Scholar ]
  • Biddle, A.; Mackenzie, I.C. Cancer stem cells and EMT in carcinoma. Cancer Metastasis Rev. 2012 , 31 , 285–293. [ Google Scholar ] [ CrossRef ]
  • Sahoo, S.; Ashraf, B.; Duddu, A.S.; Biddle, A.; Jolly, M.K. Interconnected high-dimensional landscapes of epithelial-mesenchymal plasticity and stemness in cancer. Clin. Exp. Metastasis 2022 , 39 , 279–290. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Cortese, A.; Pantaleo, G.; Amato, M.; Lawrence, L.; Mayes, V.; Brown, L.; Sarno, M.R.; Valluri, J.; Claudio, P.P. A new complementary procedure for patients affected by head and neck cancer: Chemo-predictive assay. Int. J. Surg. Case Rep. 2016 , 26 , 42–46. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Mathis, S.E.; Alberico, A.; Nande, R.; Neto, W.; Lawrence, L.; McCallister, D.R.; Denvir, J.; Kimmey, G.A.; Mogul, M.; Oakley, G., 3rd; et al. Chemo-predictive assay for targeting cancer stem-like cells in patients affected by brain tumors. PLoS ONE 2014 , 9 , e105710. [ Google Scholar ] [ CrossRef ]
  • Howard, C.M.; Valluri, J.; Alberico, A.; Julien, T.; Mazagri, R.; Marsh, R.; Alastair, H.; Cortese, A.; Griswold, M.; Wang, W.; et al. Analysis of Chemopredictive Assay for Targeting Cancer Stem Cells in Glioblastoma Patients. Transl. Oncol. 2017 , 10 , 241–254. [ Google Scholar ] [ CrossRef ]
  • Howard, C.M.; Zgheib, N.B.; Bush, S., 2nd; DeEulis, T.; Cortese, A.; Mollo, A.; Lirette, S.T.; Denning, K.; Valluri, J.; Claudio, P.P. Clinical relevance of cancer stem cell chemotherapeutic assay for recurrent ovarian cancer. Transl. Oncol. 2020 , 13 , 100860. [ Google Scholar ] [ CrossRef ]
  • Ranjan, T.; Howard, C.M.; Yu, A.; Xu, L.; Aziz, K.; Jho, D.; Leonardo, J.; Hameed, M.A.; Karlovits, S.M.; Wegner, R.E.; et al. Cancer Stem Cell Chemotherapeutics Assay for Prospective Treatment of Recurrent Glioblastoma and Progressive Anaplastic Glioma: A Single-Institution Case Series. Transl. Oncol. 2020 , 13 , 100755. [ Google Scholar ] [ CrossRef ]
  • Ranjan, T.; Sengupta, S.; Glantz, M.J.; Green, R.M.; Yu, A.; Aregawi, D.; Chaudhary, R.; Chen, R.; Zuccarello, M.; Lu-Emerson, C.; et al. Cancer stem cell assay-guided chemotherapy improves survival of patients with recurrent glioblastoma in a randomized trial. Cell Rep. Med. 2023 , 4 , 101025. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ranjan, T.; Yu, A.; Elhamdani, S.; Howard, C.M.; Lirette, S.T.; Denning, K.L.; Valluri, J.; Claudio, P.P. Treatment of unmethylated MGMT-promoter recurrent glioblastoma with cancer stem cell assay-guided chemotherapy and the impact on patients’ healthcare costs. Neurooncol Adv. 2023 , 5 , vdad055. [ Google Scholar ] [ CrossRef ]
  • Spirito, F.; Claudio, P.P.; Howard, C.M.; Valluri, J.; Denning, K.L.; Muzio, L.L.; Cortese, A. A New and Effective Procedure for Advanced Oral Cancer Therapy: The Potential of a Cancer Stem Cell Assay in Guiding Chemotherapy. Transl. Med. UniSa 2023 , 25 , 16–27. [ Google Scholar ] [ CrossRef ]
  • Howard, C.M.; Bush, S., 2nd; Zgheib, N.B.; Lirette, S.T.; Cortese, A.; Mollo, A.; Valluri, J.; Claudio, P.P. Cancer Stem Cell Assay for the Treatment of Platinum-Resistant Recurrent Ovarian Cancer. HSOA J. Stem Cells Res. Dev. Ther. 2021 , 7 , 076. [ Google Scholar ] [ CrossRef ]
  • Ezzat, Y.E.; Sharka, R.M.; Huzaimi, A.A.; Al-Zahrani, K.M.; Abed, H.H. The role of exercise therapy in managing post-radiotherapy trismus in head and neck cancer. J. Taibah Univ. Med. Sci. 2021 , 16 , 127–133. [ Google Scholar ] [ CrossRef ]
  • Bharadwaj, R.; Sahu, B.P.; Haloi, J.; Laloo, D.; Barooah, P.; Keppen, C.; Deka, M.; Medhi, S. Combinatorial therapeutic approach for treatment of oral squamous cell carcinoma. Artif. Cells Nanomed. Biotechnol. 2019 , 47 , 572–585. [ Google Scholar ] [ CrossRef ]
  • Wong, T.; Wiesenfeld, D. Oral Cancer. Aust. Dent. J. 2018 , 63 (Suppl. S1), S91–S99. [ Google Scholar ] [ CrossRef ]
  • Dehghani Nazhvani, A.; Sarafraz, N.; Askari, F.; Heidari, F.; Razmkhah, M. Anti-Cancer Effects of Traditional Medicinal Herbs on Oral Squamous Cell Carcinoma. Asian Pac. J. Cancer Prev. 2020 , 21 , 479–484. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Deng, Y.; Ma, J.; Tang, D.; Zhang, Q. Dynamic biomarkers indicate the immunological benefits provided by Ganoderma spore powder in post-operative breast and lung cancer patients. Clin. Transl. Oncol. 2021 , 23 , 1481–1490. [ Google Scholar ] [ CrossRef ]
  • Sohretoglu, D.; Huang, S. Ganoderma lucidum Polysaccharides as An Anti-cancer Agent. Anticancer. Agents Med. Chem. 2018 , 18 , 667–674. [ Google Scholar ] [ CrossRef ]
  • Elsayed, E.A.; El Enshasy, H.; Wadaan, M.A.; Aziz, R. Mushrooms: A potential natural source of anti-inflammatory compounds for medical applications. Mediat. Inflamm. 2014 , 2014 , 805841. [ Google Scholar ] [ CrossRef ]
  • Yuen, J.W.; Gohel, M.D. Anticancer effects of Ganoderma lucidum: A review of scientific evidence. Nutr. Cancer 2005 , 53 , 11–17. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Lin, Z.B.; Zhang, H.N. Anti-tumor and immunoregulatory activities of Ganoderma lucidum and its possible mechanisms. Acta Pharmacol. Sin. 2004 , 25 , 1387–1395. [ Google Scholar ] [ PubMed ]
  • Wang, T.Y.; Yu, C.C.; Hsieh, P.L.; Liao, Y.W.; Yu, C.H.; Chou, M.Y. GMI ablates cancer stemness and cisplatin resistance in oral carcinomas stem cells through IL-6/Stat3 signaling inhibition. Oncotarget 2017 , 8 , 70422–70430. [ Google Scholar ] [ CrossRef ]
  • Bu, L.L.; Zhao, Z.L.; Liu, J.F.; Ma, S.R.; Huang, C.F.; Liu, B.; Zhang, W.F.; Sun, Z.J. STAT3 blockade enhances the efficacy of conventional chemotherapeutic agents by eradicating head neck stemloid cancer cell. Oncotarget 2015 , 6 , 41944–41958. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kaliyaperumal, K.; Sharma, A.K.; McDonald, D.G.; Dhindsa, J.S.; Yount, C.; Singh, A.K.; Won, J.S.; Singh, I. S-Nitrosoglutathione-mediated STAT3 regulation in efficacy of radiotherapy and cisplatin therapy in head and neck squamous cell carcinoma. Redox Biol. 2015 , 6 , 41–50. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Geiger, J.L.; Grandis, J.R.; Bauman, J.E. The STAT3 pathway as a therapeutic target in head and neck cancer: Barriers and innovations. Oral. Oncol. 2016 , 56 , 84–92. [ Google Scholar ] [ CrossRef ]
  • Mali, S.B. Review of STAT3 (Signal Transducers and Activators of Transcription) in head and neck cancer. Oral. Oncol. 2015 , 51 , 565–569. [ Google Scholar ] [ CrossRef ]
  • Sriuranpong, V.; Park, J.I.; Amornphimoltham, P.; Patel, V.; Nelkin, B.D.; Gutkind, J.S. Epidermal growth factor receptor-independent constitutive activation of STAT3 in head and neck squamous cell carcinoma is mediated by the autocrine/paracrine stimulation of the interleukin 6/gp130 cytokine system. Cancer Res. 2003 , 63 , 2948–2956. [ Google Scholar ]
  • van der Zee, M.; Sacchetti, A.; Cansoy, M.; Joosten, R.; Teeuwssen, M.; Heijmans-Antonissen, C.; Ewing-Graham, P.C.; Burger, C.W.; Blok, L.J.; Fodde, R. IL6/JAK1/STAT3 Signaling Blockade in Endometrial Cancer Affects the ALDHhi/CD126+ Stem-like Component and Reduces Tumor Burden. Cancer Res. 2015 , 75 , 3608–3622. [ Google Scholar ] [ CrossRef ]
  • Han, Z.; Wang, X.; Ma, L.; Chen, L.; Xiao, M.; Huang, L.; Cao, Y.; Bai, J.; Ma, D.; Zhou, J.; et al. Inhibition of STAT3 signaling targets both tumor-initiating and differentiated cell populations in prostate cancer. Oncotarget 2014 , 5 , 8416–8428. [ Google Scholar ] [ CrossRef ]
  • Mi, L.; Wang, X.; Govind, S.; Hood, B.L.; Veenstra, T.D.; Conrads, T.P.; Saha, D.T.; Goldman, R.; Chung, F.L. The role of protein binding in induction of apoptosis by phenethyl isothiocyanate and sulforaphane in human non-small lung cancer cells. Cancer Res. 2007 , 67 , 6409–6416. [ Google Scholar ] [ CrossRef ]
  • Elkashty, O.A.; Ashry, R.; Elghanam, G.A.; Pham, H.M.; Su, X.; Stegen, C.; Tran, S.D. Broccoli extract improves chemotherapeutic drug efficacy against head-neck squamous cell carcinomas. Med. Oncol. 2018 , 35 , 124. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Elkashty, O.A.; Tran, S.D. Broccoli extract increases drug-mediated cytotoxicity towards cancer stem cells of head and neck squamous cell carcinoma. Br. J. Cancer 2020 , 123 , 1395–1403. [ Google Scholar ] [ CrossRef ]
  • Clarke, J.D.; Hsu, A.; Riedl, K.; Bella, D.; Schwartz, S.J.; Stevens, J.F.; Ho, E. Bioavailability and inter-conversion of sulforaphane and erucin in human subjects consuming broccoli sprouts or broccoli supplement in a cross-over study design. Pharmacol. Res. 2011 , 64 , 456–463. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Esumi, T.; Makado, G.; Zhai, H.; Shimizu, Y.; Mitsumoto, Y.; Fukuyama, Y. Efficient synthesis and structure-activity relationship of honokiol, a neurotrophic biphenyl-type neolignan. Bioorg Med. Chem. Lett. 2004 , 14 , 2621–2625. [ Google Scholar ] [ CrossRef ]
  • Lai, I.C.; Shih, P.H.; Yao, C.J.; Yeh, C.T.; Wang-Peng, J.; Lui, T.N.; Chuang, S.E.; Hu, T.S.; Lai, T.Y.; Lai, G.M. Elimination of cancer stem-like cells and potentiation of temozolomide sensitivity by Honokiol in glioblastoma multiforme cells. PLoS ONE 2015 , 10 , e0114830. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Huang, J.S.; Yao, C.J.; Chuang, S.E.; Yeh, C.T.; Lee, L.M.; Chen, R.M.; Chao, W.J.; Whang-Peng, J.; Lai, G.M. Honokiol inhibits sphere formation and xenograft growth of oral cancer side population cells accompanied with JAK/STAT signaling pathway suppression and apoptosis induction. BMC Cancer 2016 , 16 , 245. [ Google Scholar ] [ CrossRef ]
  • Chang, M.T.; Lee, S.P.; Fang, C.Y.; Hsieh, P.L.; Liao, Y.W.; Lu, M.Y.; Tsai, L.L.; Yu, C.C.; Liu, C.M. Chemosensitizing effect of honokiol in oral carcinoma stem cells via regulation of IL-6/Stat3 signaling. Environ. Toxicol. 2018 , 33 , 1105–1112. [ Google Scholar ] [ CrossRef ]
  • Dong, Y.; Ochsenreither, S.; Cai, C.; Kaufmann, A.M.; Albers, A.E.; Qian, X. Aldehyde dehydrogenase 1 isoenzyme expression as a marker of cancer stem cells correlates to histopathological features in head and neck cancer: A meta-analysis. PLoS ONE 2017 , 12 , e0187615. [ Google Scholar ] [ CrossRef ]
  • Chen, Y.W.; Chen, K.H.; Huang, P.I.; Chen, Y.C.; Chiou, G.Y.; Lo, W.L.; Tseng, L.M.; Hsu, H.S.; Chang, K.W.; Chiou, S.H. Cucurbitacin I suppressed stem-like property and enhanced radiation-induced apoptosis in head and neck squamous carcinoma—Derived CD44 + ALDH1 + cells. Mol. Cancer Ther. 2010 , 9 , 2879–2892. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ji, N.; Jiang, L.; Deng, P.; Xu, H.; Chen, F.; Liu, J.; Li, J.; Liao, G.; Zeng, X.; Lin, Y.; et al. Synergistic effect of honokiol and 5-fluorouracil on apoptosis of oral squamous cell carcinoma cells. J. Oral. Pathol. Med. 2017 , 46 , 201–207. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hwang, B.Y.; Roberts, S.K.; Chadwick, L.R.; Wu, C.D.; Kinghorn, A.D. Antimicrobial constituents from goldenseal (the Rhizomes of Hydrastis canadensis) against selected oral pathogens. Planta Med. 2003 , 69 , 623–627. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kuo, C.L.; Chi, C.W.; Liu, T.Y. The anti-inflammatory potential of berberine in vitro and in vivo. Cancer Lett. 2004 , 203 , 127–137. [ Google Scholar ] [ CrossRef ]
  • Tsang, C.M.; Lau, E.P.; Di, K.; Cheung, P.Y.; Hau, P.M.; Ching, Y.P.; Wong, Y.C.; Cheung, A.L.; Wan, T.S.; Tong, Y.; et al. Berberine inhibits Rho GTPases and cell migration at low doses but induces G2 arrest and apoptosis at high doses in human cancer cells. Int. J. Mol. Med. 2009 , 24 , 131–138. [ Google Scholar ] [ CrossRef ]
  • Wang, J.; Yang, S.; Cai, X.; Dong, J.; Chen, Z.; Wang, R.; Zhang, S.; Cao, H.; Lu, D.; Jin, T.; et al. Berberine inhibits EGFR signaling and enhances the antitumor effects of EGFR inhibitors in gastric cancer. Oncotarget 2016 , 7 , 76076–76086. [ Google Scholar ] [ CrossRef ]
  • Lin, C.Y.; Hsieh, P.L.; Liao, Y.W.; Peng, C.Y.; Lu, M.Y.; Yang, C.H.; Yu, C.C.; Liu, C.M. Berberine-targeted miR-21 chemosensitizes oral carcinomas stem cells. Oncotarget 2017 , 8 , 80900–80908. [ Google Scholar ] [ CrossRef ]
  • Kim, J.S.; Oh, D.; Yim, M.J.; Park, J.J.; Kang, K.R.; Cho, I.A.; Moon, S.M.; Oh, J.S.; You, J.S.; Kim, C.S.; et al. Berberine induces FasL-related apoptosis through p38 activation in KB human oral cancer cells. Oncol. Rep. 2015 , 33 , 1775–1782. [ Google Scholar ] [ CrossRef ]
  • Chen, G.; Zhu, L.; Liu, Y.; Zhou, Q.; Chen, H.; Yang, J. Isoliquiritigenin, a flavonoid from licorice, plays a dual role in regulating gastrointestinal motility in vitro and in vivo. Phytother. Res. 2009 , 23 , 498–506. [ Google Scholar ] [ CrossRef ]
  • Kim, J.Y.; Park, S.J.; Yun, K.J.; Cho, Y.W.; Park, H.J.; Lee, K.T. Isoliquiritigenin isolated from the roots of Glycyrrhiza uralensis inhibits LPS-induced iNOS and COX-2 expression via the attenuation of NF-kappaB in RAW 264.7 macrophages. Eur. J. Pharmacol. 2008 , 584 , 175–184. [ Google Scholar ] [ CrossRef ]
  • Hou, C.; Li, W.; Li, Z.; Gao, J.; Chen, Z.; Zhao, X.; Yang, Y.; Zhang, X.; Song, Y. Synthetic Isoliquiritigenin Inhibits Human Tongue Squamous Carcinoma Cells through Its Antioxidant Mechanism. Oxid. Med. Cell Longev. 2017 , 2017 , 1379430. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hu, F.W.; Yu, C.C.; Hsieh, P.L.; Liao, Y.W.; Lu, M.Y.; Chu, P.M. Targeting oral cancer stemness and chemoresistance by isoliquiritigenin-mediated GRP78 regulation. Oncotarget 2017 , 8 , 93912–93923. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Chen, S.F.; Nieh, S.; Jao, S.W.; Liu, C.L.; Wu, C.H.; Chang, Y.C.; Yang, C.Y.; Lin, Y.S. Quercetin suppresses drug-resistant spheres via the p38 MAPK-Hsp27 apoptotic pathway in oral cancer cells. PLoS ONE 2012 , 7 , e49275. [ Google Scholar ] [ CrossRef ]
  • Langdon, S.P.; Rabiasz, G.J.; Hirst, G.L.; King, R.J.; Hawkins, R.A.; Smyth, J.F.; Miller, W.R. Expression of the heat shock protein HSP27 in human ovarian cancer. Clin. Cancer Res. 1995 , 1 , 1603–1609. [ Google Scholar ]


Bizzoca, M.E.; Caponio, V.C.A.; Lo Muzio, L.; Claudio, P.P.; Cortese, A. Methods for Overcoming Chemoresistance in Head and Neck Squamous Cell Carcinoma: Keeping the Focus on Cancer Stem Cells, a Systematic Review. Cancers 2024, 16, 3004. https://doi.org/10.3390/cancers16173004


Traditional reviews vs. systematic reviews

Posted on 3rd February 2016 by Weyinmi Demeyin


Millions of articles are published yearly (1), making it difficult for clinicians to keep abreast of the literature. Reviews of the literature are necessary to provide clinicians with accurate, up-to-date information and so support appropriate management of their patients. Reviews usually summarise and synthesise primary research findings on a particular topic of interest, and they fall into two main categories: the ‘traditional’ review and the ‘systematic’ review, with major differences between them.

Traditional reviews provide a broad overview of a research topic with no clear methodological approach (2). Information is collected and interpreted unsystematically, with subjective summaries of findings. Authors aim to describe and discuss the literature from a contextual or theoretical point of view. Although such reviews may be conducted by topic experts, they can be subject to bias arising from the authors’ preconceived ideas or conclusions.

Systematic reviews are overviews of the literature undertaken by identifying, critically appraising and synthesising the results of primary research studies using an explicit, methodological approach (3). They aim to summarise the best available evidence on a particular research topic.

The main differences between traditional reviews and systematic reviews are summarised below in terms of the following characteristics: Authors, Study protocol, Research question, Search strategy, Sources of literature, Selection criteria, Critical appraisal, Synthesis, Conclusions, Reproducibility, and Update.

Traditional reviews

  • Authors: One or more authors usually experts in the topic of interest
  • Study protocol: No study protocol
  • Research question: Broad to specific question, hypothesis not stated
  • Search strategy: No detailed search strategy, search is probably conducted using keywords
  • Sources of literature: Not usually stated and non-exhaustive, usually well-known articles. Prone to publication bias
  • Selection criteria: No specific selection criteria, usually subjective. Prone to selection bias
  • Critical appraisal: Variable evaluation of study quality or method
  • Synthesis: Often qualitative synthesis of evidence
  • Conclusions: Sometimes evidence based but can be influenced by author’s personal belief
  • Reproducibility: Findings cannot be reproduced independently as conclusions may be subjective
  • Update: Cannot be continuously updated

Systematic reviews

  • Authors: Two or more authors are involved in good quality systematic reviews, may comprise experts in the different stages of the review
  • Study protocol: Written study protocol which includes details of the methods to be used
  • Research question: Specific question which may have all or some of PICO components (Population, Intervention, Comparator, and Outcome). Hypothesis is stated
  • Search strategy: Detailed and comprehensive search strategy is developed
  • Sources of literature: Databases, websites and other sources of included studies are listed. Both published and unpublished literature are considered
  • Selection criteria: Specific inclusion and exclusion criteria
  • Critical appraisal: Rigorous appraisal of study quality
  • Synthesis: Narrative, quantitative or qualitative synthesis
  • Conclusions: Conclusions drawn are evidence based
  • Reproducibility: Accurate documentation of method means results can be reproduced
  • Update: Systematic reviews can be periodically updated to include new evidence

Decisions and health policies about patient care should be evidence based in order to provide the best treatment for patients. Systematic reviews provide a means of systematically identifying and synthesising the evidence, making it easier for policy makers and practitioners to assess such relevant information and hopefully improve patient outcomes.

  • Fletcher RH, Fletcher SW. Evidence-Based Approach to the Medical Literature. Journal of General Internal Medicine. 1997; 12(Suppl 2):S5-S14. doi:10.1046/j.1525-1497.12.s2.1.x. Available from:  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1497222/
  • Rother ET. Systematic literature review X narrative review. Acta paul. enferm. [Internet]. 2007 June [cited 2015 Dec 25]; 20(2): v-vi. Available from: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-21002007000200001&lng=en. http://dx.doi.org/10.1590/S0103-21002007000200001
  • Khan KS, Ter Riet G, Glanville J, Sowden AJ, Kleijnen J. Undertaking systematic reviews of research on effectiveness: CRD’s guidance for carrying out or commissioning reviews. NHS Centre for Reviews and Dissemination; 2001.


The information is very much valuable; a lot is indeed expected in order to master systematic review.


Thank you very much for the information here. My question is: is it possible for me to do a systematic review which is not directed toward patients but at a specific population? To be specific, can I do a systematic review on the mental health needs of students?


Hi Rosemary, I wonder whether it would be useful for you to look at Module 1 of the Cochrane Interactive Learning modules. This is a free module, open to everyone (you will just need to register for a Cochrane account if you don’t already have one). This guides you through conducting a systematic review, with a section specifically around defining your research question, which I feel will help you in understanding your question further. Head to this link for more details: https://training.cochrane.org/interactivelearning

I wonder if you have had a search on the Cochrane Library as yet, to see what Cochrane systematic reviews already exist? There is one review, titled “Psychological interventions to foster resilience in healthcare students” which may be of interest: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD013684/full You can run searches on the library by the population and intervention you are interested in.

I hope these help you start in your investigations. Best wishes. Emma.


Is a systematic review valid if there is only one author?

Hi Alex, so sorry for the delay in replying to you. Yes, that is a very good point. I have copied a paragraph from the Cochrane Handbook here, which does say that for a Cochrane Review you should have more than one author.

“Cochrane Reviews should be undertaken by more than one person. In putting together a team, authors should consider the need for clinical and methodological expertise for the review, as well as the perspectives of stakeholders. Cochrane author teams are encouraged to seek and incorporate the views of users, including consumers, clinicians and those from varying regions and settings to develop protocols and reviews. Author teams for reviews relevant to particular settings (e.g. neglected tropical diseases) should involve contributors experienced in those settings”.

Thank you for the discussion point, much appreciated.


Hello, I’d like to ask you a question: what’s the difference between a systematic review and a systematized review? In addition, if the screening process of the review was carried out by only one author, is it still a systematic review or is it a systematized review? Thanks

Hi. This article from Grant & Booth is a really good one to look at explaining different types of reviews: https://onlinelibrary.wiley.com/doi/10.1111/j.1471-1842.2009.00848.x It includes Systematic Reviews and Systematized Reviews. In answer to your second question, have a look at this Chapter from the Cochrane handbook. It covers the question about ‘Who should do a systematic review’. https://training.cochrane.org/handbook/current/chapter-01

A really relevant part of this chapter is this: “Systematic reviews should be undertaken by a team. Indeed, Cochrane will not publish a review that is proposed to be undertaken by a single person. Working as a team not only spreads the effort, but ensures that tasks such as the selection of studies for eligibility, data extraction and rating the certainty of the evidence will be performed by at least two people independently, minimizing the likelihood of errors.”

I hope this helps with the question. Best wishes. Emma.


Systematic Literature Review or Literature Review?


As a researcher, you may be required to conduct a literature review. But what kind of review do you need to complete? Is it a systematic literature review or a standard literature review? In this article, we’ll outline the purpose of a systematic literature review, the difference between literature review and systematic review, and other important aspects of systematic literature reviews.

What is a Systematic Literature Review?

The purpose of a systematic literature review is simple: to provide a high-level overview of the evidence on a particular research question. The question itself is highly focused to match the review of the literature related to the topic at hand, for example a focused question related to medical or clinical outcomes.

The components of a systematic literature review are quite different from those of the standard literature review research theses that most of us are used to (more on this below). And because of the specificity of the research question, a systematic literature review typically involves more than one primary author. There’s more work involved in a systematic literature review, so it makes sense to divide the work among two or three (or even more) researchers.

Your systematic literature review will follow very clear and defined protocols that are decided on prior to any review. This involves extensive planning, and a deliberately designed search strategy that is in tune with the specific research question. Every aspect of a systematic literature review, including the research protocols, which databases are used, and dates of each search, must be transparent so that other researchers can be assured that the systematic literature review is comprehensive and focused.

Most systematic literature reviews originated in medical science, but they now address any evidence-based research question. In addition to the focus and transparency of these types of reviews, the hallmarks of a quality systematic literature review include:

  • Clear and concise review and summary
  • Comprehensive coverage of the topic
  • Accessibility and equality of the research reviewed

Systematic Review vs Literature Review

The difference between a literature review and a systematic review comes back to the initial research question. Whereas the systematic review is very specific and focused, the standard literature review is much more general. The components of a literature review, for example, are similar to those of any other research paper. That is, it includes an introduction, a description of the methods used, a discussion and conclusion, as well as a reference list or bibliography.

A systematic review, however, includes entirely different components that reflect the specificity of its research question, and the requirement for transparency and inclusion. For instance, the systematic review will include:

  • Eligibility criteria for included research
  • A description of the systematic research search strategy
  • An assessment of the validity of reviewed research
  • Interpretations of the results of research included in the review

As you can see, contrary to the general overview or summary of a topic, the systematic literature review involves much more detail and work to compile than a standard literature review. Indeed, it can take years to conduct and write a systematic literature review. But the information that practitioners and other researchers can glean from a systematic literature review is, by its very nature, exceptionally valuable.

This is not to diminish the value of the standard literature review. The importance of literature reviews in research writing is discussed in this article . It’s just that the two types of research reviews answer different questions, and, therefore, have different purposes and roles in the world of research and evidence-based writing.

Systematic Literature Review vs Meta-Analysis

It would be understandable to think that a systematic literature review is similar to a meta-analysis. But whereas a systematic review can include several research studies to answer a specific question, a meta-analysis statistically compares the results of different studies to suss out any inconsistencies or discrepancies. For more about this topic, check out the Systematic Review vs Meta-Analysis article.



The difference between a systematic review and a literature review


Covidence takes a look at the difference between the two

Most of us are familiar with the terms systematic review and literature review. Both review types synthesise evidence and provide summary information. So what are the differences? What does systematic mean? And which approach is best 🤔?

‘Systematic’ describes the review’s methods. It means that they are transparent, reproducible and defined before the search gets underway. That’s important because it helps to minimise the bias that would result from cherry-picking studies in a non-systematic way.

This brings us to literature reviews. Literature reviews don’t usually apply the same rigour in their methods. That’s because, unlike systematic reviews, they don’t aim to produce an answer to a clinical question. Literature reviews can provide context or background information for a new piece of research. They can also stand alone as a general guide to what is already known about a particular topic. 

Interest in systematic reviews has grown in recent years and the frequency of ‘systematic reviews’ in Google books has overtaken ‘literature reviews’ (with all the usual Ngram Viewer warnings – it searches around 6% of all books, no journals). 


Let’s take a look at the two review types in more detail to highlight some key similarities and differences 👀.

🙋🏾‍♂️ What is a systematic review?

Systematic reviews ask a specific question about the effectiveness of a treatment and answer it by summarising evidence that meets a set of pre-specified criteria. 

The process starts with a research question and a protocol or research plan. A review team searches for studies to answer the question using a highly sensitive search strategy. The retrieved studies are then screened for eligibility using the inclusion and exclusion criteria (this is done by at least two people working independently). Next, the reviewers extract the relevant data and assess the quality of the included studies. Finally, the review team synthesises the extracted study data and presents the results. The process is shown in Figure 2.
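The independent dual screening described above is often sanity-checked with a chance-corrected agreement statistic. Here is a minimal sketch (not part of the original post; the function and the reviewers’ decision lists are invented for illustration) of Cohen’s kappa computed over two reviewers’ include/exclude decisions:

```python
# Illustrative sketch: Cohen's kappa, a chance-corrected measure of
# agreement between two reviewers screening the same records
# independently. The decision lists below are made up.

def cohens_kappa(rater_a, rater_b):
    """Return chance-corrected agreement between two raters' decisions."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of records on which the reviewers agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    labels = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(label) / n) * (rater_b.count(label) / n)
                   for label in labels)
    return (observed - expected) / (1 - expected)

reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```

With these invented decisions the reviewers agree on five of six records, giving a kappa of about 0.67; disagreements like the third record would normally go to discussion or a third reviewer.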


The results of a systematic review can be presented in many ways and the choice will depend on factors such as the type of data. Some reviews use meta-analysis to produce a statistical summary of effect estimates. Other reviews use narrative synthesis to present a textual summary.
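To make “a statistical summary of effect estimates” concrete, here is a minimal fixed-effect inverse-variance pooling sketch. Everything in it (the function name and the per-study SMDs and standard errors) is invented for illustration; real reviews typically use a dedicated meta-analysis package and often a random-effects model instead:

```python
# Illustrative sketch: fixed-effect inverse-variance pooling of
# standardized mean differences (SMDs). Study values are made up.
import math

def pool_fixed_effect(estimates, std_errors):
    """Pool per-study effects, weighting each by the inverse of its variance."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # 1.96 is the normal-approximation multiplier for a 95% CI.
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

smd, (low, high) = pool_fixed_effect([0.45, 0.60, 0.30], [0.10, 0.15, 0.12])
print(f"Pooled SMD = {smd:.3f}, 95% CI ({low:.3f} to {high:.3f})")
```

Note how the most precise study (smallest standard error) dominates the pooled estimate; that inverse-variance weighting is the core idea behind the forest plots that usually accompany a meta-analysis.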

Covidence accelerates the screening, data extraction, and quality assessment stages of your systematic review. It provides simple workflows and easy collaboration with colleagues around the world.

When is it appropriate to do a systematic review?

If you have a clinical question about the effectiveness of a particular treatment or treatments, you could answer it by conducting a systematic review. Systematic reviews in clinical medicine often follow the PICO framework, which stands for:

  • 👦 Population (or patients)

  • 💊 Intervention

  • 💊 Comparison

  • 📊 Outcome

Here’s a typical example of a systematic review title that uses the PICO framework: Alarms [intervention] versus drug treatments [comparison] for the prevention of nocturnal enuresis [outcome] in children [population]

Key attributes

  • Systematic reviews follow prespecified methods
  • The methods are explicit and replicable
  • The review team assesses the quality of the evidence and attempts to minimise bias
  • Results and conclusions are based on the evidence

🙋🏻‍♀️ What is a literature review?

Literature reviews provide an overview of what is known about a particular topic. They evaluate the material, rather than simply restating it, but the methods used to do this are not usually prespecified and they are not described in detail in the review. The search might be comprehensive but it does not aim to be exhaustive. Literature reviews are also referred to as narrative reviews.

Literature reviews use a topical approach and often take the form of a discussion. Precision and replicability are not the focus; rather, the author seeks to demonstrate their understanding and perhaps also present their work in the context of what has come before. Often, this sort of synthesis does not attempt to control for the author’s own bias. The results or conclusions of a literature review are likely to be presented in words rather than with statistical methods.

When is it appropriate to do a literature review?

We’ve all written some form of literature review: they are a central part of academic research ✍🏾. Literature reviews often form the introduction to a piece of writing, to provide the context. They can also be used to identify gaps in the literature and the need to fill them with new research 📚.

Key attributes

  • Literature reviews take a thematic approach
  • They do not specify inclusion or exclusion criteria
  • They do not answer a clinical question
  • The conclusions might be influenced by the author’s own views

🙋🏽 Ok, but what is a systematic literature review?

A quick internet search retrieves a cool 200 million hits for ‘systematic literature review’. What strange hybrid is this 🤯🤯 ?

Systematic review methodology has its roots in evidence-based medicine but it quickly gained traction in other areas – the social sciences for example – where researchers recognise the value of being methodical and minimising bias. Systematic review methods are increasingly applied to the more traditional types of review, including literature reviews, hence the proliferation of terms like ‘systematic literature review’ and many more.

Beware of the labels 🚨. The terminology used to describe review types can vary by discipline and changes over time. To really understand how any review was done you will need to examine the methods critically and make your own assessment of the quality and reliability of each synthesis 🤓.

Review methods are evolving constantly as researchers find new ways to meet the challenge of synthesising the evidence. Systematic review methods have influenced many other review types, including the traditional literature review. 

Covidence is a web-based tool that saves you time at the screening, selection, data extraction and quality assessment stages of your systematic review. It supports easy collaboration across teams and provides a clear overview of task status.

Laura Mellor. Portsmouth, UK


Penn State University Libraries

Know the Difference! Systematic Review vs. Literature Review

It is common to confuse systematic and literature reviews, as both are used to provide a summary of the existing literature or research on a specific topic. Even with this common ground, the two types vary significantly. Please review the following chart (and its corresponding poster linked below) for a detailed explanation of each, as well as the differences between the two types of review.

Systematic vs. Literature Review

| | Systematic Review | Literature Review |
| --- | --- | --- |
| Definition | High-level overview of primary research on a focused question that identifies, selects, synthesizes, and appraises all high-quality research evidence relevant to that question | Qualitatively summarizes evidence on a topic using informal or subjective methods to collect and interpret studies |
| Goals | Answers a focused clinical question; eliminates bias | Provides a summary or overview of a topic |
| Question | Clearly defined and answerable clinical question; PICO recommended as a guide | Can be a general topic or a specific question |
| Components | Pre-specified eligibility criteria; systematic search strategy; assessment of the validity of findings; interpretation and presentation of results; reference list | Introduction; methods; discussion; conclusion; reference list |
| Number of authors | Three or more | One or more |
| Timeline | Months to years (average eighteen months) | Weeks to months |
| Requirements | Thorough knowledge of the topic; searches of all relevant databases; statistical analysis resources (for meta-analysis) | Understanding of the topic; searches of one or more databases |
| Value | Connects practicing clinicians to high-quality evidence; supports evidence-based practice | Provides a summary of the literature on the topic |
  • What's in a name? The difference between a Systematic Review and a Literature Review, and why it matters by Lynn Kysh, MLIS, University of Southern California - Norris Medical Library

Usc Upstate Library Home

Literature Review: Know the Difference! Systematic Review vs. Literature Review


Systematic Review & Literature Review

A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question.

  • What's in a name? The difference between a Systematic Review and a Literature Review, and why it matters.

Evidence Pyramid

The evidence pyramid visually depicts the evidential strength of different research designs. Studies with the highest internal validity, characterized by a high degree of quantitative analysis, review, and stringent scientific methodology, sit at the top of the pyramid, while observational research and expert opinion reside at the bottom. In evidence-based practice, the systematic review is considered the highest level of information and is at the top of the pyramid. (The pyramid was produced by HLWIKI Canada and is CC-licensed.)


5 differences between a systematic review and other types of literature review

September 26, 2017.


There are many types of reviews of the medical and public health evidence, each with its own benefits and challenges. In this blog post, we detail five key differences between a systematic review and other types of reviews, including narrative and comprehensive reviews.

First, we must define some terms. “Literature review” is a general term that describes a summary of the evidence on a certain topic. Literature reviews can be very simple or highly complex, and they can use a variety of methods for finding, assessing, and presenting evidence. A “systematic review” is a specific type of review that uses rigorous and transparent methods in an effort to summarize all of the available evidence with little to no bias. A good systematic review adheres to the international standards set forth in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 27-item checklist [1]. Reviews that are less rigorous are often called “narrative,” “comprehensive,” or simply “literature reviews.”

So, what are the 5 key differences between a systematic review and other types of review?

1. The goal of the review The goal of a literature review can be broad and descriptive (example: “ Describe the available treatments for sleep apnea ”) or it can be to answer a specific question (example: “ What is the efficacy of CPAP for people with sleep apnea? ”). The goal of a systematic review is to answer a specific and focused question (example: “ Which treatment for sleep apnea reduces the apnea-hypopnea index more: CPAP or mandibular advancement device? ”). People seeking to make evidence-based decisions look to systematic reviews due to their completeness and reduced risk of bias.

2. Searching for evidence Where and how one searches for evidence is an important difference. While a literature review may draw on just one database or source, systematic reviews require more comprehensive efforts to locate evidence. Multiple databases are searched, each with a specifically tailored search strategy (usually designed and implemented by a specialist librarian). In addition, systematic reviews often include attempts to find data beyond typical databases. Systematic reviewers might search conference abstracts or the websites of professional associations or pharmaceutical companies, and they may contact study authors to obtain additional or unpublished data. All of these extra steps reflect an attempt to minimize bias in the summary of the evidence.

3. Assessing search results In a systematic review, the parameters for inclusion are established at the start of the project and applied consistently to search results. Usually, such parameters take the form of PICOs (population, intervention, comparison, outcomes). Reviewers hold search results against strict criteria based on the PICOs to determine appropriateness for inclusion. Another key component of a systematic review is dual independent review of search results: each search result is reviewed by at least two people independently. In many other literature reviews, there is only a single reviewer, which can result in bias (even if unintentional) and missed studies.

4. Summary of findings In a systematic review, an effort is usually made to assess the quality of the evidence, often using risk of bias assessment, at the study level and often across studies. Other literature reviews rarely assess and report any formal quality assessment by individual study. Risk of bias assessment is important to a thorough summary of the evidence, since conclusions based on biased results can be incorrect (and dangerous, at worst). Results from a systematic review can sometimes be pooled quantitatively (e.g., in a meta-analysis) to provide numeric estimates of treatment effects, for example.

5. Utility of results Due to the rigor and transparency applied to a systematic review, it is not surprising that the results are usually of higher quality and at lower risk of bias than results from other types of literature review. Literature reviews can be useful to inform background sections of papers and reports and to give the reader an overview of a topic. Systematic reviews are used by professional associations and government agencies to issue guidelines and recommendations; such important activities are rarely based on a non-systematic review. Clinicians may also rely on high quality systematic reviews to make evidence-based decisions about patient care.

Each type of review has a place in the scientific literature. For narrow, specific research questions, a systematic review can provide a thorough summary and assessment of all of the available evidence. For broader research questions, other types of literature review can summarize the best available evidence using targeted search strategies. Ultimately, the choice of methodology depends on the research question and the goal of the review.

[1] Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 6(7): e1000097. doi:10.1371/journal.pmed.1000097.


Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

The example used throughout this guide is a systematic review by Boyle and colleagues, which answered the question: “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.


A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is not a type of review but a statistical technique that combines the results of two or more studies, usually to estimate an effect size.
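To make the idea of combining effect sizes concrete, here is a minimal sketch of the core arithmetic behind many meta-analyses: fixed-effect, inverse-variance pooling. The effect sizes and standard errors below are hypothetical illustrative numbers, not data from any study.

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Combine study effect sizes into one weighted summary estimate.

    Each study is weighted by the inverse of its variance, so more
    precise studies (smaller standard errors) contribute more.
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    # 95% confidence interval under a normal approximation
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

effects = [0.30, 0.55, 0.40]     # hypothetical standardized mean differences
std_errors = [0.10, 0.15, 0.12]
estimate, (lo, hi) = pool_fixed_effect(effects, std_errors)
print(f"pooled SMD = {estimate:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Real meta-analyses usually also consider random-effects models and heterogeneity; this sketch only shows why precise studies dominate the pooled result.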

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They’re thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming.
  • They’re narrow in scope: they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)

In the eczema example, the five PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Boyle and colleagues’ research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
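As a toy illustration of the database-search step above, a Boolean search string can be assembled by OR-ing synonyms within each concept and AND-ing the concepts together. The terms below are hypothetical examples, not a validated search strategy:

```python
# Minimal sketch: build a Boolean database query from synonym groups.
# Terms are hypothetical illustrations, not a validated search strategy.
def build_query(concept_groups):
    """OR together synonyms within a concept, AND concepts together."""
    ored = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
            for terms in concept_groups]
    return " AND ".join(ored)

query = build_query([
    ["eczema", "atopic dermatitis"],          # population
    ["probiotic", "probiotics"],              # intervention
    ["randomized controlled trial", "RCT"],   # study design
])
print(query)
# → ("eczema" OR "atopic dermatitis") AND ("probiotic" OR "probiotics")
#   AND ("randomized controlled trial" OR "RCT")
```

In practice, a specialist librarian tailors strategies like this to each database’s own syntax (field tags, truncation, controlled vocabulary such as MeSH).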

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.

Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group.

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

They also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
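Whether the data from different studies are comparable enough to pool is often checked with heterogeneity statistics. Here is a minimal sketch (with made-up numbers) of two standard ones, Cochran's Q and I², under the usual fixed-effect inverse-variance weighting:

```python
def heterogeneity(effects, std_errors):
    """Cochran's Q and the I² statistic for a set of study effects.

    Q measures how far individual study results scatter around the
    pooled estimate; I² expresses the share of that scatter beyond
    what chance alone would produce (0% = consistent results).
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

effects = [0.30, 0.55, 0.40]     # hypothetical standardized mean differences
std_errors = [0.10, 0.15, 0.12]
q, i2 = heterogeneity(effects, std_errors)
print(f"Q = {q:.2f}, I² = {i2:.0f}%")
# → Q = 1.95, I² = 0% (these hypothetical studies show little heterogeneity)
```

High I² values are one common reason reviewers fall back on a narrative synthesis instead of, or alongside, a pooled estimate.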

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.


A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. They are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An  annotated bibliography is a list of  source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a  paper .  

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.


Literature Review vs Systematic Review


It’s common to confuse systematic and literature reviews because both summarize the existing literature or research on a specific topic. Despite this commonality, the two types of review differ significantly. The following table explains each type and the differences between them.

Kysh, Lynn (2013). Difference between a systematic review and a literature review. figshare. Available at: http://dx.doi.org/10.6084/m9.figshare.766364



Resources for Systematic Reviews

  • NIH Systematic Review Protocols and Protocol Registries Systematic review services and information from the National Institutes of Health.
  • Purdue University Systematic Reviews LibGuide Purdue University has created this helpful online research guide on systematic reviews. Most content is available publicly but please note that some links are accessible only to Purdue students.

It is common to confuse literature and systematic reviews because both are used to provide a summary of the existing literature or research on a specific topic. Despite this commonality, these two reviews vary significantly. The table below highlights the differences.

Literature review:

  • Definition: Qualitatively summarizes evidence on a topic using informal or subjective methods to collect and interpret studies
  • Goals: Provide a summary or overview of the topic
  • Question: Can be a general topic or a specific question
  • Components: Introduction, methods, discussion, conclusion, reference list
  • Number of authors: One or more
  • Timeline: Weeks to months
  • Requirements: Understanding of the topic; searches of one or more databases
  • Value: Provides a summary of the literature on a topic

Systematic review:

  • Definition: High-level overview of primary research on a focused question that identifies, selects, synthesizes, and appraises all high-quality research evidence relevant to that question
  • Goals: Answer a focused clinical question; eliminate bias
  • Question: Clearly defined and answerable clinical question
  • Components: Pre-specified eligibility criteria, systematic search strategy, assessment of the validity of findings, interpretation and presentation of results, reference list
  • Number of authors: Three or more
  • Timeline: Months to years (average 18 months)
  • Requirements: Thorough knowledge of the topic; searches of all relevant databases; statistical analysis resources (for meta-analysis)
  • Value: Connects practicing clinicians to high-quality evidence; supports evidence-based practice

Kysh, Lynn (2013). Difference between a systematic review and a literature review. figshare. Poster. https://doi.org/10.6084/m9.figshare.766364.v1



Understanding the Differences Between a Systematic Review vs Literature Review


Let’s look at these differences in further detail.

Goal of the Review

The objective of a literature review is to provide context or background information about a topic of interest. Hence the methodology is less comprehensive and not exhaustive. The aim is to provide an overview of a subject as an introduction to a paper or report. This overview is obtained firstly through evaluation of existing research, theories, and evidence, and secondly through individual critical evaluation and discussion of this content.

A systematic review attempts to answer a specific clinical question (for example, the effectiveness of a drug in treating an illness). Answering such questions comes with a responsibility to be comprehensive and accurate; failure to do so could have life-threatening consequences. This need for precision calls for a systematic approach. The aim of a systematic review is to establish authoritative findings from an account of existing evidence using objective, thorough, reliable, and reproducible research approaches and frameworks.

Level of Planning Required

The methodology involved in a literature review is less complicated and requires a lower degree of planning. For a systematic review, the planning is extensive and requires defining robust pre-specified protocols. It starts with formulating the research question and the scope of the research. The PICO approach (population, intervention, comparison, and outcomes) is used in designing the research question. Planning also involves establishing strict eligibility criteria for the inclusion and exclusion of the primary resources to be included in the study. Every stage of the systematic review methodology is pre-specified to the last detail before the review process begins. It is recommended to register the protocol of your systematic review to avoid duplication; journal publishers now look for registration in order to ensure that reviews meet predefined criteria for conducting a systematic review [1].
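To illustrate how the PICO elements structure a research question, the review question from the dementia QoL study above could be decomposed as follows. This is a minimal sketch: the `PICOQuestion` class and the wording of each field are hypothetical illustrations, not part of any registered protocol.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """One way to hold the four PICO elements of a review question."""
    population: str
    intervention: str
    comparison: str
    outcome: str

    def as_question(self) -> str:
        # Render the four elements as a single answerable question
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome}?")

q = PICOQuestion(
    population="people living with dementia",
    intervention="proxy-patient perspective assessment",
    comparison="proxy-proxy perspective assessment",
    outcome="rated quality of life",
)
```

Spelling out each element this way makes the eligibility criteria that follow (which populations, which comparators) easier to pre-specify.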

Search Strategy for Sourcing Primary Resources


Quality Assessment of the Collected Resources

A rigorous appraisal of collected resources for the quality and relevance of the data they provide is a crucial part of the systematic review methodology. A systematic review usually employs a dual independent review process, which involves two reviewers evaluating the collected resources based on pre-defined inclusion and exclusion criteria. The idea is to limit bias in selecting the primary studies. Such a strict review system is generally not a part of a literature review.

Presentation of Results

Most literature reviews present their findings in narrative or discussion form: textual summaries of the results used to critique or analyze a body of literature, typically serving as the introduction to a broader work. For this reason, literature reviews are sometimes also called narrative reviews.

A systematic review requires a higher level of rigor, transparency, and often peer review. The results of a systematic review can be presented as numeric effect estimates derived using statistical methods or as a textual summary of all the evidence collected. Meta-analysis is employed to provide the necessary statistical support to evidence outcomes; it is usually conducted to examine the evidence on a condition and its treatment. The aims of a meta-analysis are to determine whether an effect exists, whether the effect is positive or negative, and to establish a conclusive estimate of the effect [2].

Using statistical methods in generating the review results increases confidence in the review. Results of a systematic review are then used by clinicians to prescribe treatment or for pharmacovigilance purposes. The results of the review can also be presented as a qualitative assessment when the end goal is issuing recommendations or guidelines.
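To make the statistical step concrete, here is a minimal sketch of fixed-effect inverse-variance pooling of standardized mean differences, the kind of synthesis a meta-analysis performs. The `pooled_smd` helper and the study numbers are illustrative assumptions, not data from any real review.

```python
import math

def pooled_smd(studies):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences (SMDs). Each study is an (smd, variance) pair;
    returns the pooled SMD and its 95% confidence interval."""
    weights = [1.0 / var for _, var in studies]      # inverse-variance weights
    total_w = sum(weights)
    pooled = sum(w * s for (s, _), w in zip(studies, weights)) / total_w
    se = math.sqrt(1.0 / total_w)                    # standard error of the pooled SMD
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three made-up studies, each summarized as (SMD, variance of the SMD)
studies = [(0.40, 0.02), (0.55, 0.05), (0.62, 0.04)]
smd, (low, high) = pooled_smd(studies)
```

Precise studies (small variance) get large weights, so the pooled estimate is pulled toward the best-measured results; random-effects models extend this by adding a between-study variance term.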

Risk of Bias

Literature reviews are mostly used by authors to provide background information with the intended purpose of introducing their own research later. Since the search for included primary resources is less exhaustive, a literature review is more prone to bias.

One of the main objectives for conducting a systematic review is to reduce bias in the evidence outcome. Extensive planning, strict eligibility criteria for inclusion and exclusion, and a statistical approach for computing the result reduce the risk of bias.

Intervention studies consider risk of bias as the “likelihood of inaccuracy in the estimate of causal effect in that study.” In systematic reviews, assessing the risk of bias is critical in providing accurate assessments of overall intervention effect [3].

With numerous review methods available for analyzing, synthesizing, and presenting existing scientific evidence, it is important for researchers to understand the differences between the review methods. Choosing the right method for a review is crucial in achieving the objectives of the research.

[1] “Systematic Review Protocols and Protocol Registries | NIH Library,” www.nihlibrary.nih.gov . https://www.nihlibrary.nih.gov/services/systematic-review-service/systematic-review-protocols-and-protocol-registries

[2] A. B. Haidich, “Meta-analysis in medical research,” Hippokratia, vol. 14, no. Suppl 1, pp. 29–37, Dec. 2010, [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049418/


Types of Literature Reviews


How to Choose Your Review Method

TREAD* Lightly and Consider...

  • Available Time for conducting your review
  • Any Resource constraints within which you must deliver your review
  • Any requirements for specialist Expertise in order to complete the review
  • The requirements of the Audience for your review and its intended purpose
  • The richness, thickness and availability of Data within included studies

* Booth A, Sutton A, Papaioannou D. Systematic approaches to a successful literature review. 2nd edition. Los Angeles, CA: Sage, 2016. (p.36)

How do I write a Review Protocol?

  • What is a Protocol? (UofT)
  • Guidance on Registering a Review with PROSPERO

Writing Resources

  • Advice on Academic Writing (University of Toronto)
  • How to write a great research paper using reporting guidelines (EQUATOR Network)
  • Instructions to Authors in the Health Sciences
  • Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly work in Medical Journals (ICMJE)
  • Writing resources guide (BMC)

What is a Literature Review?

A literature review provides an overview of what's been written about a specific topic. It is a generic term. There are many different types of literature reviews which can cover a wide range of subjects at various levels of completeness and comprehensiveness. Choosing the type of review you wish to conduct will depend on the purpose of your review, and the time and resources you have available.

This page will provide definitions of some of the most common review types in the health sciences and links to relevant reporting guidelines or methodological papers.

Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal. 2009 Jun 1;26(2):91-108.

  • Summary of Five Types of Reviews Table summarizing the characteristics, guidelines etc. of 5 common types of review article.

Traditional (Narrative) Review

Traditional (narrative) literature reviews provide a broad overview of a research topic with no clear methodological approach. Information is collected and interpreted unsystematically with subjective summaries of findings. Authors aim to describe and discuss the literature from a contextual or theoretical point of view. Although the reviews may be conducted by topic experts, due to preconceived ideas or conclusions, they could be subject to bias. This sort of literature review can be appropriate if you have a broad topic area, are working on your own, or have time constraints.

Agarwal S, Charlesworth M, Elrakhawy M. How to write a narrative review. Anaesthesia. 2023;78(9):1162-1166. doi:10.1111/anae.16016

Green BN, Johnson CD, Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006;5(3):101-117. doi:10.1016/S0899-3467(07)60142-6.

Ferrari R. Writing narrative style literature reviews. Medical Writing. 2015 Dec 1;24(4):230-5.

Greenhalgh T, Thorne S, Malterud K. Time to challenge the spurious hierarchy of systematic over narrative reviews? European Journal of Clinical Investigation. 2018;48:e12931.

Knowledge Synthesis

[Figure: umbrella diagram showing knowledge synthesis encompassing systematic reviews, rapid reviews, meta-analyses, mapping reviews, critical reviews, scoping reviews, and mixed methods reviews]

CIHR Definition of Knowledge Syntheses:

“The contextualization and integration of research findings of individual research studies within the larger body of knowledge on the topic. A synthesis must be reproducible and transparent in its methods, using quantitative and/or qualitative methods.”  - A Guide to Knowledge Synthesis, CIHR

Grimshaw J. A Guide to Knowledge Synthesis [Internet]. CIHR. Canadian Institutes of Health Research; 2010.

Canadian Institutes of Health Research. Synthesis Resources [Internet]. CIHR. Canadian Institutes of Health Research; 2013.

Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt, Gert Jan, et al.  Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches . Journal of clinical epidemiology. 2018;99:41-52.

Kastner M, Tricco AC, Soobiah C, et al. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review . BMC Medical Research Methodology . 2012;12:114. doi:10.1186/1471-2288-12-114.

Kastner M, Antony J, Soobiah C, Straus SE, Tricco AC. Conceptual recommendations for selecting the most appropriate knowledge synthesis method to answer research questions related to complex evidence . Journal of Clinical Epidemiology . 2016;73:43-49.


Common Types of Knowledge Syntheses

  • Systematic Reviews
  • Meta-Analysis
  • Scoping Reviews
  • Rapid or Restricted Reviews
  • Clinical Practice Guidelines
  • Realist Reviews
  • Mixed Methods Reviews
  • Qualitative Synthesis
  • Narrative Synthesis

A systematic review attempts to identify, appraise and synthesize all the empirical evidence that meets pre-specified eligibility criteria to answer a given research question. Researchers conducting systematic reviews use explicit methods aimed at minimizing bias, in order to produce more reliable findings that can be used to inform decision making. (See Section 1.2 in the Cochrane Handbook for Systematic Reviews of Interventions .)

A systematic review is not the same as a traditional (narrative) review or a literature review. Unlike other kinds of reviews, systematic reviews must be as thorough and unbiased as possible, and must also make explicit how the search was conducted. Systematic reviews may or may not include a meta-analysis.

On average, a systematic review project takes a year. If your timelines are shorter, you may wish to consider other types of synthesis projects or a traditional (narrative) review. See suggested timelines for a Cochrane Review for reference.

Systematic Review Overview (UHN)

Systematic Review Overview workshop recording (UHN)

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)

Greyson D, Rafferty E, Slater L, et al. Systematic review searches must be systematic, comprehensive, and transparent: a critique of Perman et al. BMC public health . 2019;19:153.

Ioannidis J. P. (2016). The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses . The Milbank quarterly , 94 (3), 485-514.

A subset of systematic reviews. Meta-analysis is a technique that statistically combines the results of quantitative studies to produce a more precise estimate of the overall effect.
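As a concrete example of the statistic such a synthesis combines, the sketch below computes a standardized mean difference (Cohen's d with Hedges' small-sample correction) from group summary statistics, the same kind of SMD reported in the dementia QoL review above. The `hedges_g` helper and the QoL numbers are hypothetical.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) between two groups,
    with Hedges' small-sample correction applied."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction factor
    return j * d

# Hypothetical self-rated vs proxy-rated mean QoL scores: (mean, SD, n) per group
g = hedges_g(37.2, 5.1, 60, 34.5, 5.9, 60)
```

Dividing by the pooled standard deviation puts studies that used different QoL instruments on a common scale, which is what allows them to be pooled in a meta-analysis.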

"..a form of knowledge synthesis that addresses an exploratory research question aimed at mapping key concepts, types of evidence, and gaps in research related to a defined area or field by systematically searching, selecting and synthesizing existing knowledge." (Colquhoun, HL et al., 2014)

Arksey, H., & O'Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology: Theory and Practice, 8(1), 19-32. doi:10.1080/1364557032000119616.

Levac, D., Colquhoun, H. & O'Brien, K.K. Scoping studies: advancing the methodology. Implementation Sci 5, 69 (2010). https://doi.org/10.1186/1748-5908-5-69

Colquhoun, H. L., Levac, D., O'Brien, K. K., Straus, S., Tricco, A. C., Perrier, L., . . . Moher, D. (2014). Scoping reviews: Time for clarity in definition, methods, and reporting. Journal of Clinical Epidemiology, 67(12), 1291-1294. doi:10.1016/j.jclinepi.2014.03.013.

Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015 Sep;13(3):141-146.

Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: Scoping Reviews (2020 version). In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis, JBI, 2020.

Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. [Epub ahead of print] doi: 10.7326/M18-0850.

“…a type of knowledge synthesis in which systematic review processes are accelerated and methods are streamlined to complete the review more quickly than is the case for typical systematic reviews. Rapid reviews take an average of 5–12 weeks to complete, thus providing evidence within a shorter time frame required for some health policy and systems decisions.” (Tricco AC et al., 2017)

Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews . Implementation Science : IS . 2010;5:56. doi:10.1186/1748-5908-5-56.

Langlois EV, Straus SE, Antony J, King VJ, Tricco AC. Using rapid reviews to strengthen health policy and systems and progress towards universal health coverage . BMJ Global Health . 2019;4:e001178.

Tricco AC, Langlois EV, Straus SE, editors. Rapid reviews to strengthen health policy and systems: a practical guide . Geneva: World Health Organization; 2017. Licence: CC BY-NC-SA 3.0 IGO.

Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, et al. Rapid reviews versus full systematic reviews: An inventory of current methods and practice in health technology assessment. International Journal of Technology Assessment in Health Care. 2008;24(2):133-9.

“Clinical practice guidelines are systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances.” Source: Institute of Medicine. (1990). Clinical Practice Guidelines: Directions for a New Program, M.J. Field and K.N. Lohr (eds.) Washington, DC: National Academy Press. Page 38.

  • Disclosure of any author conflicts of interest

AGREE Reporting Checklist

Alonso-Coello, P., Oxman, A. D., Moberg, J., Brignardello-Petersen, R., Akl, E. A., Davoli, M., ... & Guyatt, G. H. (2016). GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 2: Clinical practice guidelines . BMJ , 353 , i2089.

Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review - a new method of systematic review designed for complex policy interventions . Journal of Health Services Research & Policy . 2005;10:21-34.

Rycroft-Malone J, McCormack B, Hutchinson AM, et al. Realist synthesis: illustrating the method for implementation research . Implementation science : IS . 2012;7:33.

Wong G, Greenhalgh T, Westhorp G, Pawson R. Realist methods in medical education research: what are they and what can they contribute? Medical Education . 2012;46(1):89-96.

"Mixed-methods systematic reviews can be defined as combining the findings of qualitative and quantitative studies within a single systematic review to address the same overlapping or complementary review questions." (Harden A, 2010)

Harden A.   Mixed-Methods Systematic Reviews: Integrating quantitative and qualitative findings .   NCDDR:FOCUS. 2010.

Lizarondo L, Stern C, Apostolo J, et al. Five common pitfalls in mixed methods systematic reviews: lessons learned . J Clin Epidemiol . 2022;148:178-183. doi:10.1016/j.jclinepi.2022.03.014

Pluye P, Hong QN. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews . Annual review of public health . 2014;35:29-45.

Pearson A, White H, Bath-Hextall F, Salmond S, Apostolo J, Kirkpatrick P. A mixed-methods approach to systematic reviews . International journal of evidence-based healthcare . 2015;13:121-131.

The Joanna Briggs Institute 2014 Reviewers Manual: Methodology for JBI Mixed Methods Systematic Reviews .

There are various methods for integrating the results from qualitative studies. "Systematic reviews of qualitative research have an important role in informing the delivery of evidence-based healthcare. Qualitative systematic reviews have investigated the culture of communities, exploring how consumers experience, perceive and manage their health and journey through the health system, and can evaluate components and activities of health services such as health promotion and community development." (Lockwood C et al., 2015)

Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt, Gert Jan, et al. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches . Journal of clinical epidemiology. 2018;99:41-52.

Ring N, Jepson R, Ritchie K. Methods of synthesizing qualitative research studies for health technology assessment . International Journal of Technology Assessment in Health Care . 2011;27:384-390.

Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation . International journal of evidence-based healthcare . 2015;13:179-187.

France EF, Cunningham M, Ring N, et al. Improving reporting of meta-ethnography: The eMERGe reporting guidance . Journal of advanced nursing . 2019.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review . BMC Med Res Methodol . 2009;9:59. Published 2009 Aug 11. doi:10.1186/1471-2288-9-59.

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews . BMC Med Res Methodol . 2008;8:45. Published 2008 Jul 10. doi:10.1186/1471-2288-8-45

Lewin S, Booth A, Glenton C, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series . Implementation science : IS . 2018;13:2.

"Narrative synthesis refers to an approach to the systematic review and synthesis of findings from multiple studies that relies primarily on the use of words and text to summarise and explain the findings of the synthesis. Whilst narrative synthesis can involve the manipulation of statistical data, the defining characteristic is that it adopts a textual approach to the process of synthesis to ‘tell the story’ of the findings from the included studies." (Popay J, 2006)

Tricco AC, Soobiah C, Antony J, et al. A scoping review identifies multiple emerging knowledge synthesis methods, but few studies operationalize the method . Journal of Clinical Epidemiology . 2016;73:19-28.

Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews . Lancaster: ESRC Research Methods Programme; 2006.

Snilstveit B, Oliver S, Vojtkova M. Narrative approaches to systematic review and synthesis of evidence for international development policy and practice . Journal of development effectiveness . 2012 Sep 1;4(3):409-29. 

Lucas PJ, Baird J, Arai L, Law C, Roberts HM. Worked examples of alternative methods for the synthesis of qualitative and quantitative research in systematic reviews . BMC medical research methodology . 2007 Dec;7(1):4.

Ryan R. Cochrane Consumers and Communication Review Group: data synthesis and analysis. June 2013.




Literature reviews and systematic reviews: what is the difference?

  • PMID: 24255144



Systematic Review vs. Literature Review: Some Essential Differences

Many budding researchers are confused about the difference between a systematic review and a literature review. As a PhD student or early career researcher, you will by now be well aware that the literature review is a crucial part of any scientific research, without which a study cannot be commenced. However, "literature review" is itself an umbrella term, and there are several types of reviews, such as the systematic literature review, that you may need to perform during your academic publishing journey, depending on their relevance to each study.

Your research goal, approach, and design will ultimately influence your choice between a systematic review and a literature review. Apart from the systematic literature review, some other common types of literature review are:

  • Narrative literature review – used to identify gaps in the existing knowledge base  
  • Scoping literature review – used to identify the scope of a particular study  
  • Integrative literature review – used to generate secondary data that upon integration can be used to define new frameworks and perspectives  
  • Theoretical literature review – used to pool all kinds of theories associated with a particular concept  

The most commonly used form of review, however, is the systematic literature review. Compared to the other types of literature reviews described above, this one requires a more rigorous and well-defined approach. The systematic literature review can be divided into two main categories: meta-analysis and meta-synthesis. Meta-analysis identifies patterns and relationships within the data using statistical procedures. Meta-synthesis, on the other hand, integrates the findings of multiple qualitative research studies and does not necessarily require statistical procedures.


Difference between systematic review and literature review

Despite this basic understanding, there may still be confusion when choosing between a systematic review and a literature review of any other kind. Since the two types of review serve a similar purpose, they are often used interchangeably and the difference between them is overlooked. To ease this confusion and smooth the decision-making process, it helps to take a closer look at the differences between them:

     
Systematic review:

  • Goal: Provides answers to a focused question, most often a clinical question
  • Methodology: Pre-specified methods that may or may not include statistical analysis but are usually reproducible; the results and conclusions are usually evidence-based
  • Content: Pre-specified criteria, search strategy, assessment of the validity of the findings, interpretation and presentation of the results, and references
  • Author limit: Three or more
  • Value: Valuable for clinicians, experts, and practitioners looking for evidence-based data

Literature review:

  • Goal: Provides a general overview of a particular topic or concept
  • Methodology: Methods are less rigorous, lack inclusion and exclusion criteria, and may follow a thematic approach; conclusions may be subjective and qualitative, based on the individual author's perspective of the data
  • Content: Introduction, methods, discussion, conclusion, and references
  • Author limit: One or more
  • Value: Valuable for a broader group of researchers and scientists looking to summarize and understand a particular topic in depth

  Tips to keep in mind when performing a literature review  

While the similarities and differences between systematic reviews and literature reviews illustrated above are helpful as an overview, here are some additional pointers to keep in mind while performing a review for your research study [4]:

  • Check the authenticity of a source thoroughly before using it in your review.  
  • Regardless of the type of review you intend to perform, it is important to give prominence to the landmark literature, the work that first addressed your topic of interest. This can be identified with a simple Google Scholar search by checking the most-cited articles.  
  • Make sure to include all the latest literature that focuses on your research question.  
  • Avoid including irrelevant data by revisiting your aims, objectives, and research questions as often as possible during the review process.  
  • If you intend to submit your review to a peer-reviewed journal, make sure to follow a defined structure based on your selected type of review.  
  • If it is a systematic literature review, make sure the research question is clear and crisp and framed in a manner amenable to quantitative analysis.  
  • If it is a literature review of any other kind, include enough checkpoints to minimize bias in your conclusions. You can use an integrative approach to show how different data points fit together; it is equally important to describe data that do not fit together in order to produce a balanced review. This can also help identify gaps and pave the way for designing future studies on the topic.  

We hope this article has helped you understand the basics of a literature review and the distinction between a systematic review and a literature review.

Q: When to do a systematic review?

A systematic review is conducted to synthesize and analyze existing research on a specific question. It’s valuable when a comprehensive assessment of available evidence is required to answer a well-defined research question. Systematic reviews follow a predefined protocol, rigorous methodology, and aim to minimize bias. They’re especially useful for informing evidence-based decisions in healthcare and policy-making.

Q: When to do a literature review?

A literature review surveys existing literature on a topic, providing an overview of key concepts and findings. It’s conducted when exploring a subject, identifying gaps, and contextualizing research. Literature reviews are valuable at the beginning of a study to establish the research landscape and justify the need for new research.

Q: What is the difference between a literature review and a scoping review?

A literature review summarizes existing research on a topic, while a scoping review maps the literature to identify research gaps and areas for further investigation. While both assess existing literature, a scoping review tends to have broader inclusion criteria and aims to provide an overview of the available research, helping researchers understand the breadth of a topic before narrowing down a research question.

Q: What is the difference between a systematic literature review and a meta-analysis?

A systematic literature review aims to comprehensively identify, select, and analyze all relevant studies on a specific research question using a rigorous methodology. It summarizes findings qualitatively. On the other hand, a meta-analysis is a statistical technique applied within a systematic review. It involves pooling and analyzing quantitative data from multiple studies to provide a more precise estimate of an effect size. In essence, a meta-analysis is a quantitative synthesis that goes beyond the qualitative summary of a systematic literature review.
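The pooling step described above can be sketched numerically. Below is a minimal fixed-effect, inverse-variance model: each study's effect estimate is weighted by the reciprocal of its variance, so more precise studies pull the pooled estimate harder. The function name and the three study estimates are illustrative assumptions, not data from any actual review.

```python
def pool_fixed_effect(effects, variances):
    """Fixed-effect meta-analysis via inverse-variance weighting.

    Returns the pooled effect and the variance of the pooled effect.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance shrinks as studies accumulate
    return pooled, pooled_var


# Three hypothetical studies: standardized mean differences and their variances.
effects = [0.40, 0.55, 0.30]
variances = [0.04, 0.09, 0.02]

pooled, var = pool_fixed_effect(effects, variances)
half_width = 1.96 * var ** 0.5  # 95% confidence interval half-width
print(f"pooled SMD = {pooled:.3f}, "
      f"95% CI = [{pooled - half_width:.3f}, {pooled + half_width:.3f}]")
```

Note that the pooled confidence interval is narrower than any single study's, which is the "more precise estimate" a meta-analysis aims for; real analyses would also test for heterogeneity and may use a random-effects model instead.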

References:

  1. Types of Literature Review. Business Research Methodology. https://research-methodology.net/research-methodology/types-literature-review/
  2. Mellor, L. The difference between a systematic review and a literature review. Covidence. https://www.covidence.org/blog/the-difference-between-a-systematic-review-and-a-literature-review
  3. Basu, G. Literature Review vs Systematic Review. SJSU Research Guides. https://libguides.sjsu.edu/LitRevVSSysRev/definitions
  4. Jansen, D., Phair, D. Writing A Literature Review: 7 Common (And Costly) Mistakes To Avoid. Grad Coach, June 2021. https://gradcoach.com/literature-review-mistakes/


Types of Literature Reviews

Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91–108. doi:10.1111/j.1471-1842.2009.00848.x

  • Critical review — Aims to demonstrate that the writer has extensively researched the literature and critically evaluated its quality; goes beyond mere description to include a degree of analysis and conceptual innovation; typically results in a hypothesis or model. Search: seeks to identify the most significant items in the field. Quality assessment: no formal assessment; attempts to evaluate according to contribution. Synthesis: typically narrative, perhaps conceptual or chronological. Analysis: significant component; seeks to identify conceptual contribution to embody existing or derive new theory.
  • Literature review — Generic term: published materials that provide an examination of recent or current literature; can cover a wide range of subjects at various levels of completeness and comprehensiveness; may include research findings. Search: may or may not be comprehensive. Quality assessment: may or may not be included. Synthesis: typically narrative. Analysis: may be chronological, conceptual, thematic, etc.
  • Mapping review / systematic map — Maps out and categorizes existing literature, from which to commission further reviews and/or primary research by identifying gaps in the research literature. Search: completeness determined by time/scope constraints. Quality assessment: none formal. Synthesis: may be graphical and tabular. Analysis: characterizes quantity and quality of literature, perhaps by study design and other key features; may identify need for primary or secondary research.
  • Meta-analysis — Technique that statistically combines the results of quantitative studies to provide a more precise estimate of effect. Search: aims for exhaustive, comprehensive searching; may use a funnel plot to assess completeness. Quality assessment: may determine inclusion/exclusion and/or sensitivity analyses. Synthesis: graphical and tabular with narrative commentary. Analysis: numerical analysis of measures of effect, assuming absence of heterogeneity.
  • Mixed studies review / mixed methods review — Refers to any combination of methods where one significant component is a literature review (usually systematic); within a review context, a combination of review approaches, for example combining quantitative with qualitative research, or outcome with process studies. Search: requires either a very sensitive search to retrieve all studies or separately conceived quantitative and qualitative strategies. Quality assessment: requires either a generic appraisal instrument or separate appraisal processes with corresponding checklists. Synthesis: typically both components presented as narrative and in tables; may also employ graphical means of integrating quantitative and qualitative studies. Analysis: may characterize both literatures and look for correlations between characteristics, or use gap analysis to identify aspects absent in one literature but present in the other.
  • Overview — Generic term: summary of the [medical] literature that attempts to survey the literature and describe its characteristics. Search: may or may not be comprehensive (depends whether systematic overview or not). Quality assessment: may or may not be included (depends whether systematic overview or not). Synthesis: depends on whether systematic or not; typically narrative but may include tabular features. Analysis: may be chronological, conceptual, thematic, etc.
  • Qualitative systematic review / qualitative evidence synthesis — Method for integrating or comparing the findings from qualitative studies; looks for 'themes' or 'constructs' that lie in or across individual qualitative studies. Search: may employ selective or purposive sampling. Quality assessment: typically used to mediate messages, not for inclusion/exclusion. Synthesis: qualitative, narrative synthesis. Analysis: thematic analysis, may include conceptual models.
  • Rapid review — Assessment of what is already known about a policy or practice issue, using systematic review methods to search and critically appraise existing research. Search: completeness determined by time constraints. Quality assessment: time-limited formal assessment. Synthesis: typically narrative and tabular. Analysis: quantities of literature and overall quality/direction of effect of literature.
  • Scoping review — Preliminary assessment of the potential size and scope of available research literature; aims to identify the nature and extent of research evidence (usually including ongoing research). Search: completeness determined by time/scope constraints; may include research in progress. Quality assessment: none formal. Synthesis: typically tabular with some narrative commentary. Analysis: characterizes quantity and quality of literature, perhaps by study design and other key features; attempts to specify a viable review.
  • State-of-the-art review — Tends to address more current matters, in contrast to other combined retrospective and current approaches; may offer new perspectives. Search: aims for comprehensive searching of current literature. Quality assessment: none formal. Synthesis: typically narrative, may have tabular accompaniment. Analysis: current state of knowledge and priorities for future investigation and research.
  • Systematic review — Seeks to systematically search for, appraise, and synthesize research evidence, often adhering to guidelines on the conduct of a review. Search: aims for exhaustive, comprehensive searching. Quality assessment: may determine inclusion/exclusion. Synthesis: typically narrative with tabular accompaniment. Analysis: what is known and recommendations for practice; what remains unknown, uncertainty around findings, and recommendations for future research.
  • Systematic search and review — Combines the strengths of a critical review with a comprehensive search process; typically addresses broad questions to produce a 'best evidence synthesis'. Search: aims for exhaustive, comprehensive searching. Quality assessment: may or may not be included. Synthesis: minimal narrative, tabular summary of studies. Analysis: what is known, recommendations for practice, and limitations.
  • Systematized review — Attempts to include elements of the systematic review process while stopping short of a systematic review; typically conducted as a postgraduate student assignment. Search: may or may not be comprehensive. Quality assessment: may or may not be included. Synthesis: typically narrative with tabular accompaniment. Analysis: what is known, uncertainty around findings, and limitations of methodology.
  • Umbrella review — Specifically refers to a review compiling evidence from multiple reviews into one accessible and usable document; focuses on a broad condition or problem for which there are competing interventions, and highlights reviews that address these interventions and their results. Search: identification of component reviews, but no search for primary studies. Quality assessment: of studies within component reviews and/or of the reviews themselves. Synthesis: graphical and tabular with narrative commentary. Analysis: what is known and recommendations for practice; what remains unknown and recommendations for future research.
Source: UCLA Library Research Guides, Systematic Reviews. https://guides.library.ucla.edu/systematicreviews (last updated Jul 23, 2024)


Literature Review


Systematic vs Literature

Systematic reviews and literature reviews are commonly confused. The main difference between the two is that systematic reviews answer a focused question whereas literature reviews contextualize a topic.


Kysh, Lynn (2013): Difference between a systematic review and a literature review. Available at: https://figshare.com/articles/Difference_between_a_systematic_review_and_a_literature_review/766364


Another Writing Tip!

Review not just what scholars are saying, but how they are saying it. Some questions to ask:

  • How are they organizing their ideas?
  • What methods have they used to study the problem?
  • What theories have been used to explain, predict, or understand their research problem?
  • What sources have they cited to support their conclusions?
  • How have they used non-textual elements [e.g., charts, graphs, figures, etc.] to illustrate key points?

When you begin to write your literature review section, you'll be glad you dug deeper into how the research was designed and constructed because it establishes a means for developing more substantial analysis and interpretation of the research problem.

Hart, Chris.  Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998.

