
Critically appraising qualitative research

  • Ayelet Kuper, assistant professor,1
  • Lorelei Lingard, associate professor,2
  • Wendy Levinson, Sir John and Lady Eaton professor and chair3
  • 1 Department of Medicine, Sunnybrook Health Sciences Centre, and Wilson Centre for Research in Education, University of Toronto, 2075 Bayview Avenue, Room HG 08, Toronto, ON, Canada M4N 3M5
  • 2 Department of Paediatrics and Wilson Centre for Research in Education, University of Toronto and SickKids Learning Institute; BMO Financial Group Professor in Health Professions Education Research, University Health Network, 200 Elizabeth Street, Eaton South 1-565, Toronto
  • 3 Department of Medicine, Sunnybrook Health Sciences Centre
  • Correspondence to: A Kuper ayelet94{at}post.harvard.edu

Six key questions will help readers to assess qualitative research

Summary points

Appraising qualitative research is different from appraising quantitative research

Qualitative research papers should show appropriate sampling, data collection, and data analysis

Transferability of qualitative research depends on context and may be enhanced by using theory

Ethics in qualitative research goes beyond review boards’ requirements to involve complex issues of confidentiality, reflexivity, and power

Over the past decade, readers of medical journals have gained skills in critically appraising studies to determine whether the results can be trusted and applied to their own practice settings. Criteria have been designed to assess studies that use quantitative methods, and these are now in common use.

In this article we offer guidance for readers on how to assess a study that uses qualitative research methods by providing six key questions to ask when reading qualitative research (box 1). However, the thorough assessment of qualitative research is an interpretive act and requires informed reflective thought rather than the simple application of a scoring system.

Box 1 Key questions to ask when reading qualitative research studies

Was the sample used in the study appropriate to its research question?

Were the data collected appropriately?

Were the data analysed appropriately?

Can I transfer the results of this study to my own setting?

Does the study adequately address potential ethical issues, including reflexivity?

Overall: is what the researchers did clear?

One of the critical decisions in a qualitative study is whom or what to include in the sample—whom to interview, whom to observe, what texts to analyse. An understanding that qualitative research is based in experience and in the construction of meaning, combined with the specific research question, should guide the sampling process. For example, a study of the experience of survivors of domestic violence that examined their reasons for not seeking help from healthcare providers might focus on interviewing a …

10 Tips for Reviewing a Qualitative Paper

Peer Reviewer Resources

Editor’s Note: The following post is part of a series of Peer Reviewer Resources written by some of Academic Medicine’s top peer reviewers.

By: Carol-anne Moulton, MD, FRACS, MEd, PhD, Department of Surgery, University of Toronto, and Priyanka Patel, MSc, Wilson Center, University Health Network, University of Toronto

This is a tough task. Let us say that off the bat. We have been involved in qualitative research for a long time now and the complexity of it never ceases to amaze us…so there is no “how to” guide that will suit all qualitative research.

Having said that, we think there are some guiding principles that can help us begin to understand the rigor of qualitative research and consequently the review process.

  • Question/Purpose: This should be clearly stated, as in all research studies. There are generally no hypothesis statements in qualitative research, as we are exploring rather than testing. Ideally, the questions are framed as “how” and “why” questions, rather than “how often,” “is there a difference,” or “what are the factors” questions.
  • Rationale of study: We like to make sure that the study was built upon a well-justified and referenced rationale. It may not be our area of study, but we think it is important for the authors to provide a rationale for their study by building up arguments from the literature. Theories or pre-existing frameworks that informed the research question should be described up front. Some work claims to be atheoretical. Traditionally, grounded theorists claimed their work to be atheoretical, but nowadays many grounded theorists acknowledge being informed by particular perspectives, frameworks, or theories. This should be made explicit.
  • Methodology described : What type of research was this? Ethnography? Grounded theory? Phenomenology? Discourse analysis? It’s important that the researchers describe their research journey in a clear and detailed enough way to give the readers an understanding for how the analyses evolved. This should include an explanation of why the methodological approach was used, as well as the key principles from the methodology that guided the study.
  • Epistemology : Researchers come from all paradigms and it is important to identify within which paradigm the authors are situated. Sometimes they might state deliberately “We have used constructivist grounded theory,” but it might be a matter of reading between the lines to figure it out. If from the positivist paradigm, authors might use the terms valid or verified to imply they are making statements of truth. The paradigm helps us understand what the authors mean by “truth” and informs how they went about creating knowledge and constructing meaning from their results.
  • Context described satisfactorily: Qualitative research is not meant to imply generalizability. In fact, we celebrate the importance of context. We recognize that the phenomena we study are often different in meaningful ways when taken to a different context. For example, the experiences of physicians coping with burnout may be unique to a specialty and/or institution (e.g. the type of systems-level support available, or the differing demands of academic and community institutions). A good qualitative study should therefore describe sufficient detail about the context (e.g. the physical, cultural, social, and/or environmental context) in which the research was conducted to allow the reader to judge whether the results might be transferable to another (possibly their own) setting.
  • Data collection and analysis : Do they provide enough information to understand the collection and analysis process? As reviewers, we often ask ourselves whether the data collection and analyses are clear and detailed enough for us to gain a sense of how the analysis of the phenomena evolved. For example, who made up the research team? Because most knowledge is viewed as a co-construction between researcher and participants, each individual (e.g. a sociologist versus a surgeon) will analyze the results differently, but both meaningfully, based on their unique position and perspective.
  • Sampling strategies: These are very important for understanding whether the question was aligned with the data collection process. The sample shapes the type of results achieved and helps the reader understand from which perspective the data were collected. Common sampling strategies include theoretical sampling and negative case sampling. Researchers may sample theoretically by selecting participants who in some way inform their understanding of an emergent theme or idea. Negative case sampling may be used to search for instances that challenge the emergent patterns in the data, for the purpose of refining the analysis; it helps ensure that the researchers are not selecting only cases that confirm their findings.
  • Analysis elevated beyond description: Results might be descriptive in nature (e.g. “One surgeon felt upset and isolated after he experienced a hernia complication in his first month of independent practice”) or they might be elevated to create more abstract concepts and ideas removed from the primary dataset (e.g. characterizing the phases of surgeons’ reactions to complications). In either case, the researcher should ensure that the way they present their findings is aligned with the principles of the methodology used.
  • Proof of an iterative process : Qualitative research is usually done in an iterative manner where ideas and concepts are built up over time and occur through cycles of data collection and data analysis. This is demonstrated through statements like “Our interview template was altered over time to reflect the emergent ideas through the analysis process,” or “As we became interested in this concept, we began to sample for…”.
  • Reflexivity: This is tough to understand, especially for those of us who come from the positivist paradigm where it is of utmost importance to “prove” that the results are “true” and untainted by bias. The aim of qualitative research is to understand meaning rather than to assume that there is a singular truth or reality. A good qualitative researcher recognizes that the way they make sense of and attach meaning to the data is partly shaped by the characteristics of the researcher (e.g. age, gender, social class, ethnicity, professional status) and the assumptions they hold. The researcher should make explicit the perspectives they are coming from so that the readers can interpret the data appropriately. Consider a study exploring the pressures surgical trainees experience in residency, conducted by a staff surgeon versus a non-surgical anthropologist. You can imagine the findings may differ based on the types of questions the two interviewers decide to ask, what they each find interesting or important, or how comfortable the resident feels discussing sensitive information with an outsider (anthropologist) as opposed to an insider (surgeon). We like to see that a researcher has reflected on how her or his unique position, preconceptions, and biases influenced the findings.



Criteria for Good Qualitative Research: A Comprehensive Review

  • Regular Article
  • Open access
  • Published: 18 September 2021
  • Volume 31, pages 679–689 (2022)


  • Drishti Yadav, ORCID: orcid.org/0000-0002-2974-0323


This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research across a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate on criteria for rigorous research were identified. Then, the references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features of qualitative research that permit readers to pass judgment on its quality and to recognize it as good research when those features are adequately realized. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the product of socio-institutional procedures and existing paradigmatic conventions. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor desirable. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with the applicable norms and decisive factors for evaluating qualitative research from within its theoretical and methodological framework of origin.


Introduction

“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010 , p. 837)

What constitutes good qualitative research is highly debatable. Qualitative research encompasses numerous methods established on diverse philosophical perspectives. Bryman et al. (2008, p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well-known and widely agreed, this is not the case for qualitative research.” Hence, the question “how to evaluate the quality of qualitative research” has been continuously debated. These debates have taken place across many areas of science and technology. Examples include various areas of psychology: general psychology (Madill et al., 2000), counseling psychology (Morrow, 2005), and clinical psychology (Barker & Pistrang, 2005), as well as other social science disciplines: social policy (Bryman et al., 2008), health research (Sparkes, 2001), business and management research (Johnson et al., 2006), information systems (Klein & Myers, 1999), and environmental studies (Reid & Gough, 2000). In the literature, these debates are motivated by the view that the blanket application of criteria for good qualitative research developed around the positivist paradigm is improper. Such debates are grounded in the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000; Schwandt, 1996). This methodological diversity has led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of determining the measures to assess the quality of research is not a new phenomenon, especially when the traditional triad of objectivity, reliability, and validity (Spencer et al., 2004) is not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008; Lather, 2004). Indeed, Howe (2004) claims that the prevailing paradigm in educational research is scientifically based experimental research. Assumptions about the preeminence of quantitative research can undermine the worth and usefulness of qualitative research by neglecting the importance of matching the choice of methodology to the purpose of the research, the research paradigm, and the epistemological stance of the researcher. Researchers have been cautioned about this in “Paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000).

In general, qualitative research tends to come from a very different paradigmatic stance and thus demands distinctive criteria for evaluating good research and for the varieties of research contributions that can be made. This review presents a series of evaluative criteria for qualitative researchers, arguing that their choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). It aims to assist researchers in identifying some of the indispensable features, or markers, of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to examine studies dealing with the critical assessment of qualitative research across diverse paradigmatic stances. Unlike existing reviews, this review also suggests some critical directions for improving the quality of qualitative research under different epistemological and ontological perspectives. It is also intended to provide guidelines that accelerate future developments and dialogue among qualitative researchers about assessing qualitative research.

The rest of this review article is structured as follows: Sect. Methods describes the method followed for performing this review. Section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies, followed by a summary of strategies to improve the quality of qualitative research in Sect. Improving Quality: Strategies. Section How to Assess the Quality of the Research Findings? provides details on how to assess the quality of research findings. After that, some quality checklists (as tools to evaluate quality) are discussed in Sect. Quality Checklists: Tools for Assessing the Quality. Finally, the review ends with the concluding remarks presented in Sect. Conclusions, Future Directions and Outlook, which also presents some prospects for enhancing the quality and usefulness of qualitative research in the social and techno-scientific research community.

Methods

For this review, a comprehensive literature search was performed across several databases using generic search terms such as qualitative research, criteria, etc. The following databases were chosen for the literature search based on the high number of results they returned: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using the Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. Synonyms for these keywords were collected and arranged in a logical structure (see Table 1). All publications in journals and conference proceedings from 1950 to 2021 were considered for the search. Other articles extracted from the references of the papers identified in the electronic search were also included. A large number of publications on qualitative research were retrieved during the initial screening. Hence, to restrict the search to publications whose main focus is criteria for good qualitative research, an inclusion criterion was incorporated into the search string.

From the selected databases, the search retrieved a total of 765 publications. Then, the duplicate records were removed. After that, based on the title and abstract, the remaining 426 publications were screened for their relevance by using the following inclusion and exclusion criteria (see Table 2 ). Publications focusing on evaluation criteria for good qualitative research were included, whereas those works which delivered theoretical concepts on qualitative research were excluded. Based on the screening and eligibility, 45 research articles were identified that offered explicit criteria for evaluating the quality of qualitative research and were found to be relevant to this review.
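The Boolean keyword combination and the de-duplication step described above can be sketched in a few lines of code. This is an illustrative sketch only: the synonym lists and record fields below are hypothetical stand-ins, not the actual terms from Table 1.

```python
# Sketch of the search-string construction and de-duplication steps.
# Synonym groups are illustrative placeholders for the terms in Table 1.
keyword_groups = [
    ["qualitative research", "qualitative study"],
    ["criteria", "quality", "assessment", "validity"],
]

def build_query(groups):
    """Combine synonyms with OR within a group and AND across groups."""
    return " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in groups
    )

query = build_query(keyword_groups)
# Produces a database query of the form:
# ("qualitative research" OR "qualitative study") AND ("criteria" OR ...)

def deduplicate(records):
    """Drop duplicate records retrieved from multiple databases,
    keyed on a normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = rec["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

In the review itself, this kind of de-duplication reduced the 765 retrieved records to the 426 that were then screened by title and abstract.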

Figure 1 illustrates the complete review process in the form of a PRISMA flow diagram. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is employed in systematic reviews to refine the quality of reporting.

Figure 1: PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records.

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, which are summarized in Table 3. The criteria outlined in Table 4 capture the various approaches to evaluating and assessing the quality of qualitative work. The entries in Table 4 are based on Tracy’s “Eight big-tent criteria for excellent qualitative research” (Tracy, 2010). Tracy argues that high-quality qualitative work is marked by the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and by the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles for assessing the quality of a qualitative study (Mays & Pope, 2020). Nassaji (2020) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles that bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net that encompasses the researcher’s epistemological, ontological, and methodological premises is referred to as a paradigm, or an interpretive structure: a “basic set of beliefs that guides action” (Guba, 1990). Four major interpretive paradigms structure qualitative research: positivist and postpositivist; constructivist-interpretive; critical (Marxist, emancipatory); and feminist-poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, quantitative conceptualizations of reliability and validity have been shown to be incompatible with the evaluation of qualitative research (Horsburgh, 2003). In addition, a series of questions has been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) in the meticulous assessment and endorsement of qualitative research (Morse, 2003). Hammersley (2007) also suggests that guiding principles for qualitative research are advantageous, but that methodological pluralism should not simply be assumed to hold for all qualitative approaches. Seale (1999) also points out the significance of methodological awareness in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the aftermath of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, the researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is “How can qualitative researchers ensure that the abovementioned quality criteria are met?” Lincoln and Guba (1986) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016; Shenton, 2004) have also presented such strategies. A brief description of these strategies is given in Table 6.

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021 ). In general, the guiding principle pertaining to generalizability speaks about inducing and comprehending knowledge to synthesize interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.

Figure 2 shows the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and the application of results to future thinking, study, and practice (Johnson et al., 2020). The synergy and interrelationship of these components signify their roles at different stages of a qualitative research study.

Figure 2: Essential elements of a conceptual framework.

In a nutshell, to assess the rationale of a study, its conceptual framework and research question(s), quality criteria must take account of the following: lucid context for the problem statement in the introduction; well-articulated research problems and questions; precise conceptual framework; distinct research purpose; and clear presentation and investigation of the paradigms. These criteria would expedite the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances the confirmability of the write-up of the findings. The use of expressions such as “80% of all respondents agreed that” or “only one of the interviewees mentioned that” may also quantify qualitative findings (Stenfors et al., 2020). On the other hand, a persuasive argument for why such quantification may not strengthen the research has also been made (Monrouxe & Rees, 2020). Further, the Discussion and Conclusion sections of an article also serve as robust markers of high-quality qualitative research, as elucidated in Table 8.
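To make concrete how such quantified expressions arise from coded data, here is a minimal sketch. The participant IDs and themes are hypothetical, and a real qualitative analysis would of course rest on much richer coding than a set of theme labels per transcript.

```python
from collections import Counter

# Hypothetical coded data: each interviewee mapped to the themes
# identified in their transcript during analysis.
coded_transcripts = {
    "P1": {"workload", "isolation"},
    "P2": {"workload"},
    "P3": {"workload", "support"},
    "P4": {"isolation"},
    "P5": {"workload"},
}

# Tally how many interviewees mentioned each theme at least once.
theme_counts = Counter()
for themes in coded_transcripts.values():
    theme_counts.update(themes)

n = len(coded_transcripts)
for theme, count in theme_counts.most_common():
    print(f'{count} of {n} interviewees ({100 * count // n}%) mentioned "{theme}"')
```

With this toy data, the "workload" theme yields exactly the kind of statement quoted above: 4 of 5 interviewees (80%) mentioned it.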

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and without regard for the research context, these checklists may be counterproductive. I suggest that such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research; however, given the enormous variation in authors’ theoretical and philosophical contexts, heavy reliance on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below:

The most commonly used framework is the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007). Some journals recommend that authors follow this framework when submitting articles.

Standards for Reporting Qualitative Research (SRQR) is another checklist that has been created particularly for medical education (O’Brien et al., 2014 ).

Also, Tracy ( 2010 ) and Critical Appraisal Skills Programme (CASP, 2021 ) offer criteria for qualitative research relevant across methods and approaches.

Further, researchers have also outlined different criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021 ) provides a quick reference to specific questions to address different elements of high-quality qualitative research.

Conclusions, Future Directions, and Outlook

This work presents a broad review of the criteria for good qualitative research. In addition, it presents an exploratory analysis of the essential elements of qualitative research that can enable readers of qualitative work to judge it as good research when adequately applied. In this review, some of the essential markers that indicate high-quality qualitative research have been highlighted. I scope them narrowly to achieving rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal, one-size-fits-all guideline for evaluating the quality of qualitative research does not exist; in other words, there is no single set of guidelines common to all qualitative researchers. In unison, this review reinforces that each qualitative approach should be treated on its own terms, on account of its distinctive features and its epistemological and disciplinary position. Because the worth of qualitative research is sensitive to the specific context and the paradigmatic stance taken, researchers should themselves analyze which approaches can, and must, be tailored to suit the distinct characteristics of the phenomenon under investigation. Although this article does not claim to offer a magic bullet or a one-stop solution for dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist researchers in improving their qualitative studies. This work provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to consider when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. Bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip researchers as well as reviewers to make well-informed judgments about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of the research to the research objectives, questions, and design, through the theoretical foundations, and from the approaches to collecting data to analyzing the results and deriving inferences) frequently improves the quality of a qualitative study.

Prospects: A Road Ahead for Qualitative Research

Irrefutably, qualitative research is a vivacious and evolving discipline wherein different epistemological and disciplinary positions have their own characteristics and importance. Unsurprisingly, owing to its evolving and varied features, no consensus has been reached to date. Researchers have raised various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021; McGinley et al., 2021). The following are some prospects and recommendations put forward towards the maturation of qualitative research and its quality evaluation:

In general, most manuscript and grant reviewers are not qualitative experts and are therefore likely to prefer a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to apply the same approaches and standards to all qualitative research. Future work should therefore focus on educating researchers and reviewers to evaluate qualitative research from within the appropriate theoretical and methodological context.

There is an urgent need to revisit and critically assess some well-known and widely accepted tools (including checklists such as COREQ and SRQR) to interrogate their applicability in different settings, along with their epistemological ramifications.

Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political arenas.

Frequent debates on the use of evaluative criteria are required to resolve open issues, including the applicability of a single set of criteria across disciplines. Such debates would not only benefit qualitative researchers themselves but, more importantly, help augment the well-being and vitality of the entire discipline.

To conclude, I speculate that these criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialogue and debate – about criteria for excellent qualitative research and the underpinnings of the discipline more broadly – and thereby help improve the quality of qualitative studies. Further, I anticipate that this review will help researchers reflect on the quality of their own research and substantiate their research designs, and help reviewers assess qualitative research for journals. On a final note, I pinpoint the need for a framework (encompassing the prerequisites of a qualitative study) formulated through the cohesive efforts of qualitative researchers from different disciplines with different theoretic-paradigmatic origins. I believe that tailoring such a framework of guiding principles would pave the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the future of socio-techno-educational research.

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.

Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.

CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.

Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445

Guba, E. G. (Ed.). (1990). The paradigm dialog. Sage Publications.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.

Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.

Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and implementation (4th ed.). Jossey-Bass.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.

Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.

O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.

Open access funding provided by TU Wien (TUW).

Author information

Drishti Yadav, Faculty of Informatics, Technische Universität Wien, 1040, Vienna, Austria

Correspondence to Drishti Yadav.

The author declares no conflict of interest.


Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31 , 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0

Accepted: 28 August 2021 | Published: 18 September 2021 | Issue Date: December 2022

DOI: https://doi.org/10.1007/s40299-021-00619-0

Keywords: Qualitative research; Evaluative criteria


What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari . Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

Examples of qualitative research questions include:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?


Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Qualitative research approaches

  • Grounded theory: researchers collect rich data on a topic of interest and develop theories.
  • Ethnography: researchers immerse themselves in groups or organizations to understand their cultures.
  • Action research: researchers and participants collaboratively link theory to practice to drive social change.
  • Phenomenological research: researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences.
  • Narrative research: researchers examine how stories are told to understand how participants perceive and make sense of their experiences.

Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Each of these research approaches involves using one or more data collection methods . These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys : distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, in a study of a company’s culture you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.
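Steps 3–5 of this workflow can be illustrated with a minimal sketch. The responses, codes, and keyword lists below are entirely hypothetical, and keyword matching here only stands in for the researcher's interpretive judgement, which it cannot replace; in practice coding is usually done in dedicated qualitative data analysis software.

```python
# Step 3: a (hypothetical) code system mapping each code to indicative keywords.
code_system = {
    "workload": ["busy", "overtime", "hours"],
    "support": ["mentor", "help", "team"],
    "recognition": ["praise", "bonus", "ignored"],
}

# Step 5: codes linked together into broader, overarching themes.
themes = {
    "job_demands": ["workload"],
    "work_environment": ["support", "recognition"],
}

def assign_codes(response: str) -> list[str]:
    """Step 4: tag one response with every code whose keywords appear in it."""
    text = response.lower()
    return [code for code, keywords in code_system.items()
            if any(word in text for word in keywords)]

def tally_themes(responses: list[str]) -> dict[str, int]:
    """Count how often each theme's codes occur across all responses."""
    counts = {theme: 0 for theme in themes}
    for response in responses:
        for code in assign_codes(response):
            for theme, codes in themes.items():
                if code in codes:
                    counts[theme] += 1
    return counts

responses = [
    "Constant overtime, but my mentor helps a lot.",
    "My effort is ignored; the hours are long.",
]
print(tally_themes(responses))  # → {'job_demands': 2, 'work_environment': 2}
```

Because new codes typically emerge while coding, the code system above would be revised iteratively rather than fixed in advance.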

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative data analysis

  • Content analysis: to describe and categorize common words, phrases, and ideas in qualitative data. A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
  • Thematic analysis: to identify and interpret patterns and themes in qualitative data. A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
  • Textual analysis: to examine the content, structure, and design of texts. A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
  • Discourse analysis: to study communication and how language is used to achieve effects in specific contexts. A political scientist could use discourse analysis to study how politicians generate trust in election campaigns.
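As a toy illustration of the content-analysis example, a word-frequency count is one (very crude) quantitative starting point for describing the language used across texts. The app descriptions below are invented, and real content analysis involves far more interpretive work than counting words.

```python
from collections import Counter
import re

def word_frequencies(texts):
    """Tokenize texts into lowercase words and count occurrences across all of them."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

# Hypothetical descriptions of therapeutic apps.
descriptions = [
    "Calm your mind with guided meditation.",
    "Guided breathing to calm anxiety.",
]
freq = word_frequencies(descriptions)
print(freq.most_common(2))  # → [('calm', 2), ('guided', 2)]
```

In practice such counts would only inform, not replace, the researcher's categorization of words and phrases into meaningful categories.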

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.

Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.


Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

Bhandari, P. (2023, June 22). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved August 12, 2024, from https://www.scribbr.com/methodology/qualitative-research/

  • Open access
  • Published: 27 May 2020

How to use and assess qualitative research methods

  • Loraine Busetto   ORCID: orcid.org/0000-0002-9228-7875 1 ,
  • Wolfgang Wick 1 , 2 &
  • Christoph Gumbinger 1  

Neurological Research and Practice volume  2 , Article number:  14 ( 2020 ) Cite this article


This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in "research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)" [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 , 8 , 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as "a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 , 10 , 11 , 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

figure 1

Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews, the focus on the different (blocks of) questions may differ, and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer them, or out of concern for the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format, as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing unexpected topics to emerge and be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by their nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped; sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews used to explore participants’ expertise and experiences, including how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (or, to a lesser extent, heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and less opportunity for each individual to contribute. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Finally, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment of whether, or to what extent, the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. These decisions should be documented as they are made and critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – bearing in mind that this is not the same as actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or know to be in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal, to the researchers as well as to the focus group members themselves, discrepancies (between SOPs and actual care, or between different physicians) and motivations that they might not have been aware of. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example to make sure that all participants feel safe to disclose sensitive or potentially problematic information and that the discussion is not dominated by (senior) physicians. The resulting combination of data collection methods is shown in Fig.  2 .

figure 2

Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described in this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be converted into protocols and transcripts (see Fig.  3 ). Interviews and focus groups can be transcribed verbatim , with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded , that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with ‘theoretical’ terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is usually performed using qualitative data management software, the most common being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

figure 3

From data collection to data analysis

Attributions for icons: see Fig. 2 , also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 , 25 , 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

figure 4

Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics on which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and checklists have been developed for qualitative research, varying in focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, and the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and the population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori , but when the research team determines that saturation has been reached [ 29 , 30 ].
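
The iterative collect-analyse cycle described above can be sketched as a loop. This is only an illustration of the logic: both functions passed in are hypothetical placeholders for real research activities (recruiting participants, coding and categorising data), not anything automatable.

```python
# Sketch of iterative sampling until saturation. The callables
# `recruit_batch` and `analyse` stand in for real research work:
# collecting another batch of data and qualitatively analysing
# everything collected so far.

def study_until_saturation(recruit_batch, analyse, batch_size=5, max_rounds=20):
    """Alternate data collection and analysis until no new themes emerge."""
    data, known_themes = [], set()
    for _ in range(max_rounds):
        batch = recruit_batch(batch_size)   # e.g. five more interviews
        data.extend(batch)
        themes = analyse(data)              # coding and categorising
        if themes <= known_themes:          # no new (relevant) information
            return data, themes             # saturation reached
        known_themes |= themes
    return data, known_themes               # stopped by resource limit instead
```

The `max_rounds` guard mirrors the practical reality that resources, not only saturation, can end data collection; a report should state which of the two applied.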

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this are pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process, when a common approach must be defined, including the establishment of a useful coding list (or tree) and a shared meaning of individual codes [ 23 ]. An initial subset of transcripts, or all of them, can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 , 32 , 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 , 38 , 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding approach overlaps between two or more coders. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that such scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but this is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants from those who collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for the interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
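
For readers who do encounter such scores: agreement between two coders on categorical code assignments is typically reported as Cohen’s kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the calculation in Python, with hypothetical code labels and assignments:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical code assignments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of segments coded identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten transcript segments by two coders
a = ["delay", "delay", "SOP", "comm", "delay", "SOP", "comm", "comm", "delay", "SOP"]
b = ["delay", "SOP",   "SOP", "comm", "delay", "SOP", "comm", "delay", "delay", "SOP"]
print(round(cohens_kappa(a, b), 2))  # prints 0.7
```

As the text notes, such a number says little by itself; if reported, it should be accompanied by an explanation of what the (dis)agreement meant for the analysis.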

Not being quantitative research

Being qualitative rather than quantitative research should not in itself be used as an assessment criterion if it is applied irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research, in which case the same criterion should also be applied to quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1 . We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised Controlled Trial

SOP: Standard Operating Procedure

SRQR: Standards for Reporting Qualitative Research

Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend [Qualitative research: useful, indispensable and challenging]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 5–12). Houten: Bohn Stafleu van Loghum.


Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches . London: Sage.

Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country aboriginal patient journeys. Australian Journal of Rural Health, 22 (3), 109–113.


Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Science, 8 (1), 1–12.

Howick, J., Chalmers, I., Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM levels of evidence (introductory document) . Oxford Centre for Evidence Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/ .

Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22 (2), 107–118.

May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage. In Alternatives for RCTs in the evaluation of effectiveness of interventions!? Final report .


Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299 (10), 1182–1184.


Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20 (1), 72–80.

Lamont, T., Barber, N., Jd, P., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352:i154.

Drabble, S. J., & O’Cathain, A. (2015). Moving from Randomized Controlled Trials to Mixed Methods Intervention Evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 406–425). London: Oxford University Press.

Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science : IS, 8 , 117.

Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek [Observation methods in qualitative research]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 13–25). Houten: Bohn Stafleu van Loghum.

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6 (2), 36–40.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36 , 717–732.

Yanow, D. (2000). Conducting interpretive policy analysis (Vol. 47). Thousand Oaks: Sage University Papers Series on Qualitative Research Methods.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 , 63–75.

van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek. Huisarts en Wetenschap, 49 (4), 283–287.

Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode [The half-open interview as research method]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 43–51). Houten: Bohn Stafleu van Loghum.

Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey [Systematics and implementation of the qualitative survey]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 27–41). Houten: Bohn Stafleu van Loghum.

Pv, R., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de ‘stem’ van de groep onder de loep [Exploring with focus group conversations: the “voice” of the group under the magnifying glass]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 53–64). Houten: Bohn Stafleu van Loghum.

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41 (5), 545–547.

Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.

Hunter, A., & Brewer, J. (2015). Designing Multimethod Research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 185–205). London: Oxford University Press.

Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14 (2), 5–33.

Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a Mixed Methods Design. In Designing and Conducting Mixed Methods Research . Thousand Oaks: SAGE Publications.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320 (7226), 50–52.

O'Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine : Journal of the Association of American Medical Colleges, 89 (9), 1245–1251.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52 (4), 1893–1907.

Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24 (1), 9–18.

Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research : an international journal of quality of life aspects of treatment, care and rehabilitation, 24 (5), 1057–1067.

Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4 , 2054358117703070–2054358117703070.

Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations . 22(4):824–35.

Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal Cancer Care (Engl), 28 (1), e12935.

Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations : an international journal of public participation in health care and health policy, 18 (5), 995–1008.

Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2 , 35.

Elsie Baker, S., & Edwards, R. (2012). How many qualitative interviews is enough? In National Centre for Research Methods Review Paper . National Centre for Research Methods. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf .

Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18 (2), 179–183.

Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21 (5), 619–634.

Acknowledgements

No external funding.

Author information

Authors and affiliations

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Wolfgang Wick


Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final version.

Corresponding author

Correspondence to Loraine Busetto .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Busetto, L., Wick, W. & Gumbinger, C. How to use and assess qualitative research methods. Neurol. Res. Pract. 2, 14 (2020). https://doi.org/10.1186/s42466-020-00059-z


Received : 30 January 2020

Accepted : 22 April 2020

Published : 27 May 2020


Keywords

  • Qualitative research
  • Mixed methods
  • Quality assessment

Neurological Research and Practice

ISSN: 2524-3489



Chapter 21: Qualitative evidence

Jane Noyes, Andrew Booth, Margaret Cargo, Kate Flemming, Angela Harden, Janet Harris, Ruth Garside, Karin Hannes, Tomás Pantoja, James Thomas

Key Points:

  • A qualitative evidence synthesis (commonly referred to as QES) can add value by providing decision makers with additional evidence to improve understanding of intervention complexity, contextual variations, implementation, and stakeholder preferences and experiences.
  • A qualitative evidence synthesis can be undertaken and integrated with a corresponding intervention review, or undertaken using a mixed-method design that integrates a qualitative evidence synthesis with an intervention review in a single protocol.
  • Methods for qualitative evidence synthesis are complex and continue to develop. Authors should always consult current methods guidance at methods.cochrane.org/qi .

This chapter should be cited as: Noyes J, Booth A, Cargo M, Flemming K, Harden A, Harris J, Garside R, Hannes K, Pantoja T, Thomas J. Chapter 21: Qualitative evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

21.1 Introduction

The potential contribution of qualitative evidence to decision making is well-established (Glenton et al 2016, Booth 2017, Carroll 2017). A synthesis of qualitative evidence can inform understanding of how interventions work by:

  • increasing understanding of a phenomenon of interest (e.g. women’s conceptualization of what good antenatal care looks like);
  • identifying associations between the broader environment within which people live and the interventions that are implemented;
  • increasing understanding of the values and attitudes toward, and experiences of, health conditions and interventions by those who implement or receive them; and
  • providing a detailed understanding of the complexity of interventions and implementation, and their impacts and effects on different subgroups of people and the influence of individual and contextual characteristics within different contexts.

The aim of this chapter is to provide authors (who already have experience of undertaking qualitative research and qualitative evidence synthesis) with additional guidance on undertaking a qualitative evidence synthesis that is subsequently integrated with an intervention review. This chapter draws upon guidance presented in a series of six papers published in the Journal of Clinical Epidemiology (Cargo et al 2018, Flemming et al 2018, Harden et al 2018, Harris et al 2018, Noyes et al 2018a, Noyes et al 2018b) and from a further World Health Organization series of papers published in BMJ Global Health, which extends guidance to qualitative evidence syntheses conducted within a complex intervention and health systems and decision-making context (Booth et al 2019a, Booth et al 2019b, Flemming et al 2019, Noyes et al 2019, Petticrew et al 2019). The qualitative evidence synthesis and integration methods described in this chapter supplement Chapter 17 on methods for addressing intervention complexity. Authors undertaking qualitative evidence syntheses should consult these papers and chapters for more detailed guidance.

21.2 Designs for synthesizing and integrating qualitative evidence with intervention reviews

There are two main designs for synthesizing qualitative evidence with evidence of the effects of interventions:

  • Sequential reviews: where one or more existing intervention review(s) has been published on a similar topic, it is possible to do a sequential qualitative evidence synthesis and then integrate its findings with those of the intervention review to create a mixed-method review. For example, Lewin and colleagues (Lewin et al 2010) and Glenton and colleagues (Glenton et al 2013) undertook sequential reviews of lay health worker programmes using separate protocols and then integrated the findings.
  • Convergent mixed-methods review: where no pre-existing intervention review exists, it is possible to do a full convergent ‘mixed-methods’ review where the trials and qualitative evidence are synthesized separately, creating opportunities for them to ‘speak’ to each other during development, and then integrated within a third synthesis. For example, Hurley and colleagues (Hurley et al 2018) undertook an intervention review and a qualitative evidence synthesis following a single protocol.

It is increasingly common for sequential and convergent reviews to be conducted by some or all of the same authors; if not, it is critical that authors working on the qualitative evidence synthesis and intervention review work closely together to identify and create sufficient points of integration to enable a third synthesis that integrates the two reviews, or the conduct of a mixed-method review (Noyes et al 2018a) (see Figure 21.2.a ). This consideration also applies where an intervention review has already been published and there is no prior relationship with the qualitative evidence synthesis authors. We recommend that at least one joint author works across both reviews to facilitate development of the qualitative evidence synthesis protocol, conduct of the synthesis, and subsequent integration of the qualitative evidence synthesis with the intervention review within a mixed-methods review.

Figure 21.2.a Considering context and points of contextual integration with the intervention review or within a mixed-method review


21.3 Defining qualitative evidence and studies

We use the term ‘qualitative evidence synthesis’ to acknowledge that other types of qualitative evidence (or data) can potentially enrich a synthesis, such as narrative data derived from qualitative components of mixed-method studies or free text from questionnaire surveys. We would not, however, consider a questionnaire survey to be a qualitative study, and qualitative data from questionnaires should not usually be privileged over relevant evidence from qualitative studies. When thinking about qualitative evidence, specific terminology is used to describe the level of conceptual and contextual detail. Qualitative evidence that includes higher or lower levels of conceptual detail is described as ‘rich’ or ‘poor’. The associated terms ‘thick’ and ‘thin’ are best used to refer to higher or lower levels of contextual detail. Review authors can potentially develop a stronger synthesis using rich and thick qualitative evidence but, in reality, they will identify a diverse mix of conceptually rich and poor and contextually thick and thin studies. A clear picture of the type and conceptual richness of the available qualitative evidence strongly influences the choice of methodology and subsequent methods. We recommend that authors undertake scoping searches to determine the type and richness of available qualitative evidence before selecting their methodology and methods.
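The scoping exercise recommended above can be tallied in a few lines. The following sketch is purely illustrative (the study labels and judgements are invented); it simply cross-tabulates candidate studies by conceptual richness and contextual thickness to show what the evidence base might support:

```python
from collections import Counter

# Hypothetical scoping results: each candidate study judged on
# conceptual richness ('rich'/'poor') and contextual thickness
# ('thick'/'thin').
scoping = {
    "Study A": ("rich", "thick"),
    "Study B": ("rich", "thin"),
    "Study C": ("poor", "thick"),
    "Study D": ("poor", "thin"),
    "Study E": ("rich", "thick"),
}

# Cross-tabulate the judgements.
tally = Counter(scoping.values())
for (richness, thickness), n in sorted(tally.items()):
    print(f"{richness}/{thickness}: {n} studies")

# A pool dominated by rich/thick studies could support an interpretive
# method such as meta-ethnography; mostly poor/thin evidence points
# towards a more descriptive approach such as thematic synthesis.
```

The point is not the code but the discipline: making the richness/thickness profile explicit before committing to a synthesis method.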

A qualitative study is a research study that uses a qualitative method of data collection and analysis. Review authors should include the studies that enable them to answer their review question. When selecting qualitative studies in a review about intervention effects, two types of qualitative study are available: those that collect data from the same participants as the included trials, known as ‘trial siblings’; and those that address relevant issues about the intervention, but as separate items of research – not connected to any included trials. Both can provide useful information, with trial sibling studies obviously closer in terms of their precise contexts to the included trials (Moore et al 2015), and non-sibling studies possibly contributing perspectives not present in the trials (Noyes et al 2016b).

21.4 Planning a qualitative evidence synthesis linked to an intervention review

The Cochrane Qualitative and Implementation Methods Group (QIMG) website provides links to practical guidance and key steps for authors who are considering a qualitative evidence synthesis ( methods.cochrane.org/qi ). The RETREAT framework outlines seven key considerations that review authors should systematically work through when planning a review (Booth et al 2016, Booth et al 2018) ( Box 21.4.a ). Flemming and colleagues (Flemming et al 2019) further explain how to factor in these considerations when undertaking a qualitative evidence synthesis within a complex intervention and decision-making context in which complexity is an important consideration.

Box 21.4.a RETREAT considerations when selecting an appropriate method for qualitative synthesis

  • Review question – first, consider the complexity of the review question. Which elements contribute most to complexity (e.g. the condition, the intervention or the context)? Which elements should be prioritized as the focal point for attention? (Squires et al 2013, Kelly et al 2017).
  • Epistemology – consider the philosophical foundations of the primary studies. Would it be appropriate to favour a method such as thematic synthesis that is less reliant on epistemological considerations? (Barnett-Page and Thomas 2009).
  • Time/Timescale – consider what type of qualitative evidence synthesis will be feasible and manageable within the time frame available (Booth et al 2016).
  • Resources – consider whether the ambition of the review matches the available resources. Will the extent of the scope and the sampling approach of the review need to be limited? (Benoot et al 2016, Booth et al 2016).
  • Expertise – consider access to expertise, both within the review team and among a wider group of advisors. Does the available expertise match the qualitative evidence synthesis approach chosen? (Booth et al 2016).
  • Audience and purpose – consider the intended audience and purpose of the review. Does the approach to question formulation, the scope of the review and the intended outputs meet their needs? (Booth et al 2016).
  • Type of data – consider the type of data present in typical studies for inclusion. To what extent are candidate studies conceptually rich and contextually thick in their detail?

21.5 Question development

The review question is critical to development of the qualitative evidence synthesis (Harris et al 2018). Question development affords a key point for integration with the intervention review. Complementary guidance supports novel thinking about question development, application of question development frameworks and the types of questions to be addressed by a synthesis of qualitative evidence (Cargo et al 2018, Harris et al 2018, Noyes et al 2018a, Booth et al 2019b, Flemming et al 2019).

Research questions for quantitative reviews are often mapped using structures such as PICO. Some qualitative reviews adopt this structure, or use an adapted variation (e.g. SPICE (Setting, Perspective, Intervention or Phenomenon of Interest, Comparison, Evaluation) or SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) (Cooke et al 2012)). Booth and colleagues (Booth et al 2019b) propose an extended question framework (PerSPecTIF), describing both the wider context and the immediate setting, that is particularly suited to qualitative evidence synthesis and complex intervention reviews (see Table 21.5.a ).
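To make the PerSPecTIF structure concrete, the sketch below models it as a small data structure and assembles the worked example from Table 21.5.a into a single question sentence. The class and method names are our own illustration, not part of the published framework:

```python
from dataclasses import dataclass, fields

@dataclass
class PerSPecTIF:
    """Illustrative container for the PerSPecTIF question elements."""
    perspective: str
    setting: str
    phenomenon: str      # phenomenon of interest / problem
    environment: str
    comparison: str = "" # the framework treats comparison as optional
    timing: str = ""
    findings: str = ""

    def as_question(self) -> str:
        # Join the populated elements, in framework order, into one question.
        parts = [getattr(self, f.name) for f in fields(self)]
        return " ".join(p for p in parts if p)

question = PerSPecTIF(
    perspective="From the perspective of a pregnant woman",
    setting="in the setting of rural communities",
    phenomenon="how does facility-based care",
    environment="within an environment of poor transport infrastructure and distantly located facilities",
    comparison="compare with traditional birth attendants at home",
    timing="up to and including delivery",
    findings="in relation to the woman's perceptions and experiences?",
)
print(question.as_question())
```

Structuring the question this way makes each contextual element an explicit, inspectable field rather than a clause buried in prose.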

Detailed attention to the question and specification of context at an early stage is critical to many aspects of qualitative synthesis (see Petticrew et al 2019 and Booth et al 2019a for a more detailed discussion). By specifying the context, a review team can identify opportunities for integration with the intervention review, or opportunities for maximizing use and interpretation of evidence as a mixed-method review progresses (see Figure 21.2.a ). This specification also informs both the interpretation of the observed effects and the assessment of the strength of the evidence available in addressing the review question (Noyes et al 2019). Subsequent application of GRADE CERQual (Lewin et al 2015, Lewin et al 2018), an approach to assess the confidence in synthesized qualitative findings, requires further specification of context in the review question.

Table 21.5.a PerSPecTIF question formulation framework for qualitative evidence syntheses (Booth et al 2019b). Reproduced with permission of BMJ Publishing Group

Perspective: From the perspective of a pregnant woman

Setting: In the setting of rural communities

Phenomenon of interest/Problem: How does facility-based care

Environment: Within an environment of poor transport infrastructure and distantly located facilities

Comparison (optional): Compare with traditional birth attendants at home

Time/Timing: Up to and including delivery

Findings: In relation to the woman’s perceptions and experiences?

21.6 Questions exploring intervention implementation

Additional guidance is available on formulation of questions to understand and assess intervention implementation (Cargo et al 2018). A strong understanding of how an intervention is thought to work, and how it should be implemented in practice, will enable a critical consideration of whether any observed lack of effect might be due to a poorly conceptualized intervention (i.e. theory failure) or a poor intervention implementation (i.e. implementation failure). Heterogeneity needs to be considered for both the underlying theory and the ways in which the intervention was implemented. An a priori scoping review (Levac et al 2010), concept analysis (Walker and Avant 2005), critical review (Grant and Booth 2009) or textual narrative synthesis (Barnett-Page and Thomas 2009) can be undertaken to classify interventions and/or to identify the programme theory, logic model or implementation measures and processes. The intervention Complexity Assessment Tool for Systematic Reviews (iCAT_SR; Lewin et al 2017) may be helpful in classifying complexity in interventions and developing associated questions.

An existing intervention model or framework may be used within a new topic or context. The ‘best-fit framework’ approach to synthesis (Carroll et al 2013) can be used to establish the degree to which the source context (from where the framework was derived) resembles the new target context (see Figure 21.2.a ). In the absence of an explicit programme theory and detail of how implementation relates to outcomes, an a priori realist review, meta-ethnography or meta-interpretive review can be undertaken (Booth et al 2016). For example, Downe and colleagues (Downe et al 2016) undertook an initial meta-ethnography review to develop an understanding of the outcomes of importance to women receiving antenatal care.

However, these additional activities are very resource-intensive and are only recommended when the review team has sufficient resources to supplement the planned qualitative evidence syntheses with an additional explanatory review. Where resources are less plentiful a review team could engage with key stakeholders to articulate and develop programme theory (Kelly et al 2017, De Buck et al 2018).

21.6.1 Using logic models and theories to support question development

Review authors can develop a more comprehensive representation of question features through use of logic models, programme theories, theories of change, templates and pathways (Anderson et al 2011, Kneale et al 2015, Noyes et al 2016a) (see also Chapter 17, Section 17.2.1  and Chapter 2, Section 2.5.1 ). These different forms of social theory can be used to visualize and map the research question, its context, components, influential factors and possible outcomes (Noyes et al 2016a, Rehfuess et al 2018).
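As an illustration of how a logic model or programme theory can be made explicit enough to interrogate, the sketch below encodes a hypothetical lay-health-worker logic model as a directed graph and enumerates the hypothesized pathways from an intervention component to an outcome. All node names are invented for the example:

```python
# Hypothetical programme theory encoded as a directed graph:
# each key maps a component to the elements it is assumed to influence.
logic_model = {
    "lay health worker training": ["home visits"],
    "home visits": ["caregiver knowledge", "trust in services"],
    "caregiver knowledge": ["immunization uptake"],
    "trust in services": ["immunization uptake"],
}

def pathways(graph, start, goal, path=()):
    """Enumerate all hypothesized causal pathways from start to goal."""
    path = path + (start,)
    if start == goal:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        found.extend(pathways(graph, nxt, goal, path))
    return found

for p in pathways(logic_model, "lay health worker training", "immunization uptake"):
    print(" -> ".join(p))
```

Listing the pathways explicitly helps a review team see which links the available qualitative evidence can (and cannot) speak to.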

21.6.2 Stakeholder engagement

Finally, review authors need to engage stakeholders, including consumers affected by the health issue and interventions, or likely users of the review from clinical or policy contexts. From the preparatory stage, this consultation can ensure that the review scope and question is appropriate and resulting products address implementation concerns of decision makers (Kelly et al 2017, Harris et al 2018).

21.7 Searching for qualitative evidence

In comparison with identification of quantitative studies (see also Chapter 4 ), procedures for retrieval of qualitative research remain relatively under-developed. Particular challenges in retrieval are associated with non-informative titles and abstracts, diffuse terminology, poor indexing and the overwhelming prevalence of quantitative studies within data sources (Booth et al 2016).

Principal considerations when planning a search for qualitative studies, and the evidence that underpins them, have been characterized using a 7S framework from Sampling and Sources through Structured questions, Search procedures, Strategies and filters and Supplementary strategies to Standards for Reporting (Booth et al 2016).

A key decision, aligned to the purpose of the qualitative evidence synthesis, is whether to use the comprehensive, exhaustive approaches that characterize quantitative searches, or whether to use purposive sampling that is more sensitive to the qualitative paradigm (Suri 2011). The latter, used when the intent is to generate an interpretative understanding (for example, when generating theory), draws upon a versatile toolkit that includes theoretical sampling, maximum variation sampling and intensity sampling. Sources of qualitative evidence are more likely to include book chapters, theses and grey literature reports than standard quantitative study reports, and so a search strategy should place extra emphasis on these sources. Local databases may be particularly valuable given the criticality of context (Stansfield et al 2012).

Another key decision is whether to use study filters or simply to conduct a topic-based search where qualitative studies are identified at the study selection stage. Search filters for qualitative studies lack the specificity of their quantitative counterparts. Nevertheless, filters may facilitate efficient retrieval by study type (e.g. qualitative (Rogers et al 2018) or mixed methods (El Sherif et al 2016) or by perspective (e.g. patient preferences (Selva et al 2017)) particularly where the quantitative literature is overwhelmingly large and thus increases the number needed to retrieve. Poor indexing of qualitative studies makes citation searching (forward and backward) and the Related Articles features of electronic databases particularly useful (Cooper et al 2017). Further guidance on searching for qualitative evidence is available (Booth et al 2016, Noyes et al 2018a). The CLUSTER method has been proposed as a specific named method for tracking down associated or sibling reports (Booth et al 2013). The BeHEMoTh approach has been developed for identifying explicit use of theory (Booth and Carroll 2015).
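As a crude illustration of how a qualitative study filter works at the screening stage, the sketch below checks hypothetical records against a handful of qualitative-method terms. Both the terms and the records are invented; a real review should use published, validated filters for the database in question, and should expect the trade-off between sensitivity and specificity discussed above:

```python
# Illustrative filter terms only; validated qualitative search filters
# (e.g. for MEDLINE or CINAHL) are more carefully constructed and tested.
FILTER_TERMS = ("qualitative", "interview", "focus group",
                "ethnograph", "grounded theory", "thematic")

def passes_filter(record: dict) -> bool:
    """Flag a record whose title/abstract mentions a qualitative-method term."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return any(term in text for term in FILTER_TERMS)

records = [
    {"title": "A qualitative study of antenatal care",
     "abstract": "Semi-structured interviews with 24 women..."},
    {"title": "RCT of facility-based delivery",
     "abstract": "We randomized 2,400 women to two arms..."},
]

screened = [r for r in records if passes_filter(r)]
print(len(screened))  # the trial report is filtered out
```

In practice such filters reduce the number needed to retrieve when the quantitative literature is overwhelmingly large, at the cost of missing poorly indexed qualitative reports.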

21.7.1 Searching for process evaluations and implementation evidence

Four potential approaches are available to identify process evaluations.

  • Identify studies at the point of study selection rather than through tailored search strategies. This involves conducting a sensitive topic search without any study design filter (Harden et al 1999), and identifying all study designs of interest during the screening process. This approach can be feasible when a review question involves multiple publication types (e.g. randomized trial, qualitative research and economic evaluations), which then do not require separate searches.  
  • Restrict included process evaluations to those conducted within randomized trials, which can be identified using standard search filters (see Chapter 4, Section 4.4.7 ). This method relies on reports of process evaluations also describing the surrounding randomized trial in enough detail to be identified by the search filter.  
  • Use unevaluated filter terms (such as ‘process evaluation’, ‘program(me) evaluation’, ‘feasibility study’, ‘implementation’ or ‘proof of concept’ etc) to retrieve process evaluations or implementation data. Approaches using strings of terms associated with the study type or purpose are considered experimental. There is a need to develop and test such filters. It is likely that such filters may be derived from the study type (process evaluation), the data type (process data) or the application (implementation) (Robbins et al 2011).  
  • Minimize reliance on topic-based searching and rely on citations-based approaches to identify linked reports, published or unpublished, of a particular study (Booth et al 2013) which may provide implementation or process data (Bonell et al 2013).

More detailed guidance is provided by Cargo and colleagues (Cargo et al 2018).

21.8 Assessing methodological strengths and limitations of qualitative studies

Assessment of the methodological strengths and limitations of qualitative research remains contested within the primary qualitative research community (Garside 2014). However, within systematic reviews and evidence syntheses it is considered essential, even when studies are not to be excluded on the basis of quality (Carroll et al 2013). One review found almost 100 appraisal tools for assessing primary qualitative studies (Munthe-Kaas et al 2019). Limitations included a focus on reporting rather than conduct and the presence of items that are separate from, or tangential to, consideration of study quality (e.g. ethical approval).

Authors should distinguish between assessment of study quality and assessment of risk of bias by focusing on assessment of methodological strengths and limitations as a marker of study rigour (what we term a ‘risk to rigour’ approach (Noyes et al 2019)). In the absence of a definitive risk to rigour tool, we recommend that review authors select from published, commonly used and validated tools that focus on the assessment of the methodological strengths and limitations of qualitative studies (see Box 21.8.a ). Pragmatically, we consider a ‘validated’ tool as one that has been subjected to evaluation. Issues such as inter-rater reliability are afforded less importance given that identification of complementary or conflicting perspectives on risk to rigour is considered more useful than achievement of consensus per se (Noyes et al 2019).

The CASP tool for qualitative research (as one example) maps onto the domains in Box 21.8.a (CASP 2013). Tools not meeting the criterion of focusing on assessment of methodological strengths and limitations include those that integrate assessment of the quality of reporting (such as scoring of the title and abstract, etc) into an overall assessment of methodological strengths and limitations. As with other risk of bias assessment tools, we strongly recommend against the application of scores to domains or calculation of total quality scores. We encourage review authors to discuss the studies and their assessments of ‘risk to rigour’ for each paper and how the study’s methodological limitations may affect review findings (Noyes et al 2019). We further advise that qualitative ‘sensitivity analysis’, exploring the robustness of the synthesis and its vulnerability to methodologically limited studies, be routinely applied regardless of the review authors’ overall confidence in synthesized findings (Carroll et al 2013). Evidence suggests that qualitative sensitivity analysis is equally advisable for mixed methods studies from which the qualitative component is extracted (Verhage and Boels 2017).
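The qualitative ‘sensitivity analysis’ described above can be made concrete with a small sketch: flag studies assessed as having serious methodological limitations and check which synthesized findings retain support without them. Study IDs, risk-to-rigour ratings and findings here are all hypothetical:

```python
# Hypothetical risk-to-rigour judgements per included study.
rigour = {
    "S1": "minor concerns",
    "S2": "serious concerns",
    "S3": "minor concerns",
    "S4": "serious concerns",
}

# Hypothetical synthesized findings and the studies supporting each.
supporting = {
    "Finding 1: distance deters facility birth": {"S1", "S2", "S3"},
    "Finding 2: cost of transport is prohibitive": {"S2", "S4"},
}

# Set aside methodologically limited studies and re-examine support.
limited = {s for s, r in rigour.items() if r == "serious concerns"}
for finding, studies in supporting.items():
    remaining = studies - limited
    status = "robust" if remaining else "vulnerable to limited studies"
    print(f"{finding}: {len(remaining)}/{len(studies)} studies remain ({status})")
```

A finding that loses all of its supporting studies in this exercise warrants explicit discussion, and lower confidence, in the synthesis.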

Box 21.8.a Example domains that provide an assessment of methodological strengths and limitations to determine study rigour

  • Clear aims and research question
  • Congruence between the research aims/question and research design/method(s)
  • Rigour of case and/or participant identification, sampling and data collection to address the question
  • Appropriate application of the method
  • Richness/conceptual depth of findings
  • Exploration of deviant cases and alternative explanations
  • Reflexivity of the researchers*

*Reflexivity encourages qualitative researchers and reviewers to consider the actual and potential impacts of the researcher on the context, research participants and the interpretation and reporting of data and findings (Newton et al 2012). Being reflexive entails making conflicts of interest transparent, discussing the impact of the reviewers and their decisions on the review process and findings, and making transparent any issues discussed and subsequent decisions.

Adapted from Noyes et al (2019) and Alvesson and Sköldberg (2009)

21.8.1 Additional assessment of methodological strengths and limitations of process evaluation and intervention implementation evidence

Few assessment tools explicitly address rigour in process evaluation or implementation evidence. For qualitative primary studies, the 8-item process evaluation tool developed by the EPPI-Centre (Rees et al 2009, Shepherd et al 2010) can be used to supplement tools selected to assess methodological strengths and limitations and risks to rigour in primary qualitative studies. One of these items, a question on usefulness (framed as ‘how well the intervention processes were described and whether or not the process data could illuminate why or how the interventions worked or did not work’ ) offers a mechanism for exploring process mechanisms (Cargo et al 2018).

21.9 Selecting studies to synthesize

Decisions about inclusion or exclusion of studies can be more complex in qualitative evidence syntheses than in reviews of trials, which aim to include all relevant studies. Decisions on whether to include all studies or to select a sample of studies depend on a range of general and review-specific criteria that Noyes and colleagues (Noyes et al 2019) outline in detail. The number of qualitative studies selected needs to be consistent with a manageable synthesis, and the contexts of the included studies should enable integration with the trials in the effectiveness analysis (see Figure 21.2.a ). The guiding principle is transparency in the reporting of all decisions and their rationale.

21.10 Selecting a qualitative evidence synthesis and data extraction method

Authors will typically find that they cannot select an appropriate synthesis method until the pool of available qualitative evidence has been thoroughly scoped. Flexible options concerning choice of method may need to be articulated in the protocol.

The INTEGRATE-HTA guidance on selecting methodology and methods for qualitative evidence synthesis and health technology assessment offers a useful starting point when selecting a method of synthesis (Booth et al 2016, Booth et al 2018). Some methods are designed primarily to develop findings at a descriptive level and thus directly feed into lines of action for policy and practice. Others hold the capacity to develop new theory (e.g. meta-ethnography and theory-building approaches to thematic synthesis). Noyes and colleagues (Noyes et al 2019) and Flemming and colleagues (Flemming et al 2019) elaborate on key issues for consideration when selecting a method that is particularly suited to a Cochrane Review and decision-making context (see Table 21.10.a ). Three qualitative evidence synthesis methods (thematic synthesis, framework synthesis and meta-ethnography) are recommended to produce syntheses that can subsequently be integrated with an intervention review or analysis.

Table 21.10.a Recommended methods for undertaking a qualitative evidence synthesis for subsequent integration with an intervention review, or as part of a mixed-method review (adapted from an original source developed by convenors (Flemming et al 2019, Noyes et al 2019))

Thematic synthesis (Thomas and Harden 2008)

Strengths: Most accessible form of synthesis. Clear approach; can be used with ‘thin’ data to produce descriptive themes and with ‘thicker’ data to develop descriptive themes into more in-depth analytic themes. Themes are then integrated within the quantitative synthesis.

Limitations: May be limited in interpretive ‘power’ and risks over-simplistic use that does not truly inform decision making such as guidelines. Complex synthesis process that requires an experienced team. Theoretical findings may combine empirical evidence, expert opinion and conjecture to form hypotheses. More work is needed on how GRADE-CERQual, used to assess confidence in synthesized qualitative findings (see Section 21.12), can be applied to theoretical findings. May lack clarity on how higher-level findings translate into actionable points.

Framework synthesis (Oliver et al 2008, Dixon-Woods 2011) and best-fit framework synthesis (Carroll et al 2011)

Strengths: Works well within reviews of complex interventions by accommodating complexity within the framework, including representation of theory. The framework allows a clear mechanism for integration of qualitative and quantitative evidence in an aggregative way (see Noyes et al 2018a). Works well where there is broad agreement about the nature of interventions and their desired impacts.

Limitations: Requires identification, selection and justification of a framework. A framework may be revealed as inappropriate only once extraction/synthesis is under way. Risk of simplistically forcing data into a framework for expedience.

Meta-ethnography (Noblit and Hare 1988)

Strengths: Primarily interpretive synthesis method leading to the creation of descriptive as well as new higher-order constructs. Descriptive and theoretical findings can help inform decision making such as guidelines. Explicit reporting standards have been developed.

Limitations: Complex methodology and synthesis process that requires a highly experienced team. Can take more time and resources than other methodologies. Theoretical findings may combine empirical evidence, expert opinion and conjecture to form hypotheses. May not satisfy requirements for an audit trail (although new reporting guidelines will help to overcome this (France et al 2019)). More work is needed to determine how CERQual can be applied to theoretical findings. May be unclear how higher-level findings translate into actionable points.

21.11 Data extraction

Qualitative findings may take the form of quotations from participants, subthemes and themes identified by the study’s authors, explanations, hypotheses or new theory, or observational excerpts and author interpretations of these data (Sandelowski and Barroso 2002). Findings may be presented as a narrative, or summarized and displayed as tables, infographics or logic models and potentially located in any part of the paper (Noyes et al 2019).

Methods for qualitative data extraction vary according to the synthesis method selected. Data extraction is not sequential and linear; often, it involves moving backwards and forwards between review stages. Review teams will need regular meetings to discuss and further interrogate the evidence and thereby achieve a shared understanding. It may be helpful to draw on a key stakeholder group to help in interpreting the evidence and in formulating key findings. Additional approaches (such as subgroup analysis) can be used to explore evidence from specific contexts further.

Irrespective of the review type and choice of synthesis method, we consider it best practice to extract detailed contextual and methodological information on each study and to report this information in a table of ‘Characteristics of included studies’ (see Table 21.11.a). The Template for Intervention Description and Replication (TIDieR) checklist (Hoffmann et al 2014) and the iCAT_SR tool (Lewin et al 2017) may help with specifying key information for extraction. Review authors must ensure that they preserve the context of the primary study data during the extraction and synthesis process to prevent misinterpretation of primary studies (Noyes et al 2019).

Table 21.11.a Contextual and methodological information for inclusion within a table of ‘Characteristics of included studies’. From Noyes et al (2019). Reproduced with permission of BMJ Publishing Group

Context and participants

Important elements of study context, relevant to addressing the review question and locating the context of the primary study: for example, the study setting, population characteristics, participants and participant characteristics, and the intervention delivered (if appropriate).

Study design and methods used

Methodological design and approach taken by the study; the methods used for sampling and recruitment; the specific data collection and analysis methods used; and any theoretical models used to interpret or contextualize the findings.

Noyes and colleagues (Noyes et al 2019) provide additional guidance and examples of the various methods of data extraction. It is usual for review authors to select one method. In summary, extraction methods can be grouped as follows.

  • Using a bespoke, universal, standardized or adapted data extraction template: Review authors can develop their own review-specific data extraction template, or select a generic data extraction template by study type (e.g. templates developed by the National Institute for Health and Care Excellence (National Institute for Health Care Excellence 2012)).
  • Using an a priori theory or predetermined framework to extract data: Framework synthesis, and its variant the ‘best fit’ framework approach, involve extracting data from primary studies against an a priori framework in order to better understand a phenomenon of interest (Carroll et al 2011, Carroll et al 2013). For example, Glenton and colleagues (Glenton et al 2013) extracted data against a modified SURE framework to synthesize factors affecting the implementation of lay health worker interventions. The SURE framework enumerates possible factors that may influence the implementation of health system interventions (SURE (Supporting the Use of Research Evidence) Collaboration 2011, Glenton et al 2013). Use of the PROGRESS framework (place of residence, race/ethnicity/culture/language, occupation, gender/sex, religion, education, socioeconomic status, and social capital) also helps to ensure that data extraction maintains an explicit equity focus (O'Neill et al 2014). A logic model can also be used as a framework for data extraction.
  • Using a software program to code original studies inductively: A wide range of software products has been developed by systematic review organizations (such as EPPI-Reviewer (Thomas et al 2010)). Most software for the analysis of primary qualitative data – such as NVivo ( www.qsrinternational.com/nvivo/home ) and others – can be used to code studies in a systematic review (Houghton et al 2017). For example, one method of data extraction and thematic synthesis involves coding the original studies in a software program to build inductive descriptive themes and a theoretical explanation of the phenomena of interest (Thomas and Harden 2008). Thomas and Harden (2008) provide a worked example demonstrating how coding of included primary studies led to a new understanding of children’s choices and motivations for eating fruit and vegetables.
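The first option above, a bespoke review-specific template, can be pictured as a simple data structure. The sketch below is hypothetical: the field names loosely follow Table 21.11.a (‘Characteristics of included studies’), but they are an illustrative assumption, not a prescribed Cochrane schema.

```python
from dataclasses import dataclass, field

# Hypothetical review-specific extraction template. Field names are
# illustrative, loosely based on Table 21.11.a; a real template would
# be tailored to the review question.
@dataclass
class StudyExtraction:
    study_id: str
    setting: str                 # study context
    participants: str            # population and participant characteristics
    intervention: str            # intervention delivered, if appropriate
    design: str                  # methodological design and approach
    data_collection: str         # e.g. interviews, focus groups
    analysis_method: str         # e.g. thematic analysis, grounded theory
    theoretical_model: str = ""  # model used to interpret/contextualize findings
    findings: list = field(default_factory=list)  # themes, quotations, etc.

# Invented example record, for illustration only.
record = StudyExtraction(
    study_id="Study A (hypothetical)",
    setting="Rural primary care clinics",
    participants="Lay health workers and mothers of young children",
    intervention="Lay health worker programme",
    design="Qualitative interview study",
    data_collection="Semi-structured interviews",
    analysis_method="Thematic analysis",
)
record.findings.append("Workload and supervision shaped worker motivation")
print(record.study_id, "-", len(record.findings), "finding(s) extracted")
```

Recording every study against the same named fields makes it easier to preserve study context during synthesis and to populate the ‘Characteristics of included studies’ table consistently.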

21.12 Assessing the confidence in qualitative synthesized findings

The GRADE system has long been used to assess the certainty of quantitative findings, and application of its qualitative counterpart, GRADE-CERQual, is recommended for Cochrane qualitative evidence syntheses (Lewin et al 2015). CERQual has four components (relevance, methodological limitations, adequacy and coherence) which are used to formulate an overall assessment of confidence in each synthesized qualitative finding. Guidance on its components and reporting requirements has been published in a series in Implementation Science (Lewin et al 2018).
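The per-finding structure of a CERQual assessment can be sketched as follows. Note the caveat in the code: the ‘weakest component’ aggregation rule used here is a hypothetical simplification for illustration; GRADE-CERQual itself requires a structured judgement by the review team, not a mechanical formula.

```python
# Illustrative sketch only. GRADE-CERQual's overall assessment is a
# structured judgement across the four components; the 'weakest
# component wins' rule below is a hypothetical simplification, not
# part of the CERQual guidance.
LEVELS = ["very low", "low", "moderate", "high"]

def overall_confidence(methodological_limitations: str, coherence: str,
                       adequacy: str, relevance: str) -> str:
    """Return the lowest rating among the four component assessments."""
    components = [methodological_limitations, coherence, adequacy, relevance]
    return min(components, key=LEVELS.index)

# A finding with one downgraded component ends up at that level.
print(overall_confidence("high", "moderate", "high", "high"))  # moderate
```

Even in this simplified form, the sketch makes the key point visible: a single weak component (e.g. poor coherence across studies) limits the confidence that can be placed in the synthesized finding as a whole.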

21.13 Methods for integrating the qualitative evidence synthesis with an intervention review

A range of methods and tools is available for data integration or mixed-method synthesis (Harden et al 2018, Noyes et al 2019). As noted at the beginning of this chapter, review authors can integrate a qualitative evidence synthesis with an existing intervention review published on a similar topic (sequential approach), or conduct a new intervention review and qualitative evidence syntheses in parallel before integration (convergent approach). Irrespective of whether the qualitative synthesis is sequential or convergent to the intervention review, we recommend that qualitative and quantitative evidence be synthesized separately using appropriate methods before integration (Harden et al 2018). The scope for integration can be more limited with a pre-existing intervention review unless review authors have access to the data underlying the intervention review report.

Harden and colleagues and Noyes and colleagues outline the following methods and tools for integration with an intervention review (Harden et al 2018, Noyes et al 2019):

  • Juxtaposing findings in a matrix: Juxtaposition is driven by the findings from the qualitative evidence synthesis (e.g. intervention components related to the acceptability or feasibility of the interventions), and these findings form one side of the matrix. Findings on intervention effects (e.g. improves outcome, no difference in outcome, uncertain effects) form the other side of the matrix. Quantitative studies are grouped according to their findings on intervention effects and the presence or absence of features specified by the hypotheses generated from the qualitative synthesis (Candy et al 2011). Observed patterns in the matrix are used to explain differences in the findings of the quantitative studies and to identify gaps in research (van Grootel et al 2017) (see, for example, Ames et al 2017, Munabi-Babigumira et al 2017, Hurley et al 2018).
  • Analysing programme theory: Theories articulating how interventions are expected to work are analysed. Findings from quantitative studies testing the effects of interventions, and from qualitative and process evaluation evidence, are used together to examine how the theories work in practice (Greenhalgh et al 2007). The value of different theories is assessed, or new or revised theory is developed. Factors that enhance or reduce intervention effectiveness are also identified.
  • Using logic models or other types of conceptual framework: A logic model (Glenton et al 2013) or other type of conceptual framework, which represents the processes by which an intervention produces change, provides a common scaffold for integrating findings across different types of evidence (Booth and Carroll 2015). Frameworks can be specified a priori from the literature or through stakeholder engagement, or newly developed during the review. Findings from quantitative studies testing the effects of interventions and those from qualitative evidence are used to develop and/or further refine the model.
  • Testing hypotheses derived from syntheses of qualitative evidence: Quantitative studies are grouped according to the presence or absence of the proposition specified by the hypothesis to be tested, and subgroup analysis is used to explore differential findings on the effects of interventions (Thomas et al 2004).
  • Qualitative comparative analysis (QCA): Findings from a qualitative synthesis are used to identify the range of features that are important for successful interventions, and the mechanisms through which these features operate. A QCA then tests whether or not the features are associated with effective interventions (Kahwati et al 2016). The analysis unpicks multiple potential pathways to effectiveness, accommodating scenarios where the same intervention feature is associated with both effective and less effective interventions, depending on context. QCA offers potential for use in integration but, unlike the other methods and tools presented here, does not yet have sufficient methodological guidance available. However, exemplar reviews using QCA are available (Thomas et al 2014, Harris et al 2015, Kahwati et al 2016).
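The matrix juxtaposition described in the first bullet above can be sketched in a few lines of code. Everything here is invented for illustration: in a real review the trials, their effect findings, and the candidate feature would come from the intervention review and the qualitative synthesis respectively.

```python
# Hypothetical sketch of juxtaposing a feature hypothesized by the
# qualitative synthesis against trial effect findings. Trial names,
# effects and the 'peer delivery' feature are invented examples.
trials = {
    "Trial A": {"effect": "improves outcome", "peer delivery": True},
    "Trial B": {"effect": "no difference",    "peer delivery": False},
    "Trial C": {"effect": "improves outcome", "peer delivery": True},
}

def build_matrix(trials: dict, feature: str) -> dict:
    """Group trials by (effect finding, presence/absence of feature)."""
    matrix = {}
    for name, trial in trials.items():
        cell = (trial["effect"], trial[feature])
        matrix.setdefault(cell, []).append(name)
    return matrix

for (effect, present), names in sorted(build_matrix(trials, "peer delivery").items()):
    print(f"{effect} | feature present={present}: {', '.join(names)}")
```

Patterns in the populated cells (here, the feature co-occurring only with favourable effects) are what the review team would then interrogate, for example with subgroup analysis or QCA.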

Review authors can use the above methods in combination (e.g. patterns observed through juxtaposing findings within a matrix can be tested using subgroup analysis or QCA). Analysing programme theory, using logic models and conducting QCA require members of the review team with specific skills in these methods. Subgroup analysis and QCA are not suitable when limited evidence is available (Harden et al 2018, Noyes et al 2019). (See also Chapter 17 on intervention complexity.)

21.14 Reporting the protocol and qualitative evidence synthesis

Reporting standards and tools designed for intervention reviews (such as Cochrane’s MECIR standards ( http://methods.cochrane.org/mecir ) or the PRISMA Statement (Liberati et al 2009)) may not be appropriate for qualitative evidence syntheses or an integrated mixed-method review. Additional guidance on how to choose, adapt or create a hybrid reporting tool is provided as a 5-point ‘decision flowchart’ ( Figure 21.14.a ) (Flemming et al 2018). Review authors should consider whether a specific set of reporting guidance is available (e.g. eMERGe for meta-ethnographies (France et al 2015)), whether generic guidance (e.g. ENTREQ (Tong et al 2012)) is suitable, or whether additional checklists or tools are appropriate for reporting a specific aspect of the review.

Figure 21.14.a Decision flowchart for choice of reporting approach for syntheses of qualitative, implementation or process evaluation evidence (Flemming et al 2018). Reproduced with permission of Elsevier


21.15 Chapter information

Authors: Jane Noyes, Andrew Booth, Margaret Cargo, Kate Flemming, Angela Harden, Janet Harris, Ruth Garside, Karin Hannes, Tomás Pantoja, James Thomas

Acknowledgements: This chapter replaces Chapter 20 in the first edition of this Handbook (2008) and subsequent Version 5.2. We would like to thank the previous Chapter 20 authors Jennie Popay and Alan Pearson. Elements of this chapter draw on previous supplemental guidance produced by the Cochrane Qualitative and Implementation Methods Group Convenors, to which Simon Lewin contributed.

Funding: JT is supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

21.16 References

Ames HM, Glenton C, Lewin S. Parents' and informal caregivers' views and experiences of communication about routine childhood vaccination: a synthesis of qualitative evidence. Cochrane Database of Systematic Reviews 2017; 2 : CD011787.

Anderson LM, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis D, Tugwell P. Using logic models to capture complexity in systematic reviews. Research Synthesis Methods 2011; 2 : 33-42.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Medical Research Methodology 2009; 9 : 59.

Benoot C, Hannes K, Bilsen J. The use of purposeful sampling in a qualitative evidence synthesis: a worked example on sexual adjustment to a cancer trajectory. BMC Medical Research Methodology 2016; 16 : 21.

Bonell C, Jamal F, Harden A, Wells H, Parry W, Fletcher A, Petticrew M, Thomas J, Whitehead M, Campbell R, Murphy S, Moore L. Public Health Research. Systematic review of the effects of schools and school environment interventions on health: evidence mapping and synthesis . Southampton (UK): NIHR Journals Library; 2013.

Booth A, Harris J, Croot E, Springett J, Campbell F, Wilkins E. Towards a methodology for cluster searching to provide conceptual and contextual "richness" for systematic reviews of complex interventions: case study (CLUSTER). BMC Medical Research Methodology 2013; 13 : 118.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of 'best fit' framework synthesis for studies of improvement in healthcare. BMJ Quality and Safety 2015; 24 : 700-708.

Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, Mozygemba K, Refolo P, Sacchini D, Tummers M, Rehfuess E. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessment for complex interventions 2016. https://www.integrate-hta.eu/wp-content/uploads/2016/02/Guidance-on-choosing-qualitative-evidence-synthesis-methods-for-use-in-HTA-of-complex-interventions.pdf

Booth A. Qualitative evidence synthesis. In: Facey K, editor. Patient involvement in Health Technology Assessment . Singapore: Springer; 2017. p. 187-199.

Booth A, Noyes J, Flemming K, Gehardus A, Wahlster P, Jan van der Wilt G, Mozygemba K, Refolo P, Sacchini D, Tummers M, Rehfuess E. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. Journal of Clinical Epidemiology 2018; 99 : 41-52.

Booth A, Moore G, Flemming K, Garside R, Rollins N, Tuncalp Ö, Noyes J. Taking account of context in systematic reviews and guidelines considering a complexity perspective. BMJ Global Health 2019a; 4 : e000840.

Booth A, Noyes J, Flemming K, Moore G, Tuncalp Ö, Shakibazadeh E. Formulating questions to address the acceptability and feasibility of complex interventions in qualitative evidence synthesis. BMJ Global Health 2019b; 4 : e001107.

Candy B, King M, Jones L, Oliver S. Using qualitative synthesis to explore heterogeneity of complex interventions. BMC Medical Research Methodology 2011; 11 : 124.

Cargo M, Harris J, Pantoja T, Booth A, Harden A, Hannes K, Thomas J, Flemming K, Garside R, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 4: methods for assessing evidence on intervention implementation. Journal of Clinical Epidemiology 2018; 97 : 59-69.

Carroll C, Booth A, Cooper K. A worked example of "best fit" framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Medical Research Methodology 2011; 11 : 29.

Carroll C, Booth A, Leaviss J, Rick J. "Best fit" framework synthesis: refining the method. BMC Medical Research Methodology 2013; 13 : 37.

Carroll C. Qualitative evidence synthesis to improve implementation of clinical guidelines. BMJ 2017; 356 : j80.

CASP. Making sense of evidence: 10 questions to help you make sense of qualitative research: Public Health Resource Unit, England; 2013. http://media.wix.com/ugd/dded87_29c5b002d99342f788c6ac670e49f274.pdf .

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qualitative Health Research 2012; 22 : 1435-1443.

Cooper C, Booth A, Britten N, Garside R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review. Systematic Reviews 2017; 6 : 234.

De Buck E, Hannes K, Cargo M, Van Remoortel H, Vande Veegaete A, Mosler HJ, Govender T, Vandekerckhove P, Young T. Engagement of stakeholders in the development of a Theory of Change for handwashing and sanitation behaviour change. International Journal of Environmental Research and Public Health 2018; 28 : 8-22.

Dixon-Woods M. Using framework-based synthesis for conducting reviews of qualitative studies. BMC Medicine 2011; 9 : 39.

Downe S, Finlayson K, Tuncalp, Metin Gulmezoglu A. What matters to women: a systematic scoping review to identify the processes and outcomes of antenatal care provision that are important to healthy pregnant women. BJOG: An International Journal of Obstetrics and Gynaecology 2016; 123 : 529-539.

El Sherif R, Pluye P, Gore G, Granikov V, Hong QN. Performance of a mixed filter to identify relevant studies for mixed studies reviews. Journal of the Medical Library Association 2016; 104 : 47-51.

Flemming K, Booth A, Hannes K, Cargo M, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 6: reporting guidelines for qualitative, implementation, and process evaluation evidence syntheses. Journal of Clinical Epidemiology 2018; 97 : 79-85.

Flemming K, Booth A, Garside R, Tuncalp O, Noyes J. Qualitative evidence synthesis for complex interventions and guideline development: clarification of the purpose, designs and relevant methods. BMJ Global Health 2019; 4 : e000882.

France EF, Ring N, Noyes J, Maxwell M, Jepson R, Duncan E, Turley R, Jones D, Uny I. Protocol-developing meta-ethnography reporting guidelines (eMERGe). BMC Medical Research Methodology 2015; 15 : 103.

France EF, Cunningham M, Ring N, Uny I, Duncan EAS, Jepson RG, Maxwell M, Roberts RJ, Turley RL, Booth A, Britten N, Flemming K, Gallagher I, Garside R, Hannes K, Lewin S, Noblit G, Pope C, Thomas J, Vanstone M, Higginbottom GMA, Noyes J. Improving reporting of meta-ethnography: the eMERGe reporting guidance. BMC Medical Research Methodology 2019; 19 : 25.

Garside R. Should we appraise the quality of qualitative research reports for systematic reviews, and if so, how? Innovation: The European Journal of Social Science Research 2014; 27 : 67-79.

Glenton C, Colvin CJ, Carlsen B, Swartz A, Lewin S, Noyes J, Rashidian A. Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database of Systematic Reviews 2013; 10 : CD010414.

Glenton C, Lewin S, Norris S. Chapter 15: Using evidence from qualitative research to develop WHO guidelines. In: Norris S, editor. World Health Organization Handbook for Guideline Development . 2nd. ed. Geneva: WHO; 2016.

Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal 2009; 26 : 91-108.

Greenhalgh T, Kristjansson E, Robinson V. Realist review to understand the efficacy of school feeding programmes. BMJ 2007; 335 : 858.

Harden A, Oakley A, Weston R. A review of the effectiveness and appropriateness of peer-delivered health promotion for young people. London: Institute of Education, University of London; 1999.

Harden A, Thomas J, Cargo M, Harris J, Pantoja T, Flemming K, Booth A, Garside R, Hannes K, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 5: methods for integrating qualitative and implementation evidence within intervention effectiveness reviews. Journal of Clinical Epidemiology 2018; 97 : 70-78.

Harris JL, Booth A, Cargo M, Hannes K, Harden A, Flemming K, Garside R, Pantoja T, Thomas J, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis. Journal of Clinical Epidemiology 2018; 97 : 39-48.

Harris KM, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self management interventions for asthma in children and adolescents: a mixed methods systematic review (Protocol). Cochrane Database of Systematic Reviews 2015; 4 : CD011651.

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, Altman DG, Barbour V, Macdonald H, Johnston M, Lamb SE, Dixon-Woods M, McCulloch P, Wyatt JC, Chan AW, Michie S. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014; 348 : g1687.

Houghton C, Murphy K, Meehan B, Thomas J, Brooker D, Casey D. From screening to synthesis: using NVivo to enhance transparency in qualitative evidence synthesis. Journal of Clinical Nursing 2017; 26 : 873-881.

Hurley M, Dickson K, Hallett R, Grant R, Hauari H, Walsh N, Stansfield C, Oliver S. Exercise interventions and patient beliefs for people with hip, knee or hip and knee osteoarthritis: a mixed methods review. Cochrane Database of Systematic Reviews 2018; 4 : CD010842.

Kahwati L, Jacobs S, Kane H, Lewis M, Viswanathan M, Golin CE. Using qualitative comparative analysis in a systematic review of a complex intervention. Systematic Reviews 2016; 5 : 82.

Kelly MP, Noyes J, Kane RL, Chang C, Uhl S, Robinson KA, Springs S, Butler ME, Guise JM. AHRQ series on complex intervention systematic reviews-paper 2: defining complexity, formulating scope, and questions. Journal of Clinical Epidemiology 2017; 90 : 11-18.

Kneale D, Thomas J, Harris K. Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews. PloS One 2015; 10 : e0142187.

Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implementation Science 2010; 5 : 69.

Lewin S, Munabi-Babigumira S, Glenton C, Daniels K, Bosch-Capblanch X, van Wyk BE, Odgaard-Jensen J, Johansen M, Aja GN, Zwarenstein M, Scheel IB. Lay health workers in primary and community health care for maternal and child health and the management of infectious diseases. Cochrane Database of Systematic Reviews 2010; 3 : CD004015.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Medicine 2015; 12 : e1001895.

Lewin S, Hendry M, Chandler J, Oxman AD, Michie S, Shepperd S, Reeves BC, Tugwell P, Hannes K, Rehfuess EA, Welch V, McKenzie JE, Burford B, Petkovic J, Anderson LM, Harris J, Noyes J. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR). BMC Medical Research Methodology 2017; 17 : 76.

Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, Bohren MA, Tuncalp O, Colvin CJ, Garside R, Carlsen B, Langlois EV, Noyes J. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implementation Science 2018; 13 : 2.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009; 339 : b2700.

Moore G, Audrey S, Barker M, Bond L, Bonell C, Harderman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015; 350 : h1258.

Munabi-Babigumira S, Glenton C, Lewin S, Fretheim A, Nabudere H. Factors that influence the provision of intrapartum and postnatal care by skilled birth attendants in low- and middle-income countries: a qualitative evidence synthesis. Cochrane Database of Systematic Reviews 2017; 11 : CD011558.

Munthe-Kaas H, Glenton C, Booth A, Noyes J, Lewin S. Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool. BMC Medical Research Methodology 2019; 19 : 113.

National Institute for Health Care Excellence. NICE Process and Methods Guides. Methods for the Development of NICE Public Health Guidance . London: National Institute for Health and Care Excellence (NICE); 2012.

Newton BJ, Rothlingova Z, Gutteridge R, LeMarchand K, Raphael JH. No room for reflexivity? Critical reflections following a systematic review of qualitative research. Journal of Health Psychology 2012; 17 : 866-885.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies . Newbury Park: Sage Publications, Inc; 1988.

Noyes J, Hendry M, Booth A, Chandler J, Lewin S, Glenton C, Garside R. Current use was established and Cochrane guidance on selection of social theories for systematic reviews of complex interventions was developed. Journal of Clinical Epidemiology 2016a; 75 : 78-92.

Noyes J, Hendry M, Lewin S, Glenton C, Chandler J, Rashidian A. Qualitative "trial-sibling" studies and "unrelated" qualitative studies contributed to complex intervention reviews. Journal of Clinical Epidemiology 2016b; 74 : 133-143.

Noyes J, Booth A, Flemming K, Garside R, Harden A, Lewin S, Pantoja T, Hannes K, Cargo M, Thomas J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings. Journal of Clinical Epidemiology 2018a; 97 : 49-58.

Noyes J, Booth A, Cargo M, Flemming K, Garside R, Hannes K, Harden A, Harris J, Lewin S, Pantoja T, Thomas J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 1: introduction. Journal of Clinical Epidemiology 2018b; 97 : 35-38.

Noyes J, Booth A, Moore G, Flemming K, Tuncalp O, Shakibazadeh E. Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods. BMJ Global Health 2019; 4 (Suppl 1) : e000893.

O'Neill J, Tabish H, Welch V, Petticrew M, Pottie K, Clarke M, Evans T, Pardo Pardo J, Waters E, White H, Tugwell P. Applying an equity lens to interventions: using PROGRESS ensures consideration of socially stratifying factors to illuminate inequities in health. Journal of Clinical Epidemiology 2014; 67 : 56-64.

Oliver S, Rees R, Clarke-Jones L, Milne R, Oakley A, Gabbay J, Stein K, Buchanan P, Gyte G. A multidimensional conceptual framework for analysing public involvement in health services research. Health Expectations 2008; 11 : 72-84.

Petticrew M, Knai C, Thomas J, Rehfuess E, Noyes J, Gerhardus A, Grimshaw J, Rutter H. Implications of a complexity perspective for systematic reviews and guideline development in health decision making. BMJ Global Health 2019; 4 (Suppl 1) : e000899.



Archer Library

Qualitative research: literature review.


Exploring the literature review 

Literature review model: 6 steps

Adapted from The Literature Review, Machi & McEvoy (2009, p. 13).

1. Select a Topic

"All research begins with curiosity" (Machi & McEvoy, 2009, p. 14)

Your topic selection, along with a fully defined research interest and question, is supervised (and approved) by your professor. Tips for crafting your topic include:

  • Be specific. Take time to define your interest.
  • Topic Focus. Fully describe and sufficiently narrow the focus for research.
  • Academic Discipline. Learn more about your area of research & refine the scope.
  • Avoid Bias. Be aware of bias that you (as a researcher) may have.
  • Document your research. Use Google Docs to track your research process.
  • Research apps. Consider using Evernote or Zotero to track your research.

Consider Purpose

What will your topic and research address?

In The Literature Review: A Step-by-Step Guide for Students , Ridley notes that literature reviews serve several purposes (2008, pp. 16-17), including the following:

  • Historical background for the research;
  • Overview of current field provided by "contemporary debates, issues, and questions;"
  • Theories and concepts related to your research;
  • Introduce "relevant terminology" - or academic language - being used in the field;
  • Connect to existing research - does your work "extend or challenge [this] or address a gap;" 
  • Provide "supporting evidence for a practical problem or issue" that your research addresses.

★ Schedule a research appointment

At this point in your literature review, take time to meet with a librarian. Why? Understanding the subject terminology used in databases can be challenging. Archer Librarians can help you structure a search, preparing you for step two. How? Contact a librarian directly or use the online form to schedule an appointment. Details are provided in the Schedule an Appointment section below.

2. Search the Literature

Collect & Select Data: Preview, select, and organize

AU Library is your go-to resource for this step in your literature review process. The literature search will include books and ebooks, scholarly and practitioner journals, theses and dissertations, and indexes. You may also choose to include web sites, blogs, open access resources, and newspapers. This library guide provides access to resources needed to complete a literature review.

Books & eBooks: Archer Library & OhioLINK

 

Databases: Scholarly & Practitioner Journals

Review the Library Databases tab on this library guide; it provides links to recommended databases for Education & Psychology, Business, and General & Social Sciences.

Expand your journal search; a complete listing of available AU Library and OhioLINK databases is available on the Databases A to Z list . Search the list by subject, type, or name, or use the search box for a general title search. The A to Z list also includes open access resources and select internet sites.

Databases: Theses & Dissertations

Review the Library Databases tab on this guide; it includes Theses & Dissertations resources. AU Library also has student-authored theses and dissertations available in print; search the library catalog for these titles.

Did you know? If you are looking for particular chapters within a dissertation that is not fully available online, it is possible to submit an ILL article request. Do this instead of requesting the entire dissertation.

Newspapers:  Databases & Internet

Consider current literature in your academic field. AU Library's database collection includes The Chronicle of Higher Education and The Wall Street Journal .  The Internet Resources tab in this guide provides links to newspapers and online journals such as Inside Higher Ed , COABE Journal , and Education Week .

The Chronicle of Higher Education has the nation’s largest newsroom dedicated to covering colleges and universities. It is a source of news, information, and jobs for college and university faculty members and administrators.

The Chronicle features the complete contents of the latest print issue; daily news and advice columns; current job listings; an archive of previously published content; discussion forums; and career-building tools such as online CV management and salary databases. Dates covered: 1970-present.

The Wall Street Journal offers in-depth coverage of national and international business and finance, as well as first-rate coverage of hard news, all from America's premier financial newspaper. Records include complete bibliographic information as well as subjects, companies, people, products, and geographic areas. Comprehensive coverage back to 1984 is available through the ProQuest database.

Newspaper Source provides cover-to-cover full text for hundreds of national (U.S.), international, and regional newspapers. In addition, it offers television and radio news transcripts from major networks, including CBS News, CNN, CNN International, and FOX News.

Search Strategies & Boolean Operators

There are three basic Boolean operators: AND, OR, and NOT.

Used with your search terms, Boolean operators will either expand or limit results. What purpose do they serve? They help to define the relationship between your search terms. For example, the operator AND combines terms and narrows the search, while OR broadens it. When searching some databases, and Google, the operator AND may be implied.

Overview of Boolean operators

AND: Search results will contain all of the terms. AND connects terms, limits the search, and will reduce the number of results returned. Example: adult learning AND online education.

OR: Search results will contain any of the search terms. OR redefines the connection of the terms, expands the search, and increases the number of results returned. Example: adult learning OR online education.

NOT: Search results exclude the specified search term, which reduces the number of results. Example: adult learning NOT online education.

About the example: Boolean searches were conducted on November 4, 2019; result numbers may vary at a later date. No additional database limiters were set to further narrow search returns.
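Conceptually (and setting aside how any particular database actually indexes text), the three operators behave like set operations over the documents that match each term. Here is a minimal Python sketch using an invented three-document corpus; it is an illustration of the logic, not of real database behavior:

```python
# Hypothetical mini-corpus: document id -> text
docs = {
    1: "adult learning in online education",
    2: "adult learning in the workplace",
    3: "online education platforms for children",
}

def matching(term):
    """Return the set of document ids whose text contains the phrase."""
    return {doc_id for doc_id, text in docs.items() if term in text}

# AND narrows: only documents containing both phrases
both = matching("adult learning") & matching("online education")
# OR expands: documents containing either phrase
either = matching("adult learning") | matching("online education")
# NOT excludes: documents with the first phrase but not the second
excluded = matching("adult learning") - matching("online education")

print(sorted(both))      # [1]
print(sorted(either))    # [1, 2, 3]
print(sorted(excluded))  # [2]
```

Note how AND returns the smallest result set and OR the largest, which mirrors the behavior described in the table above.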

Database Search Limiters

Database strategies for targeted search results.

Most databases include limiters, or additional parameters, that you may use to strategically focus search results. EBSCO databases, such as Education Research Complete & Academic Search Complete, provide options to:

  • Limit results to full text;
  • Limit results to scholarly journals and to items with references available;
  • Limit results by source type (journals, magazines, conference papers, reviews, and newspapers);
  • Limit results by publication date.

Keep in mind that these tools are defined as limiters for a reason; adding them to a search will limit the number of results returned.  This can be a double-edged sword.  How? 

  • If limiting results to full-text only, you may miss an important piece of research that could change the direction of your research. Interlibrary loan is available to students, free of charge. Request articles that are not available in full-text; they will be sent to you via email.
  • If narrowing publication date, you may eliminate significant historical - or recent - research conducted on your topic.
  • Limiting resource type to a specific type of material may cause bias in the research results.

Use limiters with care. When starting a search, consider opting out of limiters until the initial literature screening is complete. The second or third time through your research may be the ideal time to focus on specific time periods or material (scholarly vs newspaper).

★ Truncating Search Terms

Expanding your search term at the root.

Truncating is often referred to as 'wildcard' searching. Databases may have their own specific wildcard characters; however, the most commonly used are the asterisk (*) and the question mark (?). When used within your search, they will expand the returned results.

Asterisk (*) Wildcard

Using the asterisk wildcard will return varied spellings of the truncated word. In the following example, the search term education was truncated after the letter "t."

Original search: adult education
Truncated search: adult educat*
Results included: educate, education, educator, educators'/educators, educating, & educational
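Outside of library databases, the same truncation idea can be approximated with a regular expression, where the asterisk wildcard in educat* corresponds roughly to the pattern educat\w*. A small Python illustration (the sample sentence is invented):

```python
import re

# educat* as a regex: "educat" at a word boundary, followed by any word characters
pattern = re.compile(r"\beducat\w*", re.IGNORECASE)

text = "Educators say education should educate; educating is an educational act."
print(pattern.findall(text))
# ['Educators', 'education', 'educate', 'educating', 'educational']
```

The word boundary (\b) keeps the pattern anchored to the root of the word, just as database truncation expands only the ending.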

Explore these database help pages for additional information on crafting search terms.

  • EBSCO Connect: Searching with Wildcards and Truncation Symbols
  • EBSCO Connect: Searching with Boolean Operators
  • EBSCO Connect: EBSCOhost Search Tips
  • EBSCO Connect: Basic Searching with EBSCO
  • ProQuest Help: Search Tips
  • ERIC: How does ERIC search work?

★ EBSCO Databases & Google Drive

Tips for saving research directly to Google Drive.

Researching in an EBSCO database?

It is possible to save articles (PDF and HTML) and abstracts in EBSCOhost databases directly to Google Drive. Select the Google Drive icon, authenticate using a Google account, and an EBSCO folder will be created in your account. This is a great option for managing your research. If documenting your research in a Google Doc, consider linking the information to the actual articles saved in Drive.


EBSCOHost Databases & Google Drive: Managing your Research

This video features an overview of how to use Google Drive with EBSCO databases to help manage your research. It presents information for connecting an active Google account to EBSCO and steps needed to provide permission for EBSCO to manage a folder in Drive.

About the Video: Closed captioning is available; select CC from the video menu. If you need to review a specific area of the video, view it on YouTube and expand the video description for access to topic time stamps. A video transcript is provided below.


Defining Literature Review

What is a literature review?

A definition from the Online Dictionary for Library and Information Sciences .

A literature review is "a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works" (Reitz, 2014). 

A systematic review is "a literature review focused on a specific research question, which uses explicit methods to minimize bias in the identification, appraisal, selection, and synthesis of all the high-quality evidence pertinent to the question" (Reitz, 2014).


About this page

EBSCO Connect [Discovery and Search]. (2022). Searching with Boolean operators. Retrieved May 3, 2022, from https://connect.ebsco.com/s/?language=en_US

EBSCO Connect [Discovery and Search]. (2022). Searching with wildcards and truncation symbols. Retrieved May 3, 2022, from https://connect.ebsco.com/s/?language=en_US

Machi, L.A. & McEvoy, B.T. (2009). The literature review . Thousand Oaks, CA: Corwin Press.

Reitz, J.M. (2014). Online dictionary for library and information science. ABC-CLIO, Libraries Unlimited . Retrieved from https://www.abc-clio.com/ODLIS/odlis_A.aspx

Ridley, D. (2008). The literature review: A step-by-step guide for students . Thousand Oaks, CA: Sage Publications, Inc.

Archer Librarians

Schedule an appointment.

Contact a librarian directly (email), or submit a request form. If you have worked with someone before, you can request them on the form.

  • ★ Archer Library Help • Online Request Form
  • Carrie Halquist • Reference & Instruction
  • Jessica Byers • Reference & Curation
  • Don Reams • Corrections Education & Reference
  • Diane Schrecker • Education & Head of the IRC
  • Tanaya Silcox • Technical Services & Business
  • Sarah Thomas • Acquisitions & ATS Librarian
  • Last Updated: Jul 31, 2024 10:06 AM
  • URL: https://libguides.ashland.edu/qualitative

Archer Library • Ashland University © Copyright 2023. An Equal Opportunity/Equal Access Institution.


A Guide to Writing a Qualitative Systematic Review Protocol to Enhance Evidence-Based Practice in Nursing and Health Care

Affiliations.

  • 1 PhD candidate, School of Nursing and Midwifery, Monash University, and Clinical Nurse Specialist, Adult and Pediatric Intensive Care Unit, Monash Health, Melbourne, Victoria, Australia.
  • 2 Lecturer, School of Nursing and Midwifery, Monash University, Melbourne, Victoria, Australia.
  • 3 Senior Lecturer, School of Nursing and Midwifery, Monash University, Melbourne, Victoria, Australia.
  • PMID: 26790142
  • DOI: 10.1111/wvn.12134

Background: The qualitative systematic review is a rapidly developing area of nursing research. In order to present trustworthy, high-quality recommendations, such reviews should be based on a review protocol to minimize bias and enhance transparency and reproducibility. Although there are a number of resources available to guide researchers in developing a quantitative review protocol, very few resources exist for qualitative reviews.

Aims: To guide researchers through the process of developing a qualitative systematic review protocol, using an example review question.

Methodology: The key elements required in a systematic review protocol are discussed, with a focus on application to qualitative reviews: Development of a research question; formulation of key search terms and strategies; designing a multistage review process; critical appraisal of qualitative literature; development of data extraction techniques; and data synthesis. The paper highlights important considerations during the protocol development process, and uses a previously developed review question as a working example.

Implications for research: This paper will assist novice researchers in developing a qualitative systematic review protocol. By providing a worked example of a protocol, the paper encourages the development of review protocols, enhancing the trustworthiness and value of the completed qualitative systematic review findings.

Linking evidence to action: Qualitative systematic reviews should be based on well planned, peer reviewed protocols to enhance the trustworthiness of results and thus their usefulness in clinical practice. Protocols should outline, in detail, the processes which will be used to undertake the review, including key search terms, inclusion and exclusion criteria, and the methods used for critical appraisal, data extraction and data analysis to facilitate transparency of the review process. Additionally, journals should encourage and support the publication of review protocols, and should require reference to a protocol prior to publication of the review results.

Keywords: guidelines; meta synthesis; qualitative; systematic review protocol.

© 2016 Sigma Theta Tau International.


Research Method


Qualitative Research – Methods, Analysis Types and Guide

Qualitative Research

Qualitative research is a type of research methodology that focuses on exploring and understanding people’s beliefs, attitudes, behaviors, and experiences through the collection and analysis of non-numerical data. It seeks to answer research questions through the examination of subjective data, such as interviews, focus groups, observations, and textual analysis.

Qualitative research aims to uncover the meaning and significance of social phenomena, and it typically involves a more flexible and iterative approach to data collection and analysis compared to quantitative research. Qualitative research is often used in fields such as sociology, anthropology, psychology, and education.

Qualitative Research Methods


Qualitative Research Methods are as follows:

One-to-One Interview

This method involves conducting an interview with a single participant to gain a detailed understanding of their experiences, attitudes, and beliefs. One-to-one interviews can be conducted in-person, over the phone, or through video conferencing. The interviewer typically uses open-ended questions to encourage the participant to share their thoughts and feelings. One-to-one interviews are useful for gaining detailed insights into individual experiences.

Focus Groups

This method involves bringing together a group of people to discuss a specific topic in a structured setting. The focus group is led by a moderator who guides the discussion and encourages participants to share their thoughts and opinions. Focus groups are useful for generating ideas and insights, exploring social norms and attitudes, and understanding group dynamics.

Ethnographic Studies

This method involves immersing oneself in a culture or community to gain a deep understanding of its norms, beliefs, and practices. Ethnographic studies typically involve long-term fieldwork and observation, as well as interviews and document analysis. Ethnographic studies are useful for understanding the cultural context of social phenomena and for gaining a holistic understanding of complex social processes.

Text Analysis

This method involves analyzing written or spoken language to identify patterns and themes. Text analysis can be quantitative or qualitative. Qualitative text analysis involves close reading and interpretation of texts to identify recurring themes, concepts, and patterns. Text analysis is useful for understanding media messages, public discourse, and cultural trends.

Case Studies

This method involves an in-depth examination of a single person, group, or event to gain an understanding of complex phenomena. Case studies typically involve a combination of data collection methods, such as interviews, observations, and document analysis, to provide a comprehensive understanding of the case. Case studies are useful for exploring unique or rare cases, and for generating hypotheses for further research.

Process of Observation

This method involves systematically observing and recording behaviors and interactions in natural settings. The observer may take notes, use audio or video recordings, or use other methods to document what they see. Process of observation is useful for understanding social interactions, cultural practices, and the context in which behaviors occur.

Record Keeping

This method involves keeping detailed records of observations, interviews, and other data collected during the research process. Record keeping is essential for ensuring the accuracy and reliability of the data, and for providing a basis for analysis and interpretation.

Surveys

This method involves collecting data from a large sample of participants through a structured questionnaire. Surveys can be conducted in person, over the phone, through mail, or online. Surveys are useful for collecting data on attitudes, beliefs, and behaviors, and for identifying patterns and trends in a population.

Qualitative data analysis is a process of turning unstructured data into meaningful insights. It involves extracting and organizing information from sources like interviews, focus groups, and surveys. The goal is to understand people’s attitudes, behaviors, and motivations.

Qualitative Research Analysis Methods

Qualitative Research analysis methods involve a systematic approach to interpreting and making sense of the data collected in qualitative research. Here are some common qualitative data analysis methods:

Thematic Analysis

This method involves identifying patterns or themes in the data that are relevant to the research question. The researcher reviews the data, identifies keywords or phrases, and groups them into categories or themes. Thematic analysis is useful for identifying patterns across multiple data sources and for generating new insights into the research topic.
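The mechanical part of this process, grouping coded excerpts so that clusters of related codes can be examined as candidate themes, can be sketched in a few lines of Python. The excerpts and codes below are invented, and the assignment of codes in real thematic analysis is an interpretive judgment by the researcher, not a lookup:

```python
from collections import defaultdict

# Hypothetical interview excerpts, each tagged with a researcher-assigned code
coded_excerpts = [
    ("I never know who to ask for help", "support"),
    ("The tutorials made the software easy to learn", "training"),
    ("My mentor checks in with me every week", "support"),
    ("We had no orientation at all", "training"),
]

# Group excerpts by code; clusters of related codes become candidate themes
themes = defaultdict(list)
for excerpt, code in coded_excerpts:
    themes[code].append(excerpt)

for code, excerpts in sorted(themes.items()):
    print(f"{code}: {len(excerpts)} excerpt(s)")
# support: 2 excerpt(s)
# training: 2 excerpt(s)
```

Once grouped, the researcher reviews each cluster to decide whether it forms a coherent theme or should be merged, split, or discarded.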

Content Analysis

This method involves analyzing the content of written or spoken language to identify key themes or concepts. Content analysis can be quantitative or qualitative. Qualitative content analysis involves close reading and interpretation of texts to identify recurring themes, concepts, and patterns. Content analysis is useful for identifying patterns in media messages, public discourse, and cultural trends.

Discourse Analysis

This method involves analyzing language to understand how it constructs meaning and shapes social interactions. Discourse analysis can involve a variety of methods, such as conversation analysis, critical discourse analysis, and narrative analysis. Discourse analysis is useful for understanding how language shapes social interactions, cultural norms, and power relationships.

Grounded Theory Analysis

This method involves developing a theory or explanation based on the data collected. Grounded theory analysis starts with the data and uses an iterative process of coding and analysis to identify patterns and themes in the data. The theory or explanation that emerges is grounded in the data, rather than preconceived hypotheses. Grounded theory analysis is useful for understanding complex social phenomena and for generating new theoretical insights.

Narrative Analysis

This method involves analyzing the stories or narratives that participants share to gain insights into their experiences, attitudes, and beliefs. Narrative analysis can involve a variety of methods, such as structural analysis, thematic analysis, and discourse analysis. Narrative analysis is useful for understanding how individuals construct their identities, make sense of their experiences, and communicate their values and beliefs.

Phenomenological Analysis

This method involves analyzing how individuals make sense of their experiences and the meanings they attach to them. Phenomenological analysis typically involves in-depth interviews with participants to explore their experiences in detail. Phenomenological analysis is useful for understanding subjective experiences and for developing a rich understanding of human consciousness.

Comparative Analysis

This method involves comparing and contrasting data across different cases or groups to identify similarities and differences. Comparative analysis can be used to identify patterns or themes that are common across multiple cases, as well as to identify unique or distinctive features of individual cases. Comparative analysis is useful for understanding how social phenomena vary across different contexts and groups.

Applications of Qualitative Research

Qualitative research has many applications across different fields and industries. Here are some examples of how qualitative research is used:

  • Market Research: Qualitative research is often used in market research to understand consumer attitudes, behaviors, and preferences. Researchers conduct focus groups and one-on-one interviews with consumers to gather insights into their experiences and perceptions of products and services.
  • Health Care: Qualitative research is used in health care to explore patient experiences and perspectives on health and illness. Researchers conduct in-depth interviews with patients and their families to gather information on their experiences with different health care providers and treatments.
  • Education: Qualitative research is used in education to understand student experiences and to develop effective teaching strategies. Researchers conduct classroom observations and interviews with students and teachers to gather insights into classroom dynamics and instructional practices.
  • Social Work : Qualitative research is used in social work to explore social problems and to develop interventions to address them. Researchers conduct in-depth interviews with individuals and families to understand their experiences with poverty, discrimination, and other social problems.
  • Anthropology : Qualitative research is used in anthropology to understand different cultures and societies. Researchers conduct ethnographic studies and observe and interview members of different cultural groups to gain insights into their beliefs, practices, and social structures.
  • Psychology : Qualitative research is used in psychology to understand human behavior and mental processes. Researchers conduct in-depth interviews with individuals to explore their thoughts, feelings, and experiences.
  • Public Policy : Qualitative research is used in public policy to explore public attitudes and to inform policy decisions. Researchers conduct focus groups and one-on-one interviews with members of the public to gather insights into their perspectives on different policy issues.

How to Conduct Qualitative Research

Here are some general steps for conducting qualitative research:

  • Identify your research question: Qualitative research starts with a research question or set of questions that you want to explore. This question should be focused and specific, but also broad enough to allow for exploration and discovery.
  • Select your research design: There are different types of qualitative research designs, including ethnography, case study, grounded theory, and phenomenology. You should select a design that aligns with your research question and that will allow you to gather the data you need to answer your research question.
  • Recruit participants: Once you have your research question and design, you need to recruit participants. The number of participants you need will depend on your research design and the scope of your research. You can recruit participants through advertisements, social media, or through personal networks.
  • Collect data: There are different methods for collecting qualitative data, including interviews, focus groups, observation, and document analysis. You should select the method or methods that align with your research design and that will allow you to gather the data you need to answer your research question.
  • Analyze data: Once you have collected your data, you need to analyze it. This involves reviewing your data, identifying patterns and themes, and developing codes to organize your data. You can use different software programs to help you analyze your data, or you can do it manually.
  • Interpret data: Once you have analyzed your data, you need to interpret it. This involves making sense of the patterns and themes you have identified, and developing insights and conclusions that answer your research question. You should be guided by your research question and use your data to support your conclusions.
  • Communicate results: Once you have interpreted your data, you need to communicate your results. This can be done through academic papers, presentations, or reports. You should be clear and concise in your communication, and use examples and quotes from your data to support your findings.
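The analyze-and-interpret steps above can be sketched in miniature. This is a toy illustration only, with invented interview excerpts and an invented keyword "code book": real qualitative coding is an interpretive act, not keyword matching, but the mechanics of tagging excerpts with codes and tallying themes look roughly like this.

```python
from collections import Counter

# Hypothetical interview excerpts (invented for illustration).
excerpts = [
    "I felt the nurses really listened to me",
    "The waiting times were frustrating",
    "Nobody explained the treatment options",
]

# Hypothetical code book mapping themes to indicative keywords.
code_book = {
    "communication": ["listened", "explained"],
    "access": ["waiting"],
}

def code_excerpt(text):
    """Return the set of themes whose keywords appear in the excerpt."""
    text = text.lower()
    return {theme for theme, keywords in code_book.items()
            if any(k in text for k in keywords)}

# Tally how many excerpts each theme appears in.
theme_counts = Counter(t for e in excerpts for t in code_excerpt(e))
print(theme_counts)  # "communication" is coded in two excerpts, "access" in one
```

In practice this tallying is usually done with dedicated qualitative analysis software or by hand, and codes are developed inductively from the data rather than fixed in advance.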

Examples of Qualitative Research

Here are some real-world examples of qualitative research:

  • Customer Feedback: A company may conduct qualitative research to understand the feedback and experiences of its customers. This may involve conducting focus groups or one-on-one interviews with customers to gather insights into their attitudes, behaviors, and preferences.
  • Healthcare: A healthcare provider may conduct qualitative research to explore patient experiences and perspectives on health and illness. This may involve conducting in-depth interviews with patients and their families to gather information on their experiences with different health care providers and treatments.
  • Education: An educational institution may conduct qualitative research to understand student experiences and to develop effective teaching strategies. This may involve conducting classroom observations and interviews with students and teachers to gather insights into classroom dynamics and instructional practices.
  • Social Work: A social worker may conduct qualitative research to explore social problems and to develop interventions to address them. This may involve conducting in-depth interviews with individuals and families to understand their experiences with poverty, discrimination, and other social problems.
  • Anthropology: An anthropologist may conduct qualitative research to understand different cultures and societies. This may involve conducting ethnographic studies and observing and interviewing members of different cultural groups to gain insights into their beliefs, practices, and social structures.
  • Psychology: A psychologist may conduct qualitative research to understand human behavior and mental processes. This may involve conducting in-depth interviews with individuals to explore their thoughts, feelings, and experiences.
  • Public Policy: A government agency or non-profit organization may conduct qualitative research to explore public attitudes and to inform policy decisions. This may involve conducting focus groups and one-on-one interviews with members of the public to gather insights into their perspectives on different policy issues.

Purpose of Qualitative Research

The purpose of qualitative research is to explore and understand the subjective experiences, behaviors, and perspectives of individuals or groups in a particular context. Unlike quantitative research, which focuses on numerical data and statistical analysis, qualitative research aims to provide in-depth, descriptive information that can help researchers develop insights and theories about complex social phenomena.

Qualitative research can serve multiple purposes, including:

  • Exploring new or emerging phenomena: Qualitative research can be useful for exploring new or emerging phenomena, such as new technologies or social trends. This type of research can help researchers develop a deeper understanding of these phenomena and identify potential areas for further study.
  • Understanding complex social phenomena: Qualitative research can be useful for exploring complex social phenomena, such as cultural beliefs, social norms, or political processes. This type of research can help researchers develop a more nuanced understanding of these phenomena and identify factors that may influence them.
  • Generating new theories or hypotheses: Qualitative research can be useful for generating new theories or hypotheses about social phenomena. By gathering rich, detailed data about individuals’ experiences and perspectives, researchers can develop insights that may challenge existing theories or lead to new lines of inquiry.
  • Providing context for quantitative data: Qualitative research can be useful for providing context for quantitative data. By gathering qualitative data alongside quantitative data, researchers can develop a more complete understanding of complex social phenomena and identify potential explanations for quantitative findings.

When to use Qualitative Research

Here are some situations where qualitative research may be appropriate:

  • Exploring a new area: If little is known about a particular topic, qualitative research can help to identify key issues, generate hypotheses, and develop new theories.
  • Understanding complex phenomena: Qualitative research can be used to investigate complex social, cultural, or organizational phenomena that are difficult to measure quantitatively.
  • Investigating subjective experiences: Qualitative research is particularly useful for investigating the subjective experiences of individuals or groups, such as their attitudes, beliefs, values, or emotions.
  • Conducting formative research: Qualitative research can be used in the early stages of a research project to develop research questions, identify potential research participants, and refine research methods.
  • Evaluating interventions or programs: Qualitative research can be used to evaluate the effectiveness of interventions or programs by collecting data on participants’ experiences, attitudes, and behaviors.

Characteristics of Qualitative Research

Qualitative research is characterized by several key features, including:

  • Focus on subjective experience: Qualitative research is concerned with understanding the subjective experiences, beliefs, and perspectives of individuals or groups in a particular context. Researchers aim to explore the meanings that people attach to their experiences and to understand the social and cultural factors that shape these meanings.
  • Use of open-ended questions: Qualitative research relies on open-ended questions that allow participants to provide detailed, in-depth responses. Researchers seek to elicit rich, descriptive data that can provide insights into participants’ experiences and perspectives.
  • Sampling based on purpose and diversity: Qualitative research often involves purposive sampling, in which participants are selected based on specific criteria related to the research question. Researchers may also seek to include participants with diverse experiences and perspectives to capture a range of viewpoints.
  • Data collection through multiple methods: Qualitative research typically involves the use of multiple data collection methods, such as in-depth interviews, focus groups, and observation. This allows researchers to gather rich, detailed data from multiple sources, which can provide a more complete picture of participants’ experiences and perspectives.
  • Inductive data analysis: Qualitative research relies on inductive data analysis, in which researchers develop theories and insights based on the data rather than testing pre-existing hypotheses. Researchers use coding and thematic analysis to identify patterns and themes in the data and to develop theories and explanations based on these patterns.
  • Emphasis on researcher reflexivity: Qualitative research recognizes the importance of the researcher’s role in shaping the research process and outcomes. Researchers are encouraged to reflect on their own biases and assumptions and to be transparent about their role in the research process.

Advantages of Qualitative Research

Qualitative research offers several advantages over other research methods, including:

  • Depth and detail: Qualitative research allows researchers to gather rich, detailed data that provides a deeper understanding of complex social phenomena. Through in-depth interviews, focus groups, and observation, researchers can gather detailed information about participants’ experiences and perspectives that may be missed by other research methods.
  • Flexibility: Qualitative research is a flexible approach that allows researchers to adapt their methods to the research question and context. Researchers can adjust their research methods in real time to gather more information or explore unexpected findings.
  • Contextual understanding: Qualitative research is well-suited to exploring the social and cultural context in which individuals or groups are situated. Researchers can gather information about cultural norms, social structures, and historical events that may influence participants’ experiences and perspectives.
  • Participant perspective: Qualitative research prioritizes the perspective of participants, allowing researchers to explore subjective experiences and understand the meanings that participants attach to their experiences.
  • Theory development: Qualitative research can contribute to the development of new theories and insights about complex social phenomena. By gathering rich, detailed data and using inductive data analysis, researchers can develop new theories and explanations that may challenge existing understandings.
  • Validity: Qualitative research can offer high validity by using multiple data collection methods, purposive and diverse sampling, and researcher reflexivity. This can help ensure that findings are credible and trustworthy.

Limitations of Qualitative Research

Qualitative research also has some limitations, including:

  • Subjectivity: Qualitative research relies on the subjective interpretation of researchers, which can introduce bias into the research process. The researcher’s perspective, beliefs, and experiences can influence the way data is collected, analyzed, and interpreted.
  • Limited generalizability: Qualitative research typically involves small, purposive samples that may not be representative of larger populations. This limits the generalizability of findings to other contexts or populations.
  • Time-consuming: Qualitative research can be a time-consuming process, requiring significant resources for data collection, analysis, and interpretation.
  • Resource-intensive: Qualitative research may require more resources than other research methods, including specialized training for researchers, specialized software for data analysis, and transcription services.
  • Limited reliability: Qualitative research may be less reliable than quantitative research, as it relies on the subjective interpretation of researchers. This can make it difficult to replicate findings or compare results across different studies.
  • Ethics and confidentiality: Qualitative research involves collecting sensitive information from participants, which raises ethical concerns about confidentiality and informed consent. Researchers must take care to protect the privacy and confidentiality of participants and obtain informed consent.


About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


  • Research article
  • Open access
  • Published: 18 May 2020

What feedback do reviewers give when reviewing qualitative manuscripts? A focused mapping review and synthesis

  • Oliver Rudolf HERBER   ORCID: orcid.org/0000-0003-3041-4098 1 ,
  • Caroline BRADBURY-JONES 2 ,
  • Susanna BÖLING 3 ,
  • Sarah COMBES 4 ,
  • Julian HIRT 5 ,
  • Yvonne KOOP 6 ,
  • Ragnhild NYHAGEN 7 ,
  • Jessica D. VELDHUIZEN 8 &
  • Julie TAYLOR 2 , 9  

BMC Medical Research Methodology, volume 20, Article number: 122 (2020)


Background

Peer review is at the heart of the scientific process. With the advent of digitisation, journals started to offer electronic articles or to publish online only. A new philosophy regarding the peer review process found its way into academia: open peer review. Open peer review as practiced by BioMed Central ( BMC ) is a type of peer review in which the names of authors and reviewers are disclosed and reviewer comments are published alongside the article. A number of articles have been published that assess peer reviews using quantitative research. However, no studies exist that have used qualitative methods to analyse the content of reviewers’ comments.

Methods

A focused mapping review and synthesis (FMRS) was undertaken of manuscripts reporting qualitative research submitted to BMC open access journals from 1 January to 31 March 2018. Free-text reviewer comments were extracted from peer review reports using a 77-item classification system organised according to three key dimensions that represented common themes and sub-themes. A two-stage analysis process was employed. First, frequency counts were undertaken to reveal patterns across themes/sub-themes. Second, thematic analysis was conducted on selected themes of the narrative portion of reviewer reports.

Results

A total of 107 manuscripts submitted to nine open-access journals were included in the FMRS. The frequency analysis revealed that, among the 30 most frequently employed themes, “writing criteria” (dimension II) is the top-ranking theme, followed by comments in relation to the “methods” (dimension I). In addition, some results suggest an underlying quantitative mindset of reviewers. Results are compared and contrasted in relation to established reporting guidelines for qualitative research to inform reviewers and authors of frequent feedback offered to enhance the quality of manuscripts.

Conclusions

This FMRS has highlighted some important issues that hold lessons for authors, reviewers and editors. We suggest modifying the current reporting guidelines by including a further item called “Degree of data transformation” to prompt authors and reviewers to judge whether the degree of data transformation is appropriate for the chosen analysis method. We further suggest that completion of a reporting checklist on submission become a requirement.


Peer review is at the heart of the scientific process. Reviewers independently examine a submitted manuscript and then recommend acceptance, rejection or – most frequently – revisions to be made before it gets published [ 1 ]. Editors rely on peer review to make decisions on which submissions warrant publication and to enhance quality standards. Typically, each manuscript is reviewed by two or three reviewers [ 2 ] who are chosen for their knowledge and expertise regarding the subject or methodology [ 3 ]. The history of peer review, often regarded as a “touchstone of modern evaluation of scientific quality” [ 4 ] is relatively short. For example, the British Medical Journal (now the BMJ ) was a pioneer when it established a system of external reviewers in 1893. But it was in the second half of the twentieth century that employing peers as reviewers became custom [ 5 ]. Then, in 1973 the prestigious scientific weekly Nature introduced a rigorous formal peer review system for every paper it printed [ 6 ].

Despite ever-growing concerns about its effectiveness, fairness and reliability [ 4 , 7 ], peer review as a central part of academic self-regulation is still considered the best available practice [ 8 ]. With the advent of digitisation in the late 1990s, scholarly publishing has changed dramatically with many journals starting to offer print as well as electronic articles or publishing online only [ 9 ]. The latter category includes for-profit journals such as BioMed Central ( BMC ) that have been online since their inception in 1999, with an ever evolving portfolio of currently over 300 peer-reviewed journals.

As compared to traditional print journals where individuals or libraries need to pay a fee for an annual subscription or for reading a specific article, open access journals such as BMC, PLoS ONE or BMJ Open are permanently free for everyone to read and download since the cost of publishing is paid by the author or an entity such as the university. Many, but not all, open access journals impose an article processing charge on the author, also known as the gold open access route, to cover the cost of publication. Depending on the journal and the publisher, article processing charges can range significantly between US$100 and US$5200 per article [ 10 , 11 ].

In the digital age, a new philosophy regarding the peer review process found its way into academia, questioning the anonymity of the closed system of peer-review as contrary to the demands for transparency [ 1 ]. The issue of reviewer bias, especially concerning gender and affiliation [ 12 ], led not only to the establishment of double-blind review but also to its extreme opposite: the open peer review system [ 8 ]. Although the term ‘open peer review’ has no standardised definition, scholars use the term to indicate that the identities of the authors and reviewers are disclosed and that reviewer reports are openly available [ 13 ]. In the late 1990s, the BMJ changed from a closed system of peer review to an open system [ 14 , 15 ]. During the same time, other publishers such as some journals in BMC followed the example of opening up their peer review.

While peer review reports have long been hidden from the public gaze [ 16 , 17 ], opening up the closed peer review system allows researchers to access reviewer comments, thus making it possible to study them. Since then, a number of articles have been published to assess reviews using quantitative research methods. For example, Landkroon et al. [ 18 ] assessed the quality of 247 reviews of 119 original articles using a 5-point Likert scale. Similarly, Henly and Dougherty [ 19 ] developed and applied a grading scale to assess the narrative portion of 464 reviews of 203 manuscripts using descriptive statistics. The retrospective cohort study by van Lent et al. [ 20 ] assessed peer review comments on drug trials from 246 manuscripts to investigate whether there is a relationship between the content of these comments and sponsorship using a generalised linear mixed model. Most recently, Davis et al. [ 21 ] evaluated reviewer grading forms for surgical journals with higher impact factors and compared them to surgical journals with lower impact factors using Fisher’s exact test.

Despite the readily available reviewer comments that are published alongside the final article of many open access journals, to the best of our knowledge no studies exist to date that have used qualitative methods, in addition to quantitative ones, to analyse the content of reviewers’ comments. Identifying (negative) reviewer comments will help authors to pay particular attention to these aspects and assist prospective qualitative researchers to understand the most common pitfalls when preparing their manuscript for submission. Thus, the aim of the study was to appraise the quality and nature of reviewers’ feedback in order to understand how reviewers engage with and influence the development of a qualitative manuscript. Our focus on qualitative research can be explained by the fact that we are passionate qualitative researchers with a history in determining the state of qualitative research in health and social science literature [ 22 ]. The following research questions were answered: (1) What are the frequencies of certain commentary types in manuscripts reporting on qualitative research? and (2) What is the nature of reviewers’ comments made on manuscripts reporting on qualitative research?

We conducted a focused mapping review and synthesis (FMRS) [ 22 , 23 , 24 , 25 ]. Most forms of review aim for breadth and exhaustive searches, but the FMRS searches within specific, pre-determined journals. While Platt [ 26 ] observed that ‘a number of studies have used samples of journal articles’, the distinctive feature of the FMRS is the purposive selection of journals. These are chosen for their likelihood to contain articles relevant to the field of inquiry – in this case qualitative research published in open access journals that operate an open peer-review process that involves posting the reviewer’s reports. It is these reports that we have analysed using thematic analysis techniques [ 27 ].

Currently there are over 70 BMC journals that have adopted open peer review. The FMRS focused on reviewers’ reports published during the first quarter of 2018. Journals were selected using a three-stage process. First, we produced a list of all BMC journals that operate an open peer review process and publish qualitative research articles ( n  = 62). Second, from this list we selected journals that cover general fields of practice and are non-disease specific ( n  = 15). Third, to ensure a sufficient number of qualitative articles, we excluded journals with fewer than 25 hits on the search term “qualitative” for the year 2018 (search date: 16 July 2018), because they were considered unlikely to contain sufficient articles of interest. At the end of the selection process, the following nine BMC journals were included in our synthesis: (1) BMC Complementary and Alternative Medicine , (2) BMC Family Practice , (3) BMC Health Services Research , (4) BMC Medical Education , (5) BMC Medical Ethics , (6) BMC Nursing , (7) BMC Public Health , (8) Health Research Policy and Systems , and (9) Implementation Science . Since these journals represent different subjects, a variety of qualitative papers written for different audiences was captured. Every article published within the timeframe was scrutinised against the inclusion and exclusion criteria (Table 1).
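The three-stage journal selection can be sketched as a simple filter. The journal records below are hypothetical (the hit counts are invented); the thresholds mirror the criteria described above: open peer review, a general non-disease-specific field, and at least 25 hits on "qualitative" in 2018.

```python
# Hypothetical journal records; "hits" stands in for the number of 2018
# search results on the term "qualitative".
journals = [
    {"name": "BMC Nursing", "open_review": True, "general": True, "hits": 31},
    {"name": "BMC Cancer", "open_review": True, "general": False, "hits": 40},
    {"name": "BMC Public Health", "open_review": True, "general": True, "hits": 120},
    {"name": "Closed Journal", "open_review": False, "general": True, "hits": 50},
]

# Keep journals passing all three stages; journals with fewer than 25 hits
# are excluded, matching the paper's cut-off.
selected = [j["name"] for j in journals
            if j["open_review"] and j["general"] and j["hits"] >= 25]
print(selected)  # ['BMC Nursing', 'BMC Public Health']
```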

Development of the data extraction sheet

A validated instrument for the classification of reviewer comments does not exist [ 20 ]. Hence, a detailed classification system was developed and pilot tested considering previous research [ 20 ]. Our newly developed data extraction sheet consists of a 77-item classification system organised according to three dimensions: (1) scientific/technical content, (2) writing criteria/representation, and (3) technical criteria. It represents themes and sub-themes identified by reading reviewer comments from twelve articles published in open peer-review journals. For the development of the data extraction sheet, we randomly selected four articles containing qualitative research from each of the following three journals published between 2017 and 2018: BMC Nursing , BMC Family Practice and BMJ Open . We then analysed the reviews of manuscripts by systematically coding and categorising the reviewers’ free-text comments. Following the recommendation by Shashok [ 28 ], we initially organised the reviewer’s comments along two main dimensions, i.e., scientific content and writing criteria. Shashok [ 28 ] argues that when peer reviewers confuse content and writing, their feedback can be misunderstood by authors who may modify texts in unintentional ways to the detriment of the manuscript.

To check the comprehensiveness of our classification system, provisional themes and sub-themes were piloted using reviewer comments we had previously received from twelve of our own manuscripts that had been submitted to journals that operate blind peer-review. We wanted to account for potential differences in reviewers’ feedback (open vs. blind review). As a result of this quality enhancement procedure, three sub-themes and a further dimension (‘technical criteria’) were added. For reasons of clarity and comprehensibility, the dimension ‘scientific content’ was subdivided following the IMRaD structure. IMRaD is the most common organisational structure of an original research article comprising I ntroduction, M ethods, R esults a nd D iscussion [ 29 ]. Anchoring examples were provided for each theme/sub-theme. To account for reviewer comments unrelated to the IMRaD structure, a sub-category called ‘generic codes’ was created to collect more general comments. When reviewer comments could not be assigned to any of the existing themes/sub-themes, they were noted as “Miscellaneous”. Table  2 shows the final data extraction sheet including anchoring examples.

Data extraction procedure

Data extraction was accomplished by six doctoral students (coders). On average, each coder was allocated 18 articles. After reading the reviews, coders independently classified each comment using the classification system. In line with Day et al. [ 30 ], a reviewer comment was defined as “ a distinct statement or idea found in a review, regardless of whether that statement was presented in isolation or was included in a paragraph that contained several statements. ” Editor comments were not included. Reviewers’ comments were copied and pasted into the most appropriate item of the classification system following a set of pre-defined guidelines. For example, a reviewer comment could only be coded once, by assigning it to the most appropriate theme/sub-theme. A separate data extraction sheet was used for each article. For the purpose of calibration, the first completed data extraction sheet from each coder, together with the reviewer’s comments, was sent to the study coordinator (ORH), who provided feedback on classifying the reviewer comments. The aim of the calibration was to ensure that all coders were working within the same parameters of understanding, to discuss the subtleties of the judgement process and to create consensus regarding classifications. Although the assignment to specific themes/sub-themes is, by nature, a subjective process, comments that were difficult to assign were classified following discussion and agreement between coder and study coordinator to ensure reliability. Once all data extraction was completed, two experienced qualitative researchers (CB-J, JT) independently undertook a further calibration exercise of a random sub-sample of 20% of articles ( n  = 22) to ensure consistency across coders. Articles were selected using a random number generator. For these 22 articles, classification discrepancies were resolved by consensus between coders and experienced researchers.

Finally, all individual data extraction sheets were collated to create a comprehensive Excel spreadsheet with over 8000 cells that allowed tallying the reviewers’ comments across manuscripts for the purpose of data analysis. For each manuscript, a reviewer could have several remarks related to one type of comment. However, each type of comment was scored only once per category.
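The scoring rule, that a type of comment counts at most once per manuscript, can be sketched with Pandas (which the authors report using). The data below are invented: in this hypothetical extract, manuscript A received two "sampling" comments but contributes only one to the per-article tally.

```python
import pandas as pd

# Hypothetical extract of the collated spreadsheet: one row per reviewer
# comment, with the manuscript it belongs to and the theme it was coded to.
comments = pd.DataFrame({
    "manuscript": ["A", "A", "A", "B", "B", "C"],
    "theme": ["sampling", "sampling", "writing", "writing", "analysis", "sampling"],
})

# A theme is scored at most once per manuscript, so drop duplicate
# (manuscript, theme) pairs before counting articles per theme.
per_article = comments.drop_duplicates()
articles_per_theme = per_article["theme"].value_counts()
print(articles_per_theme)  # sampling and writing each occur in 2 articles
```

`value_counts` also returns the themes sorted in descending order of frequency, matching the ranking the authors describe.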

Finally, reviewer comments were ‘quantitized’ [ 31 ] using the Python programming language in a Jupyter Notebook, an open-source web application, to perform frequency counts of free-text comments regarding the 77 items. Among other data manipulations, we sorted elements of arrays in descending order of frequency using Pandas, counted the number of studies in which a certain theme/sub-theme occurred, conducted distinct word searches using NLTK 3, and grouped data according to certain criteria. The calculation of frequencies is a way to unite the empirical precision of quantitative research with the descriptive precision of qualitative research [ 32 ]. This quantitative transformation of qualitative data allowed us to extract more meaning from our spreadsheet by revealing patterns across themes/sub-themes, thus giving indicators about which of them to analyse using thematic analysis.
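A distinct-word search of the kind described can be approximated with the standard library alone (the authors report using NLTK 3; this is a simplified stand-in, and the reviewer comments below are invented). Counting terms such as "sample" or "statistical" in reviews of qualitative work is one crude way to probe the quantitative mindset discussed later.

```python
import re
from collections import Counter

# Hypothetical free-text reviewer comments.
comments = [
    "Please report the sample size and response rate.",
    "How was saturation determined for the sample?",
    "The sample is small; consider statistical significance.",
]

def word_counts(texts):
    """Lower-case each text, extract alphabetic words, and count them."""
    tokens = (w for t in texts for w in re.findall(r"[a-z]+", t.lower()))
    return Counter(tokens)

counts = word_counts(comments)
print(counts["sample"], counts["statistical"])  # "sample" appears in all three comments
```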

A total of 109 manuscripts submitted to nine open-access journals initially met the inclusion criteria for the FMRS. When scrutinising the peer review reports, we noticed that on one occasion the reviewer’s comments were missing [ 33 ]. For the remaining 108 manuscripts, reviewer comments were accessible via the journal’s pre-publication history. On close inspection, however, it became apparent that one article did not contain qualitative research, ultimately leaving 107 articles to work with ( supplementary file ). Considering that each manuscript could potentially be reviewed by multiple reviewers and underwent at least one round of revision, the total number of reviewer reports analysed amounted to 347, collectively containing 1703 reviewer comments. The level of inter-rater agreement for the 22 articles included in the calibration exercise was 97%. Disagreements arose, for example, over whether to code a comment as “miscellaneous” or as “confirmation/approval (from reviewer)”. For 18 out of 22 articles, there was 100% agreement for all types of comments.
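A percent-agreement figure like the 97% reported above is computed by comparing, code by code, the classifications of the original coder and the checking researcher. The codes below are invented for illustration; the paper does not publish its per-comment data.

```python
# Hypothetical codes assigned to the same five comments by the original
# coder and by the experienced researcher who re-checked them.
coder   = ["methods", "misc", "writing", "writing", "sampling"]
checker = ["methods", "approval", "writing", "writing", "sampling"]

# Percent agreement: share of comments given the same code by both.
agreement = sum(a == b for a, b in zip(coder, checker)) / len(coder)
print(f"{agreement:.0%}")  # 4 of 5 codes match -> 80%
```

Percent agreement is the simplest inter-rater statistic; chance-corrected measures such as Cohen's kappa are often reported alongside it, though this paper reports raw agreement.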

Variation in number of reviewers

The number of reviewers invited by the editor to review a submitted manuscript varied greatly within and among journals. While the majority of manuscripts across journals were reviewed by two to three reviewers, there were also significant variations. For example, the manuscript submitted to BMC Medical Education by Burgess et al. [ 34 ] was reviewed by five reviewers, whereas the manuscript submitted to BMC Public Health by Lee and Lee [ 35 ] was reviewed by one reviewer only. Even within journals there was considerable variation. Among our sample, BMC Public Health had the greatest variance, ranging from one to four reviewers. In some cases, additional reviewers were not called in until the second or even third revision of the manuscript. A summary of key information on journals included in the FMRS is provided in Table 3.

“Quantitizing” reviewer comments

The frequency analysis revealed that the number of articles in which a certain theme/sub-theme occurred ranged from 1 to 79. Across all 107 articles, the types of comments most frequently reported related to generic themes. Reviewer comments regarding “Adding information/detail/nuances”, “Clarification needed”, “Further explanation required” and “Confirmation/approval (from reviewer)” appeared in 79, 79, 66 and 63 articles, respectively. The four most frequently used themes/sub-themes are composed of generic codes from dimension I (“Scientific/technical content”). Leaving all generic codes aside, it became apparent that among the 30 most frequently employed themes “Writing criteria” (dimension II) is the top-ranking theme, followed by comments in relation to the “Methods” (dimension I) (Table 4).

In the following, we present key qualitative findings regarding “Confirmation/approval from reviewers” (generic), “Sampling” and “Analysis process” (methods), and “Robust/rich data analysis” and “Themes/sub-themes” (results), as well as findings that suggest an underlying quantitative mindset among reviewers.

Confirmation/approval from reviewers (generic)

The theme “confirmation/approval from reviewers” ranks third among the top 30 categories; 63 manuscripts contained at least one reviewer comment related to it. Overall, reviewers maintained a respectful and affirmative rhetoric when providing feedback. The vast majority of reviewers began their report by stating that the manuscript was well written. The following is a typical example:

“Overall, the paper is well written, and theoretically informed.” Article #14.

Reviewers then continued with explicit praise for aspects or sections that were particularly innovative and/or well constructed before putting forward any negative feedback.

Sampling (methods)

Across all 107 articles there were 34 reviewer comments relating to the sampling technique(s). Two major categories were identified: (1) composition of the sample and (2) identification and justification of selected participants. Regarding the former, reviewers raised several concerns about how the sample was composed. For instance, one reviewer wanted to know the reason for female predominance in the study and why an entire focus group was composed of females only. Another reviewer strongly criticised the composition of the sample because only young, educated, non-minority white British participants were included in the study. The reviewer commented:

“ So a typical patient was young, educated and non-minority White British? The research studies these days should be inclusive of diverse types of patients and excluding patients because of their age and ethnicity is extremely concerning to me. This assumption that these individuals will “find it more difficult to complete questionnaires” is concerning ” Article #40.

This raised concerns that important diverse perspectives, such as extreme or deviant cases, were potentially excluded. Similarly, some reviewers expressed concerns that relevant groups of people were not interviewed, calling into question whether the findings were theoretically saturated. In terms of the identification of participants, reviewers asked how the authors obtained the characteristics needed to achieve purposive sampling, or why only certain groups of people were included for interviews. Reviewers also criticised that some authors did not state their inclusion/exclusion criteria for selecting participants or did not specify their sampling method. For example:

“The authors state that they recruited a purposive sample of patients for the interviews. Concerning which variables was this sampling purposive? Are there any studies informing the patient selection process?” Article #61.

Hence, reviewers requested more detailed information on how participants were selected and asked authors to state the type of sampling clearly. Beyond these two key categories, reviewers made additional comments relating to data saturation, the transferability of findings and the limitations of certain sampling methods, and criticised the lack of description of participants who were approached but refused to take part in the study.

Details of analysis process (methods)

In 60 of the 107 articles, reviewers commented on the data analysis. The vast majority of comments stressed that authors provided scant information about the analysis process. Reviewers therefore requested a more detailed description of the specific analysis techniques employed, so that readers could understand how the analysis was done and judge the trustworthiness of the findings. To this end, reviewers frequently requested an explicit statement on whether the analysis was inductive or deductive, iterative or sequential. One reviewer wrote the following comment:

“Please elaborate more on the qualitative analysis. The authors indicate that they used ‘iterative’ approaches. While this is certainly laudable, it is important to know how they moved from codes to themes (e.g. inductively? deductively?)” Article #5.

Since there are many approaches to analysing qualitative data, reviewers demanded sufficient detail on the underlying theoretical framework used to develop the coding scheme, the analytic process, the researchers’ background (e.g. profession), the number of coders, data handling, the length of interviews and whether data saturation occurred. Over a dozen reviewer comments related specifically to the identification of themes/sub-themes. Reviewers requested a more detailed description of how the themes/sub-themes were derived from codes and whether a second researcher had developed them independently.

“I would have liked to read how their themes were generated, what they were and how they assured robust practices in qualitative data analysis”. Article #43.

Some reviewers were also of the opinion that the approach to analysis had led to only surface-level penetration of the data, reflected in Results sections where themes were underexplored (for more detail see “Robust/rich data analysis” below). Finally, infrequent reviewer comments included questions concerning inter-rater reliability, competing interpretations of the data, the use of computer software and the original interview language.

Robust/rich data analysis (results)

Among the 30 reviewer comments related to this theme/sub-theme, three key facets were observed: (1) greater analytical depth required, (2) suggestions for further analysis, and (3) themes underexplored. In relation to the first point, reviewers requested more in-depth data analysis to strengthen the quality of the manuscript. They were of the opinion that authors reproduced interview data (raw data) in a reduced form with minimal or no interpretation, thus leaving the interpretation to the reader. Other reviewers referred to manuscripts as preliminary drafts that needed further analysis to achieve greater analytical depth of themes, make links between themes or identify variations between respondents. In relation to the second point, several reviewers offered suggestions for further analysis, providing detailed information on how to explore the data further and what additional results they would like to see in the revised version (e.g. group comparison, gender analysis). The latter aspect goes hand in hand with the third point: several reviewers found the findings shallow, simplistic or superficial at best, lacking the detailed descriptions of participants’ complex accounts. For example:

“The results of the study are mostly descriptive and there is limited analysis. There is also absence of thick description, which one would expect in a qualitative study”. Article #34.

Even after the first revision, some manuscripts still lacked detailed analysis as the following comment from the same reviewer illustrates:

“I believe that the results in the revised version are still mostly descriptive and that there is limited analysis”. Article #34, R1.

Other, less frequently mentioned reviewer comments included lack of deviant cases or absence of relationships between themes.

Themes/sub-themes (results)

In total, there were 24 reviewer comments relating to themes/sub-themes. More than half fell into one of three categories: (1) themes/sub-themes not sufficiently supported by data, (2) examples/excerpts that do not fit the stated theme, and (3) insufficient quotes to support a theme/sub-theme. In relation to the first category, reviewers largely criticised that the data provided were insufficient to warrant being called a theme. They asked authors to provide data “from more than just one participant” to substantiate a given theme, or criticised that only a short excerpt was provided to support it. The second category comprised reviewer comments questioning whether the excerpts provided actually reflected the essence of a theme/sub-theme presented in the results section. The following reviewer comment exemplifies the issue:

“The data themes seem valid, but the data and narratives used to illustrate that don’t seem to fit entirely under each sub-heading”. Article #99.

Some reviewers suggested alternative names for a theme/sub-theme or advised the authors to consider whether excerpts might be better placed under a different theme. The third category concerned themes/sub-themes that were not sufficiently supported by participants’ quotes. Reviewers perceived direct quotes as evidence for a theme, or as a means to strengthen it, as the following example illustrates:

“Please provide at least one quote from each school leader and one quote from children to support this theme, if possible. It would seem that most, if not all, themes should reflect data from each participant group”. Article #88.

Hence, the absence of quotes prompted reviewers to request at least one quote to justify the existence of a theme, and the inclusion of a rich set of quotes was perceived as a strength of a manuscript. Finally, less frequently raised comments related to distinguishing similar themes, the presentation of quotes in tables (rather than under the appropriate theme headings), the lack of theme definitions and the need to reduce the number of themes.

Quantitative mindset

Some reviewers appointed by journal editors to review a manuscript containing qualitative research evaluated its quality from the perspective of a quantitative research paradigm. These reviewers not only used terminology attuned to quantitative research; their judgements were also based on a quantitative mindset. In particular, a number of reviewer comments published in BMC Health Services Research , BMC Medical Education and BMC Family Practice demonstrated the reviewer’s apparent lack of understanding of the principles underlying qualitative inquiry. First, several reviewers seemed to have confused the concept of generalisability with the concept of representativeness inherent to the positivist tradition. For instance, reviewers erroneously raised concerns about whether interviewees were “representative” of the “final target population” and requested detailed demographic characteristics.

“Need to better describe how the patients are representative of patients with chronic heart failure in the Netherlands generally. The declaration that “a representative group of patients were recruited” would benefit from stating what they were representative of.” Article # 66.

Similarly, another reviewer wanted to know from the authors how they ensured that the qualitative analysis was done objectively.

“The reader would benefit from a detailed description of […] how did the investigators ensure that they were objective in their analysis – objectivity and trustworthiness?” Article #22.

Furthermore, although the paradigm wars have largely come to an end, hostility has not ceased on all fronts. For some reviewers, the assumed dominance and superiority of the quantitative over the qualitative paradigm persists, as the following comment illustrates:

“The main question and methods of this article is largely qualitative and does not seem to have significant implications for clinical practice, thus it may not be suitable to publish in this journal.” Article #45.

Finally, one reviewer apologised at the outset of the reviewer’s report for being unable to judge the data analysis due to the absence of sufficient knowledge in qualitative research.

Overall, in this FMRS we found that reviewers maintained a respectful and affirmative rhetoric when providing feedback. Yet the positive feedback did not obscure the key negative points that needed to be addressed to increase the quality of a manuscript. It should not be taken for granted, however, that all reviewers are as courteous and generous as those included in our review; as Taylor and Bradbury-Jones [36] observed, there are many examples of reviewers being unhelpful and destructive in their comments.

A key finding of this FMRS is that reviewers are more inclined to comment on the writing than on the methodological rigour of a manuscript. This is a matter of concern because, as Altman [37], the originator of the EQUATOR (Enhancing the Quality and Transparency of Health Research) Network, has pointed out: “Unless methodology is described the conclusions must be suspect”. If we are to advance the quality of qualitative research, we need to encourage clarity and depth in reporting the rigour of research.

When reviewers did comment on the methodological aspects of an article, the issues most frequently raised related to sampling, the data analysis process, the robustness/richness of the analysis as reflected in the findings, and themes/sub-themes that were insufficiently supported. Considerable work has been undertaken over the past decade to improve the reporting of qualitative research through qualitatively oriented reporting guidelines such as the ‘Standards for Reporting Qualitative Research’ (SRQR) [38] and the ‘Consolidated Criteria for Reporting Qualitative Research’ (COREQ) [39], with the aim of improving transparency. Although these guidelines appear comprehensive, some important issues identified in our study are not mentioned or are dealt with only superficially: sampling, for example. Neither COREQ nor SRQR sheds light on the appropriateness of the sample composition, i.e. on critically questioning whether all relevant groups of people have been identified as potential participants or whether extreme or deviant cases were sought.

Similarly, lack of in-depth data analysis was identified as another weakness, with uninterpreted (raw) data presented as if they were findings. Existing reporting guidelines, however, are not sharp enough to distinguish between findings and data: findings are researchers’ interpretations of the data they collected, whereas data are the empirical, uninterpreted material itself; the weakness arises when researchers offer such material as their findings [32]. Hence, we suggest modifying the current reporting guidelines by adding a checklist item called “Degree of data transformation”. This item would prompt both authors and reviewers to judge the degree to which data have been transformed, i.e. interpretively removed from data as given, and to consider whether that degree is appropriate for the chosen analysis method. For example, findings derived from content analysis remain close to the data as given to the researcher; they are often organised into surface classification systems and summarised in brief text. Findings derived from grounded theory, by contrast, should offer a coherent model or line of argument that addresses causality or the fundamental nature of events or experiences [32].

In addition, some reviewers put forward comments that we describe as aligning with a ‘quantitative mindset’. Such reviewers did not appear to understand that, rather than aspiring to statistical representativeness, qualitative research selects participants purposefully for the contribution they can make to understanding the phenomenon under study [40]. Hence, the generalisability of qualitative findings beyond an immediate group of participants is judged by similarities between the time, place, people or other social contexts [41] rather than by the comparability of demographic variables: it is the fit of the topic or the comparability of the problem that is of concern [40].

The majority of issues that reviewers picked up on are already covered in reporting guidelines, so there is no obvious reason why researchers should have omitted them. Many journals now insist on alignment with the COREQ criteria, so an important question is why this is not always happening. We suggest that completion of an established reporting checklist (e.g. COREQ, SRQR) become a requirement on submission.

In this FMRS we have made judgements about fellow peer reviewers and found their feedback to be constructive, although among some we also found a lack of grasp of the essence of the qualitative endeavour. Some reviewers did not seem to understand that objectivity and representative sampling are the antithesis of subjectivity, reflexivity and data saturation. We acknowledge, though, that individual reviewers may have varying levels of experience and competence both in qualitative research and in the reviewing process. One reviewer apologised at the outset of their report for being unable to judge the data analysis owing to insufficient knowledge of qualitative research. In line with Spigt and Arts [42], we appreciate that reviewer’s honesty in being transparent about their skillset. The lessons here, we feel, are for more experienced reviewers to offer support and reviewing mentorship to those who are less experienced, and for reviewers to emulate this honesty by being open about their capabilities within the review process.

Based on our findings, we have a number of recommendations for researchers and reviewers. For researchers reporting qualitative studies, we suggest paying particular attention to the reporting of sampling: the characteristics and composition of the sample and how participants were selected. This is an issue the reviewers in our FMRS picked up on, so forewarned is forearmed; but it is also crucially important that sampling matters are not glossed over, so this constitutes good practice in research reporting in any case. Second, it seems that qualitative researchers do not give sufficient detail about their analytic techniques and underlying theoretical frameworks. The latter has been pointed out before [25], but both aspects were often the subject of reviewer comments.

Our recommendation for reviewers is simply to be honest. If qualitative research is not an area of expertise, it is better to decline a review than to apply a quantitative lens to a qualitative piece of work. Asking for details about validity and generalisability is inappropriate and shows a lack of respect for qualitative researchers; we are well beyond the arguments about quantitative versus qualitative [43]. It is, of course, entirely appropriate to comment on the background and findings and on any obvious deficiencies. Finally, our recommendation to editors is a difficult one, because as editors ourselves we know how challenging it can be to find willing reviewers. When selecting reviewers, however, it is important to bear in mind the methodological approach of an article as well as its subject, and to select reviewers with appropriate methodological expertise. Some journals require quantitative articles to be reviewed by a statistical expert, which we consider good practice; for qualitative articles, however, the methodological expertise of reviewers is often not so stringently checked. Editors could make a difference here and help to raise the quality of qualitative reviews.

Strengths and weaknesses

Since we only had access to reviewers’ comments on articles that were eventually published in open access journals, we are unable to compare them with comments on rejected submissions. This study was thus limited to manuscripts that were sent out for external peer review and ultimately published. Furthermore, the chosen design of analysing only reviewer comments on published articles with an open system of peer review did not allow direct comparison with reviewer comments from blind review.

An FMRS provides a snapshot of a particular issue at one particular time [23]; findings might therefore differ in a review undertaken in a different period. Nevertheless, as a contemporary profile of reviewing within qualitative research, the current findings provide useful insights for authors of qualitative reports and reviewers alike. Further research should compare reviewer comments from open and closed systems of peer review in order to identify similarities and differences between the two models.

A limitation is that we reviewed open access journals because this was the only way of accessing a range of comments. The alternative we considered was to use the feedback provided by reviewers on our own manuscripts; however, this would have lacked the transparency and traceability of the current FMRS, which we consider a strength. That said, there may be an inherent problem in reviewing open access peer review comments, where both the author and the reviewer are known. Reviewers are unable to ‘hide behind’ the anonymity of blind peer review, and this might explain, at least in part, why the comments analysed for this review were overwhelmingly courteous and constructive. This is at odds with the comments that one of us has received as part of a blind peer review: ‘silly, silly, silly’ [36].

This FMRS has highlighted some important issues in the field of qualitative reviewing that hold lessons for authors, reviewers and editors. Authors of qualitative reports are called upon to follow reporting guidelines, including any amendments such as those recommended by the findings of our review. Reviewers need humility and transparency when deciding whether to accept a review, and an honest appraisal of their capabilities in understanding the qualitative endeavour. Journal editors can assist this through thoughtful and judicious selection of reviewers. Ultimately, all those involved in the publication process can drive up the quality of individual qualitative articles, and together this can make a significant impact on quality across the field.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BMC: BioMed Central

BMJ: British Medical Journal

COREQ: Consolidated criteria for reporting qualitative research

EQUATOR: Enhancing the quality and transparency of health research

FMRS: Focused mapping review and synthesis

IMRaD: Introduction, methods, results and discussion

NLTK: Natural Language Toolkit

SRQR: Standards for reporting qualitative research

Gannon F. The essential role of peer review (editorial). EMBO Rep. 2001;2(9):743.


Mungra P, Webber P. Peer review process in medical research publications: language and content comments. Engl Specif Purp. 2010;29:43–53.

Turcotte C, Drolet P, Girard M. Study design, originality, and overall consistency influence acceptance or rejection of manuscripts submitted to the journal. Can J Anaesth. 2004;51:549–56.

Van der Wall EE. Peer review under review: room for improvement? Neth Heart J. 2009;17:187.

Burnham JC. The evolution of editorial peer review. JAMA. 1990;263:1323–9.


Baldwin M. Credibility, peer review, and Nature, 1945-1990. Notes Rec R Soc Lond. 2015;69:337–52.

Lee CJ, Sugimoto CR, Zhang G, Cronin B. Bias in peer review. J Assoc Inf Sci Technol. 2013;64:2–17.

Horbach SPJM, Halffman W. The changing forms and expectations of peer review. Res Integr Peer Rev. 2018;3:8.

Oermann MH, Nicoll LH, Chinn PL, Ashton KS, Conklin JL, Edie AH, et al. Quality of articles published in predatory nursing journals. Nurs Outlook. 2018;66:4–10.

University of Cambridge. How much do publishers charge for Open Access? (2019) https://www.openaccess.cam.ac.uk/paying-open-access/how-much-do-publishers-charge-open-access Accessed 26 Jun 2019.

Elsevier. Open access journals. (2018) https://www.elsevier.com/about/open-science/open-access/open-access-journals Accessed 28 Oct 2018.

Peters DP, Ceci SJ. Peer-review practices of psychological journals: the fate of published articles, submitted again. Behav Brain Sci. 1982;5:187–95.

Ross-Hellauer T. What is open peer review? A systematic review. F1000 Res. 2017;6:588.

Smith R. Opening up BMJ peer review. A beginning that should lead to complete transparency. BMJ. 1999;318:4–5.

Brown HM. Peer review should not be anonymous. BMJ. 2003;326:824.

Gosden H. “Thank you for your critical comments and helpful suggestions”: compliance and conflict in authors’ replies to referees’ comments in peer reviews of scientific research papers. Iberica. 2001;3:3–17.


Swales J. Occluded genres in the academy. In: Mauranen A, Ventola E, editors. Academic writing: intercultural and textual issues. Amsterdam: John Benjamins Publishing Company; 1996. p. 45–58.


Landkroon AP, Euser AM, Veeken H, Hart W, Overbeke AJ. Quality assessment of reviewers' reports using a simple instrument. Obstet Gynecol. 2006;108:979–85.

Henly SJ, Dougherty MC. Quality of manuscript reviews in nursing research. Nurs Outlook. 2009;57:18–26.

Van Lent M, IntHout J, Out HJ. Peer review comments on drug trials submitted to medical journals differ depending on sponsorship, results and acceptance: a retrospective cohort study. BMJ Open. 2015. https://doi.org/10.1136/bmjopen-2015-007961 .

Davis CH, Bass BL, Behrns KE, Lillemoe KD, Garden OJ, Roh MS, et al. Reviewing the review: a qualitative assessment of the peer review process in surgical journals. Res Integr Peer Rev. 2018;3:4.

Bradbury-Jones C, Breckenridge J, Clark MT, Herber OR, Wagstaff C, Taylor J. The state of qualitative research in health and social science literature: a focused mapping review and synthesis. Int J Soc Res Methodol. 2017;20:627–45.

Bradbury-Jones C, Breckenridge J, Clark MT, Herber OR, Jones C, Taylor J. Advancing the science of literature reviewing in social research: the focused mapping review and synthesis. Int J Soc Res Methodol. 2019. https://doi.org/10.1080/13645579.2019.1576328 .

Taylor J, Bradbury-Jones C, Breckenridge J, Jones C, Herber OR. Risk of vicarious trauma in nursing research: a focused mapping review and synthesis. J Clin Nurs. 2016;25:2768–77.

Bradbury-Jones C, Taylor J, Herber OR. How theory is used and articulated in qualitative research: development of a new typology. Soc Sci Med. 2014;120:135–41.

Platt J. Using journal articles to measure the level of quantification in national sociologies. Int J Soc Res Methodol. 2016;19:31–49.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

Shashok K. Content and communication: how can peer review provide helpful feedback about the writing? BMC Med Res Methodol. 2008;8:3.

Hall GM. How to write a paper. 2nd ed. London: BMJ Publishing Group; 1998.

Day FC, Schriger DL, Todd C, Wears RL. The use of dedicated methodology and statistical reviewers for peer review: a content analysis of comments to authors made by methodology and regular reviewers. Ann Emerg Med. 2002;40:329–33.

Tashakkori A, Teddlie C. Mixed methodology: combining qualitative and quantitative approaches. London: Sage Publications; 1998.

Sandelowski M, Barroso J. Handbook for synthesizing qualitative research. New York: Springer Publishing Company; 2007.

Jonas K, Crutzen R, Krumeich A, Roman N, van den Borne B, Reddy P. Healthcare workers’ beliefs, motivations and behaviours affecting adequate provision of sexual and reproductive healthcare services to adolescents in Cape Town, South Africa: a qualitative study. BMC Health Serv Res. 2018;18:109.

Burgess A, Roberts C, Sureshkumar P, Mossman K. Multiple mini interview (MMI) for general practice training selection in Australia: interviewers’ motivation. BMC Med Educ. 2018;18:21.

Lee S-Y, Lee EE. Cancer screening in Koreans: a focus group approach. BMC Public Health. 2018;18:254.

Taylor J, Bradbury-Jones C. Writing a helpful journal review: application of the 6 C’s. J Clin Nurs. 2014;23:2695–7.

Altman D. My journey to EQUATOR: There are no degrees of randomness. EQUATOR Network. 2016 https://www.equator-network.org/2016/02/16/anniversary-blog-series-1/ Accessed 17 Jun 2019.

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89:1245–51.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:349–57.

Morse JM. Editorial: Qualitative generalizability. Qual Health Res. 1999;9:5–6.

Leung L. Validity, reliability, and generalizability in qualitative research. J Family Med Prim Care. 2015;4:324–7.

Spigt M, Arts ICW. How to review a manuscript. J Clin Epidemiol. 2010;63:1385–90.

Griffiths P, Norman I. Qualitative or quantitative? Developing and evaluating complex interventions: time to end the paradigm war. Int J Nurs Stud. 2013;50:583–4.


Acknowledgments

The support of Daniel Rütter in compiling data and providing technical support is gratefully acknowledged. Furthermore, we would like to thank Holger Hönings for applying a general-purpose programming language to quantify the reviewer comments in the MS Excel spreadsheet.

Author information

Authors and affiliations

Institute of General Practice, Centre for Health and Society, Medical Faculty of the Heinrich Heine University Düsseldorf, Moorenstr. 5, 40225, Düsseldorf, Germany

Oliver Rudolf HERBER

School of Nursing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK

Caroline BRADBURY-JONES & Julie TAYLOR

Institute of Health and Care Sciences, Sahlgrenska Academy at the University of Gothenburg, Gothenburg, Sweden

Susanna BÖLING

Florence Nightingale Faculty of Nursing, Midwifery & Palliative Care, King’s College London, London, UK

Sarah COMBES

Institute of Applied Nursing Sciences, Department of Health, University of Applied Sciences FHS St.Gallen, St. Gallen, Switzerland

Julian HIRT

Cardiology department, Radboud University Medical Centre, Nijmegen, the Netherlands

Yvonne KOOP

Division of Emergencies and Critical Care, Oslo University Hospital/Institute of Health and Society, Faculty of Medicine, University of Oslo, Oslo, Norway

Ragnhild NYHAGEN

Hogeschool Utrecht, Utrecht, the Netherlands

Jessica D. VELDHUIZEN

Birmingham Women’s and Children’s Hospitals NHS Foundation Trust, Birmingham, UK

Julie TAYLOR


Contributions

All authors have made an intellectual contribution to this research paper. ORH conducted the qualitative analysis and wrote the first draft of the paper. SB, SC, JH, YK, RN and JDV extracted and classified each comment using the classification system. CB-J and JT independently undertook a calibration exercise of a random sub-sample of articles ( n  = 22) to ensure consistency across coders. All co-authors (CB-J, SB, SC, JH, YK, RN, JDV and JT) have input into drafts and have read and approved the final version of the manuscript.

Corresponding author

Correspondence to Oliver Rudolf HERBER .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

References of all manuscripts included in the analysis ( n  = 107).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

HERBER, O.R., BRADBURY-JONES, C., BÖLING, S. et al. What feedback do reviewers give when reviewing qualitative manuscripts? A focused mapping review and synthesis. BMC Med Res Methodol 20 , 122 (2020). https://doi.org/10.1186/s12874-020-01005-y

Received : 27 August 2019

Accepted : 04 May 2020

Published : 18 May 2020

DOI : https://doi.org/10.1186/s12874-020-01005-y

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Open access publishing
  • Peer review
  • Manuscript review
  • Reviewer’s report
  • Qualitative analysis
  • Qualitative research

BMC Medical Research Methodology

ISSN: 1471-2288


What Is Qualitative Research? An Overview and Guidelines

Australasian Marketing Journal (AMJ)

v.9(1); 2015 Feb

Qualitative systematic reviews: their importance for our understanding of research relevant to pain

This article outlines what a qualitative systematic review is and explores what it can contribute to our understanding of pain. Many of us use evidence of effectiveness for various interventions when working with people in pain. A good systematic review can be invaluable in bringing together research evidence to help inform our practice and help us understand what works. In addition to evidence of effectiveness, understanding how people with pain experience both their pain and their care can help us when we are working with them to provide care that meets their needs. A rigorous qualitative systematic review can also uncover new understandings, often helping illuminate ‘why’ and can help build theory. Such a review can answer the question ‘What is it like to have chronic pain?’ This article presents the different stages of meta-ethnography, which is the most common methodology used for qualitative systematic reviews. It presents evidence from four meta-ethnographies relevant to pain to illustrate the types of findings that can emerge from this approach. It shows how new understandings may emerge and gives an example of chronic musculoskeletal pain being experienced as ‘an adversarial struggle’ across many aspects of the person’s life. This article concludes that evidence from qualitative systematic reviews has its place alongside or integrated with evidence from more quantitative approaches.

Many of us use evidence of effectiveness for various interventions when working with people in pain. A good systematic review can be invaluable in bringing together research evidence to help inform our practice and help us understand what works. In addition to evidence of effectiveness, understanding how people with pain experience both their pain and their care can help us when we are working with them to provide care that meets their needs. A high-quality qualitative systematic review can also uncover new understandings, often helping illuminate ‘why’ and can help build theory. A qualitative systematic review could answer the question ‘What is it like to have chronic non-malignant pain?’

The purpose of this article is to outline what a qualitative systematic review is and explore what it can contribute to our understanding of pain. A qualitative systematic review brings together research on a topic, systematically searching for research evidence from primary qualitative studies and drawing the findings together. There is a debate over whether the search needs to be exhaustive. 1 , 2 Methods for systematic reviews of quantitative research are well established and explicit and have been pioneered through the Cochrane Collaboration. Methods for qualitative systematic reviews have been developed more recently and are still evolving. The Cochrane Collaboration now has a Qualitative and Implementation Methods Group, including a register of protocols, illustrating the recognition of the importance of qualitative research within the Cochrane Collaboration. In November 2013, an editorial described the Cochrane Collaboration’s first publication of a qualitative systematic review as ‘a new milestone’ for Cochrane. 3 Other editorials have raised awareness of qualitative systematic reviews in health. 4

Noblit and Hare 5 were pioneers in the area of synthesising qualitative data. They describe such reviews as aggregated or as interpretative. The aggregated review summarises the data, and Hannes and Pearson 6 provide a worked example of an aggregation approach. Interpretative approaches, as the name suggests, interpret the data, and from that interpretation, new understandings can develop that may lead to development of a theory that helps us to understand or predict behaviour. Types of interpretative qualitative systematic reviews include meta-ethnography, critical interpretative synthesis, realist synthesis and narrative synthesis. More details about these and other approaches can be found in other papers and books. 1 , 5 , 7 – 11 This article will describe one approach, meta-ethnography, as it was identified as the most frequently used approach, 1 and there are some examples using meta-ethnography that focus on pain. A meta-ethnographic approach can be used with a variety of qualitative methodologies, not only ethnography. The data for a meta-ethnography are the concepts or themes described by the authors of the primary studies.

Noblit and Hare 5 outlined the seven steps of a meta-ethnography: (1) getting started, (2) deciding what is relevant, (3) reading the studies, (4) determining how studies are related to each other, (5) translating studies into each other, (6) synthesising translations and (7) expressing the synthesis.

The first three might seem relatively straightforward, although Lee et al. 12 emphasised both the importance and nuances of the reading stage, and Toye et al. 13 discuss the complexities of making quality assessments of qualitative papers and searching for this type of study. You need to understand what data to extract from the papers and how you are going to do this.

You have to first identify what is a concept and what is purely descriptive. Toye et al. 2 describe a process for collaboratively identifying concepts. In determining how studies are related to each other and translating them into each other, the meta-ethnographer compares the concepts found in each study with each other and then groups similar concepts into conceptual themes. Translating studies into each other involves looking at where concepts between studies agree (reciprocal synthesis) and where they do not agree (refutational synthesis). Developing conceptual categories can be challenging as you need to judge the extent to which a concept from one study adequately reflects concepts from other studies and choose one that seems to fit best. This is discussed in more detail in Toye et al. 2 , 13
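The comparative logic of this step can be caricatured in code. The sketch below is our own toy illustration, not the authors' procedure: real translation between studies is an interpretive judgement, whereas here "agreement" is reduced to identical concept labels, and the study names and concepts are invented.

```python
# Toy sketch of "translating studies into each other": concepts shared
# across studies suggest a reciprocal synthesis; concepts unique to one
# study flag candidates for refutational scrutiny.
from collections import defaultdict

concepts_by_study = {                 # hypothetical extracted concepts
    "Study A": {"loss of self", "being believed", "unpredictability"},
    "Study B": {"being believed", "moving forward"},
    "Study C": {"loss of self", "being believed"},
}

# Invert the mapping: which studies report each concept?
studies_by_concept = defaultdict(set)
for study, concepts in concepts_by_study.items():
    for concept in concepts:
        studies_by_concept[concept].add(study)

reciprocal = {c for c, s in studies_by_concept.items() if len(s) > 1}
unique = {c for c, s in studies_by_concept.items() if len(s) == 1}
print(sorted(reciprocal))   # concepts recurring across studies
print(sorted(unique))       # study-specific concepts to examine further
```

In a real meta-ethnography, deciding whether two differently worded concepts "translate" into each other is exactly the collaborative, interpretive work that Toye et al. describe; no string match can substitute for it.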

To synthesise the translation, a line of argument is then developed from the conceptual categories. How the concepts group and relate to each other is then worked out. This provides an overall interpretation of the findings, ensuring it is grounded in the data from the primary studies. You are aiming to explain, and new concepts and understandings may emerge, which can then go on to underpin development of theory. For example, a qualitative systematic review that explored medicine taking found that ‘resistance’ was a new concept, revealed through meta-ethnography, and this helped understanding of lay responses to medicine taking. 1 Hannes and Macaitis, 14 in a review of published papers, reported that over time, authors have become more transparent about searching and critical appraisal, but that the synthesis element of reviews is often not well described. Being transparent about decisions that are interpretative has its own challenges. Working collaboratively to challenge interpretations and assumptions can be helpful. 2 , 12 The next section will use examples of qualitative systematic reviews from the pain field to illuminate what this type of review can contribute to our understanding of pain.

What can a qualitative systematic review contribute to the field of pain – some examples

Toye et al. 2 , 15 undertook a meta-ethnography to look at patients’ experiences of chronic non-malignant musculoskeletal pain. At the time of this research, no other qualitative systematic reviews had been published in this area. Their review included 77 papers reporting 60 individual studies, resulting from searches of six electronic bibliographic databases (MEDLINE, EMBASE, CINAHL, PsycINFO, AMED and HMIC) from inception until February 2012 and hand-searching key journals from 2001 to 2012.

They developed a new concept which they identified as an ‘adversarial struggle’. This struggle took place across five main dimensions: (1) a struggle to affirm themselves, with a tension between the ‘real me’ (without pain) and the ‘not real me’ (me with pain); (2) a struggle to reconstruct themselves in time, as the present and future were often unpredictable and their construction of time was altered; (3) a struggle to find an acceptable explanation for their pain and suffering; (4) a struggle to negotiate the healthcare system; and (5) a struggle for pain to be seen as legitimate, including the need to be believed and the dilemma of whether to show or hide their pain. Some people were able to move forward with pain: they saw their body as more integrated, they redefined what was normal, they told people about their pain, they were part of a community of people with pain and they felt more expert on how their pain affected them and what they could do about it.

So, this meta-ethnography highlighted the adversarial nature of having chronic musculoskeletal pain and how this struggle pervaded many different areas of people’s lives. It also illustrated how, by showing patients that their pain is understood and by being alongside the person in pain, clinicians can help them start to move forward. A short film based on the 77 papers in this meta-ethnography has been made and is available on YouTube. 16 This film was made as an attempt to disseminate the findings of a meta-ethnography in a way that is accessible to a range of people.

Snelgrove and Liossi 17 undertook a meta-ethnography of qualitative research in chronic low back pain (CLBP). They included 33 papers of 28 studies published between 2000 and 2012. They identified three overarching themes: (1) the impact of CLBP on self, (2) relationships with others (health professionals and family and friends) and (3) coping with CLBP. They found that very few successful coping strategies were reported. Like Toye et al., 2 , 15 they reported disruption to self, distancing of the valued self from the painful self, legitimising pain, the struggle to manage daily living and the importance of social relationships alongside negotiation of care in the health system.

MacNeela et al. 18 also undertook a meta-ethnography of experiences of CLBP. They included 38 articles published between 1994 and 2012 representing 28 studies. They identified four themes: (1) the undermining influence of pain, (2) the disempowering impact on all levels, (3) unsatisfying relationships with healthcare professionals and (4) learning to live with the pain. They reported the findings being dominated by ‘wide-ranging distress and loss’. They discussed the disempowering consequences of pain and a search for help. However, they also highlighted self-determination and resilience and suggested these could offer ‘pathways to endurance’. They emphasised self-management and adaptation, which resonates with the moving forward category reported by Toye et al. 2 , 15

Froud et al. 19 looked at the impact of low back pain on people’s lives. They describe their approach as meta-ethnographic and meta-narrative. They included 49 papers of about 42 studies from inception of the databases searched until July 2011. They described five themes: activities, relationships, work, stigma and changing outlook, which they derived from ‘participant-level data’. They described their findings as showing that patients wanted to be believed. They highlighted the importance of social factors when developing relevant outcome measures. There are other examples of qualitative systematic reviews relevant to pain. 20 – 23

Different qualitative systematic reviews on a similar subject may come up with overlapping but also some different findings. This could be, for example, because different search periods or different inclusion criteria are used, so different primary studies may be included in different reviews. In addition, undertaking a qualitative systematic review requires researchers to interpret concepts. This interpretation does not need to be a limitation. For example, to ensure rigour and transparency, Toye et al. 24 report a process of collaborative interpretation of concepts among a team of experienced qualitative researchers to ensure individual interpretations were challenged and remained grounded in the original studies. They also published a detailed audit trail of the processes and decisions made. 2 Campbell et al. 1 argue ‘Meta-ethnography is a highly interpretative method requiring considerable immersion in the individual studies to achieve a synthesis. It places substantial demands upon the synthesiser and requires a high degree of qualitative research skill’. It is important to be able to think conceptually when undertaking a meta-ethnography, and it can be a time-consuming process. However, the ability of a meta-ethnography to synthesise a large number of primary research studies, generate new conceptual understandings and thus increase our understanding of patients’ experiences of pain makes it a very useful resource for our evidence-based practice.

The way forward

A register of qualitative systematic reviews would be useful for researchers and clinicians, so that there is a clear way of identifying existing qualitative reviews or reviews that are planned or underway. The Cochrane Collaboration does now have a register for protocols of qualitative systematic reviews being undertaken under the aegis of the Cochrane Qualitative and Implementation Methods Group. It would help those wanting to undertake qualitative systematic reviews if reviews that were underway were registered and described more clearly to prevent duplication of effort, for example, by using ‘qualitative systematic review’ and the methodological approach used (such as meta-ethnography) in the title and/or abstract. The Toye et al. 2 protocol 25 was accessible on the National Institutes for Health website from 2010. The Snelgrove and Liossi 17 study was done without external funding, so it would be difficult to pick up that it was underway. The MacNeela et al. 18 study was listed on the Irish Research Council for the Humanities and Social Sciences under their Research Development Initiative 2008–2009, but was described as ‘Motivation and Beliefs among People Experiencing Chronic Low Back Pain’, so it was not clearly identified at that stage as a qualitative systematic review. Finally, the Froud et al. 19 award details 26 do not mention qualitative systematic reviews or meta-ethnography. This highlights the difficulty of finding some of these reviews and the importance of a register of both completed and ongoing reviews.

This article has argued that qualitative systematic reviews have their place alongside or integrated with more quantitative approaches. There is an increasing body of evidence from qualitative systematic reviews. They can synthesise primary research, and this can be helpful for the busy practitioner. The methods for these approaches are still developing, and attention to rigour at each stage is crucial. It is important that each stage of the synthesis is reported transparently and that the researchers’ stance is clearly reported. 27 Meta-ethnographies published over the last year 2 , 15 , 17 – 19 have drawn together a wide range of primary studies and shown that people’s lives can be markedly changed by their pain across multiple dimensions of their life.

Declaration of Conflicting Interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.

  • Open access
  • Published: 14 August 2024

Qualitative studies involving users of clinical neurotechnology: a scoping review

  • Georg Starke 1 , 2 ,
  • Tugba Basaran Akmazoglu 3 ,
  • Annalisa Colucci 4 ,
  • Mareike Vermehren 4 ,
  • Amanda van Beinum 5 ,
  • Maria Buthut 4 ,
  • Surjo R. Soekadar 4 ,
  • Christoph Bublitz 7 ,
  • Jennifer A. Chandler 6 &
  • Marcello Ienca 1 , 2  

BMC Medical Ethics volume 25, Article number: 89 (2024)

The rise of a new generation of intelligent neuroprostheses, brain-computer interfaces (BCI) and adaptive closed-loop brain stimulation devices hastens the clinical deployment of neurotechnologies to treat neurological and neuropsychiatric disorders. However, it remains unclear how these nascent technologies may impact the subjective experience of their users. To inform this debate, it is crucial to have a solid understanding of how more established current technologies already affect their users. In recent years, researchers have used qualitative research methods to explore the subjective experience of individuals who become users of clinical neurotechnology. Yet, a synthesis of these more recent findings focusing on qualitative methods is still lacking.

To address this gap in the literature, we systematically searched five databases for original research articles that used qualitative interviews to investigate the subjective experiences of persons using or receiving neuroprosthetics, BCIs or neuromodulation, and that raised normative questions.

36 research articles were included and analysed using qualitative content analysis. Our findings synthesise the current scientific literature and reveal a pronounced focus on usability and other technical aspects of user experience. In parallel, they highlight a relative neglect of considerations regarding agency, self-perception, personal identity and subjective experience.

Conclusions

Our synthesis of the existing qualitative literature on clinical neurotechnology highlights the need to expand the current methodological focus so as to also investigate non-technical aspects of user experience. Given the critical role that considerations of agency, self-perception and personal identity play in assessing the ethical and legal significance of these technologies, our findings reveal a critical gap in the existing literature. This review provides a comprehensive synthesis of the current qualitative research landscape on neurotechnology and its limitations. These findings can inform researchers on how to study the subjective experience of neurotechnology users more holistically and build patient-centred neurotechnology.

Peer Review reports

Introduction

Due to a rapid expansion in public-private investment, market size and availability of Artificial Intelligence (AI) tools for functional optimization, the clinical advancement of novel neurotechnologies is accelerating [ 1 ]. Bidirectional intelligent Brain-Computer Interfaces (BCI) that aim at merging both read-out and write-in devices are in active development and are expanding in functional capabilities and commercial availability [ 2 , 3 ]. Such BCIs, which can decode and modulate neural activity through direct stimulation of brain tissue, promise additional avenues in the treatment of neurological diseases by adapting to the particularities of individual users’ brains. Potential applications include Parkinson’s disease [ 4 ] and epilepsy [ 5 ], as well as psychiatric disorders such as major depressive disorder [ 6 ] and obsessive-compulsive disorder [ 7 ]. Driven by these advances, and in conjunction with progress in deep learning and generative AI software as well as higher-bandwidth hardware, clinical neurotechnology is likely to take an increasingly central role in the prevention, diagnosis and treatment of neuropsychiatric disorders.

In line with these scientific trends, the last decade has seen a correspondingly rapid rise in the ethical attention devoted to neurotechnological systems that establish a direct connection with the human central nervous system [ 8 ], including neurostimulation devices. Yet, at times, neuroethical concerns may have outpaced real-life possibilities, particularly with regard to the impact of neurotechnology on personality, identity, autonomy, authenticity, agency or self (PIAAAS) [ 9 ]. This points to the need to base ethical assessments and personal decisions about deploying devices on solid empirical grounds. In particular, it is crucial to gain a comprehensive understanding of the lived experience of using neurotechnologies from the epistemically privileged first-person perspective of users – “what it is like” to use neurotechnologies. Its examination in empirical studies has added a vital contribution to the literature [ 10 ].

Yet, few reviews have attempted to synthesize the growing body of empirical studies on user experience with clinical neurotechnology. Burwell et al. [ 11 ] reviewed literature from biomedical ethics on BCIs up to 2016, identifying key ethical, legal and societal challenges, yet noting a lack of concrete ethical recommendations for implementation. Worries about a lack of attention to ethics in BCI studies have been further corroborated by two reviews by Specker Sullivan and Illes, reviewing BCI research published up until 2015. They critically assessed the rationales of BCI research studies [ 12 ] and found a remarkable absence of ethical language in published BCI research [ 13 ]. Taking a different focus, Kögel et al. [ 14 ] have provided a scoping review summarizing empirical studies investigating ethics of BCIs until 2017, with a strong focus on quantitative methods in the reviewed papers. Most recently, this list of reviews has been complemented by van Velthoven et al. [ 15 ], who review empirical and conceptual ethical literature on the use of visual neuroprostheses.

To the best of our knowledge, a specific review of qualitative research on the ethics of emerging neurotechnologies such as neuroprosthetics, BCIs and neuromodulation systems is still missing. We believe that qualitative research involving actual or prospective neurotechnology users is particularly significant, as it allows researchers to tap into the richness of first-person experiences, in contrast to standardized questionnaires without the option of free report. In the following, we synthesize published research on the subjective experience of using clinical neurotechnologies to enrich the ethical debate and provide guidance to developers and regulators.

On January 13, 2022, we conducted a search of relevant scientific literature across 5 databases, namely Pubmed (89 results), Scopus (178 results), Web of Science (79 results), PsycInfo (134 results) and IEEE Xplore (4 results). The search was performed for title, abstract and keywords, using a search string to identify articles employing qualitative methods that engaged with users of neurotechnology, and covered normative issues: [“qualitative” OR “interview” OR “focus group” OR “ethnography” OR “grounded theory” OR “discourse analysis” OR “interpretative phenomenological analysis” OR “thematic analysis”] AND [“user” OR “patient” OR “people” OR “person” OR “participant” OR “subject”] AND [“Brain-Computer” OR “BCI” OR “Brain-Machine” OR “neurostimulation” OR “neuromodulation” OR “TMS” OR “transcranial” OR “neuroprosthetic*” OR “neuroprosthesis” OR “DBS”] AND [“ethic*” OR “bioethic*” OR “normative” OR “value” OR “evaluation”].
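As a rough illustration of how such a four-block boolean query is assembled (our own sketch, not the authors' code; the grouping follows the string quoted above, and each database would still need its own adapted syntax):

```python
# Assemble the four concept blocks of the search string into one query.
# Each block is OR-combined internally; blocks are AND-combined.
method_terms = ["qualitative", "interview", "focus group", "ethnography",
                "grounded theory", "discourse analysis",
                "interpretative phenomenological analysis", "thematic analysis"]
user_terms = ["user", "patient", "people", "person", "participant", "subject"]
tech_terms = ["Brain-Computer", "BCI", "Brain-Machine", "neurostimulation",
              "neuromodulation", "TMS", "transcranial", "neuroprosthetic*",
              "neuroprosthesis", "DBS"]
norm_terms = ["ethic*", "bioethic*", "normative", "value", "evaluation"]

def block(terms):
    """Quote each term and join one concept group with OR."""
    return "[" + " OR ".join(f'"{t}"' for t in terms) + "]"

query = " AND ".join(block(group) for group in
                     (method_terms, user_terms, tech_terms, norm_terms))
print(query)
```

Keeping the blocks as plain lists makes it easy to regenerate the query per database, since the paper notes that the search syntax was adapted to each library's logic.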

Across databases, search syntax was adapted to reflect the respective logic of each library. Our search yielded a total of 484 articles. Of these, 133 duplicates were removed. 52 further results were marked as ineligible by automation tools, due to either not being written in English or not representing original research in a peer-reviewed journal. The remaining 299 were screened manually, with screening tasks being shared equally among the authors GS, TBA, AC, MV, CB, JC, and MI. Articles were included if they were written in English, published in a peer-reviewed journal, and reported original research of empirical qualitative findings among human users of a neurotechnological system that establishes a direct connection with the human central nervous system (including neurostimulation devices). Other types of articles, such as perspectives, letters to the editor, or review articles, were not included. Eligible methods included individual interviews, focus groups and stakeholder consultations, but studies that did not use any direct verbal input from the users were excluded. Each abstract was screened independently by two reviewers. Unclear cases were resolved by discussion among reviewers. This process resulted in the exclusion of 247 articles, leaving 52 publications for full-text assessment.

Full texts of these 52 articles were retrieved and assessed for eligibility. Again, this task was shared equally across the 7 authors who made independent recommendations whether an article was included for further analysis, and disagreement was resolved by discussion. 20 articles were excluded at this stage, due to not meeting the inclusion criteria. This resulted in a body of 32 articles plus 4 additional papers identified through citation chaining, as customary in scoping reviews.
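The screening flow described above can be checked arithmetically; the following sketch simply re-derives each count from the figures given in the text:

```python
# Sanity check of the PRISMA-style screening flow (numbers from the text).
identified = 89 + 178 + 79 + 134 + 4   # Pubmed, Scopus, WoS, PsycInfo, IEEE
assert identified == 484
after_dedup = identified - 133          # 133 duplicates removed
after_auto = after_dedup - 52           # 52 flagged ineligible by automation
assert after_auto == 299                # records screened manually
fulltext = after_auto - 247             # 247 excluded at abstract screening
assert fulltext == 52                   # retrieved for full-text assessment
included = fulltext - 20 + 4            # 20 full-text exclusions, 4 via
assert included == 36                   # citation chaining -> final synthesis
print(included)
```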

In the data analysis phase, we compiled a descriptive summary of the findings and conducted a thematic analysis. When compiling the descriptive summary, we followed the recommendations by Arksey and O’Malley [ 16 ] and included comprehensive information beyond authors, year, and title of the study, extracting also study location, methodology, study population, type of neurotechnology, and more. For the thematic analysis, the full text was read and coded by the authors through annotations in pdf files, with papers evenly distributed among the group. Coding was based on a previously agreed coding structure of four thematic families, covering (1) subjective experience with BCIs, (2) aspects concerning usability and technology, (3) ethical questions, (4) impact on social relations, and a fifth miscellaneous category for future resolution. In accordance with the suggestions by Braun and Clarke [ 17 ], codes that were not clearly covered by the coding tree were grouped into a category “miscellaneous”, and after discussion used to develop new themes or subsumed under the existing thematic families. The results were compiled and unified by the first author and imported into the Atlas.ti software (version 22.2), with adaptations to the coding tree being discussed between first and last author.
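A minimal sketch of the agreed coding structure (our illustration only; the example codes and their assignments are invented, and the actual coding was done through pdf annotations and Atlas.ti, not code):

```python
# Four thematic families plus a "miscellaneous" bucket for codes that do
# not fit, to be resolved into new or existing themes after discussion.
coding_tree = {
    "subjective experience with BCIs": [],
    "usability and technology": [],
    "ethical questions": [],
    "impact on social relations": [],
    "miscellaneous": [],
}

def assign(code: str, family: str) -> None:
    """File a code under a family; unknown families land in miscellaneous."""
    coding_tree.get(family, coding_tree["miscellaneous"]).append(code)

# Hypothetical codes, for illustration only:
assign("greater independence", "ethical questions")
assign("device comfort", "usability and technology")
assign("hope and expectation", "unknown family")   # -> miscellaneous
```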

In line with the framework suggested by Pham et al. [ 18 ], we adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) in conducting and presenting our results [ 19 ]. A flow diagram representing the entire process is depicted in Fig. 1.

Fig. 1 PRISMA flow diagram: search and screening strategy. Based on Page et al. [ 19 ]

Descriptive findings

Our study included 36 papers reporting original qualitative research among users of BCIs, neuroprosthetics and neuromodulation. We found a pronounced increase over time in the number of publications employing qualitative methods to investigate such neurotechnology users, with the earliest study dating back to 2012. However, contrary to what one might expect as a reflection of the growing number of neurotechnology users, we found neither an increase in the average sample size of participants enrolled in qualitative studies nor a correlation between year of publication and number of participants (see Fig. 2).

Fig. 2 Average number of participants and number of publications over time

The included studies were exclusively conducted in Western countries, with 11 studies from the US, 9 from Australia and the remaining 16 distributed across Europe (UK: 6, Germany: 4, Sweden, Netherlands and Switzerland 2 each). The majority of studies investigated the effects of invasive neurotechnology in the form of Deep Brain Stimulators (DBS) (26/36), especially in patients with Parkinson’s Disease (PD) (19/36). Many papers also investigated users’ experiences with non-invasive EEG-based BCIs (7/36), whereas all other technologies such as TMS, ECT, FES, intracortical microelectrode arrays, or spinal cord stimulation were only covered by one or two papers each. Due to the large focus on PD patients, other potential fields for clinical neurotechnological applications were much less present in the analysed research, with only 4 papers each investigating the effects of DBS on patients with major depressive disorder (4/36) or obsessive-compulsive disorder (OCD) (4/36). Across all technologies and patient groups, studies most frequently relied on semi-structured interviews with individual participants (28/36), with far fewer studies using focus groups (3/36) or other qualitative methods.

We found that a substantial number of papers (14/36) incorporated longitudinal aspects in their study design. For non-invasive BCIs, this comprised involving users in the development and testing of BCIs for acquired brain injury [20, 21], assessing subjective reports across sessions of experimental BCI training [22], or conducting a 2-month follow-up interview with users of a BCI for pain management after spinal cord injury [23]. Studies of invasive devices often included interviews pre- and post-implantation, sometimes with a third follow-up. In studies with two interviews, the post-implantation interview took place a few weeks [24, 25], 3 months [26], 9 months [27, 28], or a year [29] after implantation. In studies with three interviews, the post-implantation interviews were conducted either directly after surgery and again after 3 months in a study on spinal cord stimulation [30] or, in the case of DBS for PD, after 3 and 6 months [31, 32] or after 3–6 and 9–12 months respectively [33]. Table 1 provides a full overview of the included studies.

Thematic findings

Our findings from the thematic analysis can be grouped into four overlapping thematic families: (1) ethical challenges of neurotechnology use, (2) subjective experience with clinical neurotechnologies, (3) impact on social relations, and (4) usability and technological aspects. The raw data underlying our findings are available in the supplementary file.

Ethical concerns

With respect to users' experiences of neurotechnology that touch on classical ethical topics, we found that autonomy played a central role in slightly more than half of all papers (20/36), albeit in four different ways. Many papers noted the positive impact of neurotechnology on users' autonomy. Users often perceive the technology as an enabler of greater control over their own life, allowing them "to become who they wanted to be" [2], providing them with agency and greater independence, restoring their ability to help others, or allowing them to be more spontaneous in everyday life [2, 10, 28, 31, 32, 34, 35, 36, 37]. Some studies reported how neurotechnology may impact users' autonomy negatively, especially by making them more dependent on technological and medical support [25, 28, 35, 38, 39]. When balancing these positive and negative impacts, some users seem to prefer such dependency and to leave control over the devices to healthcare professionals, to ensure their safe and appropriate functioning [2, 32, 39, 40]. Also related to autonomy were concerns about consent, particularly regarding the amount of information patients received before the implantation of an invasive device, which some patients deemed inadequate [2, 24, 31, 34, 38, 39, 40, 41, 42, 43, 44, 45, 46]. Several papers called for including patients in the technology design process [2, 31, 39]. In addition, questions of responsibility and accountability in case of malfunction were repeatedly named as a key concern [10, 25, 37, 38, 45, 47].

Concerns about beneficence and about harming patients also featured prominently in most of the analysed papers (24/36), yet with substantive differences at a more granular level. While symptom improvement and restorative changes were widely reported [2, 10, 23, 26, 29, 31, 33, 34, 35, 38, 39, 40, 43, 44, 46], some users reported physical or psychological side effects, such as postoperative complications, new worries (for instance about magnetic fields or about changing batteries), stigma, or a heightened awareness of their past suffering [23, 25, 26, 28, 34, 35, 36, 37, 38, 39, 40, 42, 46, 48, 49]. Less frequently, we found concerns about patient-doctor relationships [2, 24, 32, 40, 42, 43], which seem to mediate the acceptance of clinical neurotechnologies but are also themselves affected by technology use. For instance, while some research points to the importance of patients' trust in healthcare professionals for the acceptance of neurotechnology [24], one personal narrative described a breakdown of the patient-physician relationship following a distressing DBS implantation for treating PD [42].

Impact on subjective experiences

Since the subjective lived experiences of neurotechnology users commonly constituted the central element of the reviewed qualitative papers, we found a rich field of reports in the vast majority of papers (31/36), describing experiences that were perceived as positive, negative or neutral. Neurotechnology-induced changes in behaviour [28, 36, 37, 40, 42, 46, 47, 49], feelings [27, 41, 42], (self-)perception [10, 23, 34, 36, 40, 41, 42, 44, 48, 50], personality [27, 29, 34, 35, 36, 37, 42, 43, 44, 47, 49], preferences [49, 50] and thinking [10, 41] were also reported, particularly in users receiving continuous, non-adaptive deep brain stimulation (DBS).

Behavioural changes often concerned desired outcomes, such as fewer obsessive thoughts and compulsive behaviours after successful OCD treatment [49], acting with less impediment thanks to seizure prediction [36], or acting more boldly, with more energy and increased confidence, due to symptom improvement in PD [37, 47]. Nevertheless, patients and those around them had to adapt and get used to the new patterns of behaviour. Some patients also reported undesirable behavioural changes after subthalamic DBS implantation, "bordering on mania" [42], such as being excessively talkative [46] or shopping compulsions that the patient later described as "ridiculous" [28].

These outwardly observable changes were often related to psychological changes that users reported. Some DBS users experienced mood changes, ranging from elevated to depressed [27, 41, 42, 44], while others reported changed preferences. Sometimes this affected what users valued as important in life [50]; sometimes it concerned very particular preferences, such as taste in music, with one patient attributing a transition from The Rolling Stones and The Beatles to Johnny Cash to their DBS implantation [49]. In patients treated for OCD or motor disorders, two studies also found positive impacts on users' thinking, whether by freeing them from obsessive thoughts [41] or by improving their concentration [10]. In line with the extensive neuroethical debate on the subject, such changes at times amounted to what neurotechnology users described as personality changes. These included negative impacts, such as being more irritable, anxious or less patient [34, 35] or an excessively increased libido [49]; neutral changes, such as (re-)taking an interest in politics or movies [49]; and positive changes linked to the improvement of psychiatric symptoms, such as being more easy-going and daring, more expressive and assertive, or simply more confident [35, 49].

In line with the diversity of these changes, patients reported a vast spectrum of attitudes towards, and relations with, the neurotechnology. Some users embraced the BCI explicitly as part of themselves [14, 37, 39, 49] and described how "DBS becomes a part of who you are rather than changing you" [37]. Others felt estranged using the BCI [28, 36, 37, 42, 49] and even expressed the desire to remove the alien device in forceful terms: "I hate it! I wish I could pull it out!" [37]. Aside from changes brought about by the device, patients' state before using the neurotechnology, and especially their relation to their illness, seemed to play a crucial role [28, 51]. An overview of the thematic findings is provided in Fig. 3.

figure 3

Impact of clinical neurotechnology on subjective experience. The colours represent the valence of the impact, with orange dots representing negative, green dots representing positive, and blue dots representing ambivalent changes

A clear majority of studies (23/36) reported improvements of the treated symptoms [2, 26, 28, 31, 33, 34, 35, 37, 40, 41, 42, 43, 46, 47, 48, 49, 50, 52], making patients' lives easier [48, 49] or, as some put it, even saving their lives [34, 45, 48]. Patients felt that the neurotechnology allowed them to be more active [33, 34, 40] and to return to previous forms of behaviour [33, 40, 48, 49], strengthening their sense of freedom and independence [2, 10, 22, 33, 34, 35, 36, 40, 43, 49, 50, 53]. Emotionally, users reported feeling more daring [29, 35], more self-confident [28, 35, 36, 37, 44] or more stable [34, 50], as well as feelings of hope or joy [10, 22, 35, 50]. For better or worse, such changes were sometimes perceived as providing a "new start" [34, 48] or even a "new identity" [34, 41, 42, 49], while others perceived their changes as a reversion to their "former" [28, 29, 47, 49, 50] or "real" self [36, 42, 49].

Among the negative subjective impacts of clinical neurotechnology mentioned in the literature (16/36), users commonly reported issues of estrangement, caused by self-perceived changes to behaviour, feelings, personality traits, or to patients' relation to their disease or disorder [28, 36, 37, 42, 49]. The negative impact differed markedly depending on the type of neurotechnology used and on the disorders and symptoms it treated. While ALS patients using non-invasive BCIs as spelling interfaces reported increased anxiety when interacting with the devices [53], PD patients with invasive DBS reported presurgical fears of pain and of the invasive procedure, as well as fear of external manipulation of their brain through the DBS implant [40, 43, 54]. Frequently, it was not entirely clear whether adverse developments, such as further cognitive decline, were attributable to the implanted device or to the persisting disease and its natural trajectory [31, 33, 34, 40, 43, 48, 50]. Occasionally, however, very severe psychiatric consequences of treatment were reported, notably by one PD patient who experienced mania and depressive symptoms following DBS treatment, resulting in a suicide attempt [42]. For DBS patients with OCD, negative impacts seem more related to the difficulty of adapting to the new situation [35, 49], for instance to a suddenly increased libido as a side effect of DBS that may be perceived as "too much" [49], or to a perceived lack of preparation for their new (OCD-free) identity [41]. In two studies on patients with OCD, the sudden improvement of symptoms also led to moments of existential crisis, given that the symptoms had shaped a great part of patients' previous daily activities [41, 49].

Impact on social relations

Using a neurotechnology not only impacts users but can also affect their social relations with others (23/36), particularly with primary caregivers. While some neurotechnologies, such as non-invasive BCIs for communication, may create additional workload for caregivers when the BCI needs to be set up, neurotechnologies can also reduce caregiver burden by rendering patients more independent [10, 34, 40, 53]. Beyond workload, neurotechnologies were also reported to enrich social relations by facilitating communication [10, 34, 53], though in some cases they created tension between informal caregivers and patients, e.g. due to personality changes [28, 35, 37, 40, 42, 47, 49, 55] or when the device was blamed for a patient's behaviour or suggested as a solution to interpersonal problems [2]. Whether positive or negative, family and social support reportedly played a vital role in treatment [2, 28, 40, 50].

Similarly important were support from clinicians [39, 40] and the wish for support groups with fellow neurotechnology users [27, 30, 40, 41]. Inclusion in research activities was also reported as a positive effect of (experimental) BCIs [10, 38]. More importantly, in a large number of studies neurotechnology users reported positive effects on their social relations [2, 29, 35, 43, 46, 48, 50], with some users reporting an increased wish to help others [35, 50]. A negative social consequence was perceived stigma in public [25, 35, 48], even though some patients chose to actively show their device in public "to spread information and knowledge about this treatment" [39].

Usability concerns

Concerns with technical questions and usability issues, comprising efficiency, effectiveness and satisfaction [52], were also raised by almost half of the research papers (17/36), yet differed greatly between neurotechnologies, owing to large differences in hardware (e.g., between EEG caps and implanted electrodes) and handling (e.g., between passive neurostimulation and training-intensive active BCIs). Across all applications, invasive as much as non-invasive, the most frequent concerns (8/36 each) related to hardware issues [2, 22, 23, 38, 39, 46, 52, 53] and to the fine-tuning required to find optimal device settings, which imposed a time burden on users [20, 23, 27, 32, 39, 46, 50, 56]. Similarly, the patient training required for the successful use of non-invasive, active BCIs was perceived as cumbersome or complicated, a potential obstacle to implementation in everyday contexts [38, 52]. Several studies reported that the use of such active BCIs required considerable concentration, leading to fatigue after prolonged use [10, 38, 53]. Mediating factors to address such obstacles were the availability of technical support [33, 53], general attitudes towards technology [53], the ease of integrating the technology into everyday life [10, 38, 53] and realistic expectations regarding the neurotechnology's effects [30, 38, 40, 46].

Discussion

The identified publications highlight that qualitative research through interviews and focus groups offers a useful way to gain access to the subjective experience of users of a diverse range of neurotechnologies. Such investigation of users' privileged knowledge about novel devices is in turn crucial for improving future neurotechnological developments and aligning them with ethical considerations at an early stage [57]. Here, we discuss our findings by comparing different clinical neurotechnologies, identify gaps in the literature and point to the limitations of our scoping review.

One finding of our scoping review is that qualitative research on neurotechnologies has so far primarily focused on users of DBS treated for PD. In part, this may reflect that DBS is an established, effective treatment for controlling motor symptoms in PD and improving patients' quality of life, resulting in its widespread adoption in many different healthcare systems worldwide [58, 59, 60, 61]. Still, it would be highly beneficial to extend qualitative research to other patient groups and other clinical neurotechnologies that directly target mental states or processes, where more pronounced effects on subjective experience may be expected.

A potential obstacle to involving more neurotechnology users beyond PD patients treated with DBS is that, for many other technologies, users are still likely to receive their treatment as part of an experimental trial. Qualitative research with such patients may face the additional practical barrier of convincing the trial researchers to facilitate access to their patients. Better communication across disciplines and research fields may ease such access, providing much-needed insights into user experiences of experimental neurotechnologies.

Some of the articles reviewed here already offer such perspectives, e.g. those investigating DBS for major depressive disorder or OCD. Such research may also help to clarify which differences in subjective outcome are attributable to the technology and which to differences in the treated disorders. As different patient groups are likely to have different needs and views, further research is needed to explore those needs and views and to develop implementation strategies that address them in a patient-tailored manner. Furthermore, different neurotechnologies (and applications thereof) are likely to impact the minds of their users in different ways. Future research should therefore investigate whether the type and modality of stimulation exert differential impacts on the subjective experience of end users.

Our findings reveal differential effects between patients using DBS for the treatment of PD and patients using DBS for the treatment of OCD. For example, some reported effects of invasive neurotechnology, such as the induction of more assertive behaviour, may be a reason for concern in PD [28] while being considered a successful treatment outcome in OCD [35, 49]. More comparative research among DBS users treated for OCD or other neuropsychiatric disorders, such as depression, is needed [62] and may help to better understand which experiences are directly attributable to the stimulation of specific brain areas, such as the subthalamic nucleus in PD and the nucleus accumbens in OCD, and which result from other factors, e.g. those related to undergoing surgery or to the different treatment settings of neurological and psychiatric care [63, 64].

Research on such differences may also have practical consequences. For instance, one may wonder whether different preparation stages, and possibly different degrees of information for obtaining consent, are called for between invasive clinical neurotechnologies used in psychiatry and in neurology, or whether, on the contrary, similarities in the use of neurotechnologies ultimately point towards ending the distinction between mental and neurological illnesses [63]. In either case, our findings highlight that the psychological impacts of clinical neurotechnologies are complex, multifaceted phenomena, mediated by many factors, calling for more qualitative research to better grasp the lived experiences of those using novel neurotechnologies.

Our scoping review identified several gaps in the literature related to research methodology, investigated topics and investigated neurotechnologies. First, while a substantial number of studies embraced a longitudinal approach to investigating users' experiences, none of the included studies looked at impacts beyond a timeframe of one year. However, as is known from DBS studies in major depressive disorder, it is important to investigate and evaluate the long-term effects of neurotechnologies such as DBS [6]. Future qualitative research should address this gap. Second, and connected to this, some research questions have not yet been investigated in full, such as the long-term impacts of clinical neurotechnologies on memory or belief continuity. Third, empirical findings on closed-loop neurotechnologies that integrate artificial intelligence remain nascent [2, 36]. As important conceptual and ethical questions arise specifically from the integration of human and artificial intelligence, e.g. questions of control and responsibility, further qualitative research should be conducted with users of such devices.

Finally, our findings reveal a complex and multifaceted landscape of ethical considerations. While considerations regarding personal autonomy are prevalent among users, the perceived or expected impacts of neurotechnology use on personal autonomy differ significantly. Some studies suggest that neurotechnology use may enhance personal autonomy by allowing users to be more autonomous and independent in their daily lives and even to regain part of the autonomous control that their disorders had disrupted. Other studies suggest that some neurotechnologies, especially neural implants relying on autonomous components, may diminish autonomy by overriding some users' intentions. Sometimes this ambivalent effect is observed within the same study. This is consistent with previous theoretical reflections on the topic [65] and urges scientists to develop fine-grained, patient-centred models for assessing the impact of neurotechnology on personal autonomy. Such models should distinguish on-target from off-target effects and elucidate which subcomponents of personal autonomy (e.g., volition, behavioural control, authenticity) are affected by the use of neurotechnology.

Our scoping review has several limitations. Owing to the nature of a scoping review and to our inclusion criteria, there may be relevant literature that we failed to identify and analyse. For instance, since we only included English-language publications, we may have missed relevant research published in other languages, which may explain why we only found qualitative studies conducted in Western countries. Furthermore, our narrow search strategy excluded other relevant research, for instance qualitative studies conducted with potential users of clinical neurotechnology or with caregivers. Yet a scoping review can provide a useful tool to map existing literature [16, 18], and given recent advances in technology and the accompanying qualitative research, an update of earlier reviews, such as the one by Kögel et al. [14], provides an important addition to the existing literature. By looking at qualitative studies only, we also import the general limitations of qualitative research, such as limited generalizability and a dependency on the skills and experience of the involved researchers. More standardized instruments to complement the investigation of the subjective experiences of neurotechnology users therefore seem highly desirable. Recent quantitative approaches, such as online surveys assessing the subjective preferences of DBS users concerning the timing of implantation [66], and studies combining qualitative data with quantitative assessments [67], point in this direction. Additionally, experimental approaches to monitoring and evaluating the effects of neurotechnology on the user's experience are currently absent. Future research should therefore complement qualitative and quantitative user evaluations based on social science methods (e.g., interviews, focus groups and questionnaires) with experimental models.

Conclusion

The findings of our review emphasize the diversity of individual experiences with neurotechnology across users and technologies. They underscore the need to conduct qualitative research among diverse groups at different time points to better assess the impact of such technologies on their users; such assessments are important for informing efficacy and safety requirements for clinical neurotechnologies. In addition, qualitative research offers a way to bring user-centred ethical considerations into product development through user-centred design and to accompany novel neurotechnologies with ethical reflection as they mature and become the clinical standard.

Data availability

The availability of the full data supporting the findings of this study is subject to restrictions due to the copyright of the included papers. The quotes analysed during this study are included in this published article and its supplementary information files. Further data are available from the authors upon request.

Footnote 1: As many publications included patients with different diagnoses or investigated the effects of different neurotechnologies, the numbers indicated here do not add up to the total of 36 included studies.

References

1. UNESCO. Unveiling the neurotechnology landscape: scientific advancements, innovations and major trends. 2023.

2. Klein E, et al. Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces. 2016;3(3):140–8.

3. Kellmeyer P, et al. The effects of closed-loop medical devices on the autonomy and accountability of persons and systems. Camb Q Healthc Ethics. 2016;25(4):623–33.

4. Limousin P, Foltynie T. Long-term outcomes of deep brain stimulation in Parkinson disease. Nat Rev Neurol. 2019;15(4):234–42.

5. Alkawadri R. Brain–computer interface (BCI) applications in mapping of epileptic brain networks based on intracranial-EEG: an update. Front Neurosci. 2019;13:191.

6. Crowell AL, et al. Long-term outcomes of subcallosal cingulate deep brain stimulation for treatment-resistant depression. Am J Psychiatry. 2019;176(11):949–56.

7. Mar-Barrutia L, et al. Deep brain stimulation for obsessive-compulsive disorder: a systematic review of worldwide experience after 20 years. World J Psychiatry. 2021;11(9):659.

8. Clausen J, et al. Help, hope, and hype: ethical dimensions of neuroprosthetics. Science. 2017;356(6345):1338–9.

9. Gilbert F, Viaña JNM, Ineichen C. Deflating the DBS causes personality changes bubble. Neuroethics. 2021;14(1):1–17.

10. Kögel J, Jox RJ, Friedrich O. What is it like to use a BCI? Insights from an interview study with brain-computer interface users. BMC Med Ethics. 2020;21(1):2.

11. Burwell S, Sample M, Racine E. Ethical aspects of brain computer interfaces: a scoping review. BMC Med Ethics. 2017;18(1):1–11.

12. Sullivan LS, Illes J. Beyond 'communication and control': towards ethically complete rationales for brain-computer interface research. Brain-Computer Interfaces. 2016;3(3):156–63.

13. Specker Sullivan L, Illes J. Ethics in published brain–computer interface research. J Neural Eng. 2018;15(1):013001.

14. Kögel J, et al. Using brain-computer interfaces: a scoping review of studies employing social research methods. BMC Med Ethics. 2019;20(1):18.

15. van Velthoven E, et al. Ethical implications of visual neuroprostheses: a systematic review. J Neural Eng. 2022;19(2):026055.

16. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

17. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

18. Pham MT, et al. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods. 2014;5(4):371–85.

19. Page MJ, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10(1):1–11.

20. Mulvenna M, et al. Realistic expectations with brain computer interfaces. J Assist Technol. 2012;6(4):233–44.

21. Martin S, et al. A qualitative study adopting a user-centered approach to design and validate a brain computer interface for cognitive rehabilitation for people with brain injury. Assist Technol. 2018;30(5):233–41.

22. Kryger M, et al. Flight simulation using a brain-computer interface: a pilot, pilot study. Exp Neurol. 2017;287:473–8.

23. Al-Taleb M, et al. Home used, patient self-managed, brain-computer interface for the management of central neuropathic pain post spinal cord injury: usability study. J Neuroeng Rehabil. 2019;16(1):1–24.

24. Wexler A, et al. Ethical issues in intraoperative neuroscience research: assessing subjects' recall of informed consent and motivations for participation. AJOB Empir Bioeth. 2022;13(1):57–66.

25. Goering S, Wexler A, Klein E. Trading vulnerabilities: living with Parkinson's disease before and after deep brain stimulation. Camb Q Healthc Ethics. 2021;30(4):623–30.

26. Maier F, et al. Patients' expectations of deep brain stimulation, and subjective perceived outcome related to clinical measures in Parkinson's disease: a mixed-method approach. J Neurol Neurosurg Psychiatry. 2013;84(11):1273–81.

27. Thomson CJ, Segrave RA, Carter A. Changes in personality associated with deep brain stimulation: a qualitative evaluation of clinician perspectives. Neuroethics. 2021;14:109–24.

28. Thomson CJ, et al. He's back so I'm not alone: the impact of deep brain stimulation on personality, self, and relationships in Parkinson's disease. Qual Health Res. 2020;30(14):2217–33.

29. Lewis CJ, et al. Subjectively perceived personality and mood changes associated with subthalamic stimulation in patients with Parkinson's disease. Psychol Med. 2015;45(1):73–85.

30. Ryan CG, et al. An exploration of the experiences and educational needs of patients with failed back surgery syndrome receiving spinal cord stimulation. Neuromodulation. 2019;22(3):295–301.

31. Kubu CS, et al. Patients' shifting goals for deep brain stimulation and informed consent. Neurology. 2018;91(5):e472–8.

32. Merner AR, et al. Changes in patients' desired control of their deep brain stimulation and subjective global control over the course of deep brain stimulation. Front Hum Neurosci. 2021;15:642195.

33. Liddle J, et al. Impact of deep brain stimulation on people with Parkinson's disease: a mixed methods feasibility study exploring lifespace and community outcomes. Hong Kong J Occup Ther. 2019;32(2):97–107.

34. Chacón Gámez YM, Brugger F, Biller-Andorno N. Parkinson's disease and deep brain stimulation have an impact on my life: a multimodal study on the experiences of patients and family caregivers. Int J Environ Res Public Health. 2021;18(18):9516.

35. de Haan S, et al. Effects of deep brain stimulation on the lived experience of obsessive-compulsive disorder patients: in-depth interviews with 18 patients. PLoS ONE. 2015;10(8):e0135524.

36. Gilbert F, et al. Embodiment and estrangement: results from a first-in-human intelligent BCI trial. Sci Eng Ethics. 2019;25(1):83–96.

37. Gilbert F, et al. I miss being me: phenomenological effects of deep brain stimulation. AJOB Neurosci. 2017;8(2):96–109.

38. Grübler G, et al. Psychosocial and ethical aspects in non-invasive EEG-based BCI research: a survey among BCI users and BCI professionals. Neuroethics. 2014;7(1):29–41.

39. Hariz G-M, Hamberg K. Perceptions of living with a device-based treatment: an account of patients treated with deep brain stimulation for Parkinson's disease. Neuromodulation. 2014;17(3):272–8.

40. Liddle J, et al. Mapping the experiences and needs of deep brain stimulation for people with Parkinson's disease and their family members. Brain Impairment. 2019;20(3):211–25.

41. Bosanac P, et al. Identity challenges and 'burden of normality' after DBS for severe OCD: a narrative case study. BMC Psychiatry. 2018;18(1):186.

42. Gilbert F, Viaña JNM. A personal narrative on living and dealing with psychiatric symptoms after DBS surgery. Narrat Inq Bioeth. 2018;8(1):67–77.

43. Cabrera LY, Kelly-Blake K, Sidiropoulos C. Perspectives on deep brain stimulation and its earlier use for Parkinson's disease: a qualitative study of US patients. Brain Sci. 2020;10(1).

44. Bluhm R, et al. They affect the person, but for better or worse? Perceptions of electroceutical interventions for depression among psychiatrists, patients, and the public. Qual Health Res. 2021;31(13):2542–53.

45. Sankary LR, et al. Exit from brain device research: a modified grounded theory study of researcher obligations and participant experiences. AJOB Neurosci. 2021;1–12.

46. Thomson CJ, et al. Nothing to lose, absolutely everything to gain: patient and caregiver expectations and subjective outcomes of deep brain stimulation for treatment-resistant depression. Front Hum Neurosci. 2021;15:755276.

47. Mosley PE, et al. 'Woe betides anybody who tries to turn me down.' A qualitative analysis of neuropsychiatric symptoms following subthalamic deep brain stimulation for Parkinson's disease. Neuroethics. 2021;14:47–63.

48. Hariz G-M, Limousin P, Hamberg K. DBS means everything – for some time. Patients' perspectives on daily life with deep brain stimulation for Parkinson's disease. J Parkinsons Dis. 2016;6(2):335–47.

49. de Haan S, et al. Becoming more oneself? Changes in personality following DBS treatment for psychiatric disorders: experiences of OCD patients and general considerations. PLoS ONE. 2017;12(4):e0175748.

50. Shahmoon S, Smith JA, Jahanshahi M. The lived experiences of deep brain stimulation in Parkinson's disease: an interpretative phenomenological analysis. Parkinsons Dis. 2019;2019(1):1937235.

51. Adamson AS, Welch HG. Machine learning and the cancer-diagnosis problem – no gold standard. N Engl J Med. 2019;381(24):2285–7.

52. Zulauf-Czaja A, et al. On the way home: a BCI-FES hand therapy self-managed by sub-acute SCI participants and their caregivers: a usability study. J Neuroeng Rehabil. 2021;18(1):1–18.

53. Blain-Moraes S, et al. Barriers to and mediators of brain-computer interface user acceptance: focus group findings. Ergonomics. 2012;55(5):516–25.

54. LaHue SC, et al. Parkinson's disease patient preference and experience with various methods of DBS lead placement. Parkinsonism Relat Disord. 2017;41:25–30.

55. Lewis CJ, et al. The impact of subthalamic deep brain stimulation on caregivers of Parkinson's disease patients: an exploratory study. J Neurol. 2015;262(2):337–45.

56. Cabrera LY, et al. Beyond the cuckoo's nest: patient and public attitudes about psychiatric electroceutical interventions. Psychiatr Q. 2021;92(4):1425–38.

57. Jongsma KR, Bredenoord AL. Ethics parallel research: an approach for (early) ethical guidance of biomedical innovation. BMC Med Ethics. 2020;21(1):1–9.

58. Lozano AM, et al. Deep brain stimulation: current challenges and future directions. Nat Rev Neurol. 2019;15(3):148–60.

59. Schuepbach W, et al. Neurostimulation for Parkinson's disease with early motor complications. N Engl J Med. 2013;368(7):610–22.

60. Follett KA, et al. Pallidal versus subthalamic deep-brain stimulation for Parkinson's disease. N Engl J Med. 2010;362(22):2077–91.

61. Mahajan A, et al. Global variability in deep brain stimulation practices for Parkinson's disease. Front Hum Neurosci. 2021;15:667035.

62. Bublitz C, Gilbert F, Soekadar SR. Concerns with the promotion of deep brain stimulation for obsessive-compulsive disorder. Nat Med. 2023.

63. White P, Rickards H, Zeman A. Time to end the distinction between mental and neurological illnesses. BMJ. 2012;344.

64. Martin JB. The integration of neurology, psychiatry, and neuroscience in the 21st century. Am J Psychiatry. 2002;159(5):695–704.

Ferretti A, Ienca M. Enhanced cognition, enhanced self? On neuroenhancement and subjectivity. J Cogn Enhancement. 2018;2(4):348–55.

Montemayor J, et al. Deep brain stimulation for Parkinson’s Disease: why earlier use makes Shared decision making important. Neuroethics. 2022;15(2):1–11.

Maier F, et al. Subjective perceived outcome of subthalamic deep brain stimulation in Parkinson’s disease one year after surgery. Parkinsonism Relat Disord. 2016;24:41–7.

Download references

Acknowledgements

GS would like to thank the attendees of the ERA-NET NEURON mid-term seminar (Madrid, January 2023) for kind and constructive feedback on an earlier draft.

This work was supported by the ERA-NET NEURON project HYBRIDMIND (SNSF 32NE30_199436; BMBF, 01GP2121A and -B), and in part by the European Research Council (ERC) under the project NGBMI (759370), the Federal Ministry of Research and Education (BMBF) under the projects SSMART (01DR21025A), NEO (13GW0483C), QHMI (03ZU1110DD), QSHIFT (01UX2211) and NeuroQ (13N16486), as well as the Einstein Foundation Berlin (A-2019-558).

Author information

Authors and Affiliations

Faculty of Medicine, Institute for History and Ethics of Medicine, Technical University of Munich, Munich, Germany

Georg Starke & Marcello Ienca

College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Faculty of Law, University of Ottawa, Ottawa, ON, Canada

Tugba Basaran Akmazoglu

Clinical Neurotechnology Laboratory, Department of Psychiatry and Neurosciences at the Charité Campus Mitte, Charité – Universitätsmedizin Berlin, Berlin, Germany

Annalisa Colucci, Mareike Vermehren, Maria Buthut & Surjo R. Soekadar

Centre for Health Law Policy and Ethics, University of Ottawa, Ottawa, ON, Canada

Amanda van Beinum

Bertram Loeb Research Chair, Faculty of Law, University of Ottawa, Ottawa, ON, Canada

Jennifer A. Chandler

Faculty of Law, Universität Hamburg, Hamburg, Germany

Christoph Bublitz


Contributions

GS, TBA, AC, MV, SS, CB, JC and MI contributed to the design and planning of the review, conducted the literature searches and organized and analyzed collected references. GS and MI wrote different sections of the article. All authors provided review of analysis results and suggested revisions for the write-up. All authors reviewed and approved the manuscript before submission.

Corresponding author

Correspondence to Georg Starke .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Starke, G., Akmazoglu, T.B., Colucci, A. et al. Qualitative studies involving users of clinical neurotechnology: a scoping review. BMC Med Ethics 25 , 89 (2024). https://doi.org/10.1186/s12910-024-01087-z


Received : 23 January 2023

Accepted : 02 August 2024

Published : 14 August 2024

DOI : https://doi.org/10.1186/s12910-024-01087-z


  • Neurotechnology
  • Qualitative research
  • Subjective experience
  • Self-perception
  • Patient-centred technology



  • Open access
  • Published: 14 August 2024

Investigating the implementation challenges of the research doctoral program and providing related solutions: a qualitative study

  • Alireza Koohpaei 1 ,
  • Maryam Hoseini Abardeh 2 ,
  • Shahnaz Sharifi 3 ,
  • Majid Heydari 2 &
  • Zeynab Foroughi 4  

BMC Medical Education volume  24 , Article number:  878 ( 2024 ) Cite this article


Background

Doctoral programs have consistently garnered the attention of policymakers in medical education systems because of their significant impact on the socio-economic advancement of countries, and various doctoral programs have therefore been implemented with diverse goals. In Iran, a research doctorate program, known as the Ph.D. by Research, was introduced primarily to engage students in applied research related to healthcare needs. Nevertheless, the achievement of the program's goals has been questioned. This study aimed to identify the implementation challenges of the research doctorate program, and related solutions, in Iran.

Methods

This descriptive qualitative study followed the Standards for Reporting Qualitative Research: A Synthesis of Recommendations and was conducted in two steps. First, the challenges of the Iranian Ph.D. by Research program were identified from the perspectives of the program's students and graduates. In the second step, relevant solutions to these challenges were determined through focus groups of key informant experts. The transcripts were analyzed using qualitative content analysis.

Results

Five students and six graduates were interviewed in the first step, and seven experts participated in the second. The challenges and related solutions are explored under four main themes: (1) admission criteria, (2) program goals and expected outcomes, (3) curricula, and (4) financial and human resources. The study showed that the various dimensions of the doctoral program are not aligned with one another and indicated how the program could be adapted in each of these dimensions.

Conclusions

The study revealed the importance of a systematic approach to defining the various dimensions of doctoral programs according to program goals, and provided specific solutions for defining a research doctorate program in the context of a low- and middle-income country.


Background

Doctoral education plays a strategic role in national and regional economic, scientific, technological, and social development [1]. It lies at the heart of a university's research capacity and is recognized as a primary source of research productivity and innovation in the global knowledge economy [2]. Hence, doctoral education captures the interest of policymakers at both international and national levels, as well as institutional leaders [3, 4].

Over the past decades, doctoral education has undergone a profound transformation [5] and now takes various forms that can affect the quality and success of doctoral programs [6]. Doctoral programs offer students a study plan in their chosen field, which helps them gain a broad understanding of their discipline, develop expertise in its fundamental knowledge and methodologies, and acquire the competencies to contribute meaningful and practical scientific advances [7]. They also prepare candidates for their various academic tasks [8].

Around the world, universities and medical education systems have established various types of doctoral programs tailored to their own goals and requirements, so a wide range of doctoral programs exists. The most prevalent form is the 'Doctor of Philosophy' or Ph.D., which recognizes students' expertise in conducting research and generating novel knowledge [3]. The Ph.D. also represents the highest level of formal education, because it equips individuals with the knowledge and skills needed to push forward the boundaries of a specific field [9]. Traditional Ph.D. programs typically center on a dissertation; there are also taught Ph.D. and Ph.D.-by-publication models, which emphasize coursework and publications respectively. In addition, various work-based and professional doctoral programs exist to better prepare graduates for the work environment [10]. The most important reasons for reforming traditional doctoral programs and diversifying them include: increasing graduates' employment opportunities in the private sector [11], a heightened focus on commercializing research outcomes [12], fostering competition and enhancing graduates' skills, facilitating career transitions from academia to industry through industry-university collaborations [13], and aligning with market demands in a competitive and dynamic knowledge-based economy [14].

Extensive research has been conducted on doctoral programs, resulting in a substantial literature. Some studies have focused on students' experiences during the doctoral journey, because students go through an emotionally and intellectually demanding process that encompasses a diverse range of both positive and negative experiences [15]. Indeed, their lives are truly a 'constant juggling act', and they may encounter challenges that undergraduates do not [16, 17]. From this perspective, Pyhältö et al. (2012) reported that doctoral students' problems related to supervision, the research community, domain-specific issues, the general working process, and resources [17]. Prendergast et al. studied the well-being of doctoral students [16].

Other studies have concentrated on the evaluation of doctoral programs. For example, Cross and Backhouse conducted a comprehensive investigation of the limitations, obstacles, and possibilities within African doctoral education. They also proposed a framework for evaluating these programs consisting of six elements: (1) expected outcomes, (2) candidates in context, (3) curriculum, (4) structures, (5) resources, and (6) funding and partnership opportunities [18]. Meuleners et al. evaluated five aspects of 82 life science doctoral programs in Germany: (1) interdisciplinarity, (2) international orientation, (3) courses offered, (4) formal characteristics of supervision, and (5) examination regulations [6].

Assessments of research-doctorate programs have been conducted in different regions, such as the United States [19] and Africa [20]. The University of Pennsylvania School of Nursing revised its research-focused doctorate program in October 2019. The proposed changes included enhancing the readiness of Ph.D. graduates to connect research with practical applications, redesigning funding and support systems for students on an accelerated Ph.D. track, and developing ways to measure and evaluate graduates' achievements [21].

In research-focused doctorates, it is crucial for doctoral students to gain a deep understanding of specific concepts in order to become independent researchers [22]. Studies in this area have demonstrated that traditional Ph.D. programs may not adequately provide graduates with the essential skills and knowledge they need [23]. To ensure that doctoral candidates complete their programs successfully, it is important to develop doctoral programs that adapt to candidates' learning needs and to overcome barriers to the desired outcomes [8].

In 2008, Iranian educational policymakers in the Ministry of Health and Medical Education (MoHME) decided to design a research-focused doctorate program (Ph.D. by Research) to enhance the practicality of doctoral education and connect it to job requirements. The purpose of this program was to educate candidates who can meet the needs of the country and expand the boundaries of knowledge by using advanced research methods and the latest research for problem solving [24]. The program consists of two parts: in the first part (M.Phil.), candidates learn theoretical and practical research and technology skills; in the second, they conduct a thesis supported by a supervisory team that typically consists of two supervisors. The program was revised in 2013, 2014, and 2020. However, it appears that the program has not effectively achieved its intended goal. Evidence regarding graduates' situation in the job market and their struggles to find suitable employment points to several obstacles within the program. Therefore, the aim of this study was to identify the implementation challenges of the research doctorate program from the students' and graduates' perspectives.

Materials and methods

This study was conducted according to the Standards for Reporting Qualitative Research: A Synthesis of Recommendations [ 25 ].

Study design

We applied a qualitative descriptive methodology to achieve an in-depth and rigorous description of the challenges of the research-focused doctorate program and relevant solutions. The study was conducted in two steps. Firstly, the challenges of the Iranian Ph.D. by research program were identified, and in the second step, relevant solutions to these challenges were determined.

Participant and sampling

Participants were selected based on their direct experience and knowledge of the Iranian Ph.D. by Research program. Purposeful sampling was therefore used to select participants, including students and graduates (P) from various fields of the doctoral program (first step). The purposeful sampling was of the maximum-variation type: students were selected from different fields so that field of study would not bias the available data. Information-rich experts were then invited to participate in focus groups to propose solutions to the identified challenges (second step). In this step, experts (E) were selected from among decision makers and policymakers for the doctorate program, medical education experts and researchers, and professors and directors from academic institutions that conducted the program. In the first step, two participants were selected from program records, and further participants were recruited through the snowball sampling technique. The interview guide and informed consent form were sent to potential participants via email; if they agreed, an interview was scheduled.

The inclusion criteria for the first step were students enrolled in a research doctorate program who were at least in their third year of study or had graduated from the program and had signed the informed consent form to participate in the research. The exclusion criteria included students who were below the third year of their study and those who did not wish to participate in the interview. For the second step, the inclusion criteria were decision-makers and policymakers in the doctorate program, medical education experts and researchers, faculty members, and directors from academic institutions who had been involved with the program for at least five years and had also signed the informed consent form to participate in the research. The exclusion criteria were experts who did not want to participate and did not have at least five years of experience with this program.

Data collection

For the first step, data were collected through in-depth interviews with students and graduates (one in-depth interview per participant). Data saturation determined the sample size and the number of interviews. There are various models of saturation in qualitative studies; Saunders et al. identified four main models: data saturation, a priori thematic saturation, theoretical saturation, and inductive thematic saturation [26]. Data saturation refers to the point at which further data collection yields no new data [27, 28]. The interview guide was developed through three pilot interviews, whose transcripts were included in the study analysis. The semi-structured interviews were conducted face-to-face by MHA and ShSh and audio recorded with the participants' permission. The interviews were transcribed verbatim from the audio recordings. The mean length of the interviews was 45 min.
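The data-saturation stopping rule described above (collection ends once further interviews contribute no new data) can be sketched as a simple code tracker. This is an illustrative sketch only, not the authors' actual procedure: the function name, the code labels, and the two-interview stopping window are all assumptions.

```python
# Illustrative sketch of a data-saturation check: stop once `window`
# consecutive interviews contribute no previously unseen codes.
# (Hypothetical helper, not part of the study's actual workflow.)

def saturation_reached(interviews, window=2):
    """interviews: list of sets of codes, one set per interview, in order.
    Returns the 1-based index of the interview at which `window` consecutive
    interviews have added no new codes, or None if saturation is not reached."""
    seen = set()
    no_new_streak = 0
    for i, codes in enumerate(interviews, start=1):
        new_codes = codes - seen   # codes not seen in any earlier interview
        seen |= codes
        no_new_streak = no_new_streak + 1 if not new_codes else 0
        if no_new_streak >= window:
            return i
    return None

# Example with hypothetical code labels: interviews 4 and 5 add nothing new.
interviews = [
    {"supervision", "funding"},
    {"funding", "curriculum"},
    {"admission"},
    {"supervision", "curriculum"},  # no new codes
    {"funding", "admission"},       # no new codes -> saturated
]
print(saturation_reached(interviews))  # -> 5
```

In practice the judgment is qualitative rather than mechanical, but the sketch makes the stopping criterion concrete.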

To address the identified challenges, we conducted semi-structured focus groups with experts. Data saturation was achieved after five focus group sessions, each with an average of five participants. The facilitation team included a discussion facilitator who encouraged participants to engage in conversation with one another, a second facilitator responsible for taking notes and documenting responses and memos, and a third facilitator who guided the focus group through the questions in the interview guide. Data were collected through audio recording and note-taking during the sessions. The average duration of the focus groups was 60 min. The study scripts are provided in Supplementary files 1 & 2.

Data analysis

The transcripts of the recorded in-depth interviews and focus groups, as well as the facilitators' notes, were managed and organized using MAXQDA 20 software. The transcripts of the in-depth interviews with students and graduates were analyzed using conventional content analysis: the transcripts were read word by word and key concepts were highlighted where appropriate. Three researchers independently analyzed the data, and the final codes, categories, and themes were discussed to achieve consensus. The analysis process involved repeatedly reading the transcripts, assigning meaning to each phrase, labeling the meaning units with codes, reviewing the codes, and organizing them into categories based on their similarities. Finally, the main themes were identified by interconnecting the categories.

In the second step, the focus group transcripts were analyzed using directed content analysis: the passages were coded using the primary codes and categories from the first step.
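The code → category → theme hierarchy produced by the analysis described above can be represented as a simple nested mapping. The following is a minimal sketch with entirely hypothetical labels; the real analysis was carried out in MAXQDA, and none of these code labels come from the study.

```python
from collections import defaultdict

# Minimal sketch of organizing coded meaning units into categories and themes.
# All labels here are hypothetical illustrations, not the study's actual codes.

# Each coded segment: (code, category, theme)
coded_segments = [
    ("supervisor preference in selection", "admission bias", "admission criteria"),
    ("prior acquaintance with candidates", "admission bias", "admission criteria"),
    ("ranking-driven admissions", "objectives unrelated to the program",
     "goals and expected outcomes"),
]

# Build a theme -> category -> list-of-codes hierarchy.
themes = defaultdict(lambda: defaultdict(list))
for code, category, theme in coded_segments:
    themes[theme][category].append(code)

# Print the hierarchy as a compact outline.
for theme, categories in themes.items():
    print(theme)
    for category, codes in categories.items():
        print(f"  {category}: {len(codes)} code(s)")
```

The same structure supports the directed analysis of the second step: focus-group passages are simply appended under the pre-existing categories rather than generating new ones.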

Trustworthiness

This study describes the experience of conducting a doctoral program, including its challenges and solutions, and can therefore provide guiding principles to consider when conducting any doctoral program. The credibility of the study was supported by its adherence to the steps of the inductive content analysis method. Confirmability was supported by describing the backgrounds of the researchers, whose varied experience and knowledge allowed the data to be analyzed from different perspectives. Additionally, the researchers verified the participants' responses by transcribing the interviews and sharing the transcripts with them; the interviewees confirmed that the transcripts contained their own words.

Results

Description of participants

In the first step, of the 15 individuals initially contacted, 11 agreed to participate and signed the consent form. Five participants were actively enrolled in the Ph.D. program, and six had already graduated. Three participants self-identified as male (27%) and eight as female (73%). The participants' backgrounds are presented in Table 1. The shortest interview lasted 20 min and the longest 60 min. This phase was conducted from September 21, 2023, to December 10, 2024, at the research centers and the participants' workplaces.

In the second stage, invitation emails were sent to 10 experts, seven of whom agreed to participate. The focus groups were conducted in January 2024 at the National Agency for Strategic Research in Medical Sciences Education.

Description of experts

Seven experts, including the program's decision makers (2 participants, 28.5%), directors (2 participants, 28.5%), and medical education experts (3 participants, 43%), were emailed and recruited to discuss potential solutions to the detected challenges (Table 2). Four experts were male (57%) and three were female (43%). The interview guide comprised four main questions based on the challenges detected in the first step.

The authors concluded that data saturation had been achieved, indicating that additional interviews would not have resulted in new or distinctive findings.

The explored themes were related to: (1) unspecified admission criteria, (2) deviation from defined goals and expected outcomes, (3) ineffective curriculum to achieve program goals, (4) financial and human resources challenges. Detected themes, their classes and sub-classes are presented in Table  3 . As the focus groups were conducted based on the identified challenges in the first step, the solutions were categorized and presented within each theme as subcategories (Table  4 ).

Theme 1: unspecified admission criteria

Our analysis revealed several issues related to admission criteria, namely admission bias and special requirements.

1–1: admission bias

Selection based on the supervisor's preferences emerged in many interviews: "Since the acceptance (at the interview stage) is based on the supervisor's opinion, the interest of the professors will play an important role in this process" (P2). "Most centers choose candidates based on previous acquaintance with students. Personally, I was introduced to several centers based on my selection priorities, and later I found out that in the centers where I was not accepted, the accepted student had already been selected and the professor and student knew each other perfectly" (P4).

1–2: special requirement

Our data illustrate that the specialized requirements of research institutes and candidates' professional and occupational records in the specific field are not considered in the admission process: "In my opinion, it is better to admit candidates who have worked in the healthcare system for some time; they have known the problems of the system, and they can better solve system problems with their research projects" (P6).

1–3: solutions

Adapting admission criteria based on program goals.

Experts emphasized the importance of redefining the criteria for student admissions. In their view, the criteria should be aligned with the institution's mission and defined specifically for the program's goals; students should be selected according to their potential to be a good fit for jobs in their field of expertise.

They reached a consensus that relevant work experience, published research in the field of study, and alignment with the institution's mission are effective criteria for achieving the objectives of the doctoral program. "In fact, it is better that the students' articles be related to the mission of the institution, because it is effective in achieving the objective of conducting applied research and increasing the employability of the students" (E1). "The mission of the institution where the student is going to spend his/her education should be considered when choosing a student" (E2).

Theme 2: deviation from defined goals and expected outcomes

This theme includes two classes: (1) objectives unrelated to the program and (2) implementation barriers.

2–1: objectives unrelated to the program

This class includes two subclasses: (1) increasing the rank of the center, and (2) employment of graduates.

Candidates and graduates described how the goals and expected outcomes shifted because the centers pursue objectives unrelated to those of the program: "Many research centers accept Ph.D. students because they only want to increase the rank of the center in the ranking systems, by implementing research projects that are not priorities of the health system" (P2). "The goal of this initiative is to facilitate the employment of graduates in the job market, rather than solely focusing on training a few research doctoral students" (P7).

2–2: implementation barriers

This class concerns the provision of working opportunities, an important goal of the program that has not been reached because of various implementation barriers. Participants acknowledged that the defined purposes and outcomes were not achieved: "No thought for recruitment after graduation. The decision makers should have thought about the working opportunities of the graduates from the beginning" (P5).

2–3: solutions

Clarifying students’ future duties and expectations during admission.

To increase commitment and adherence to the objectives of the institution and the field of study, it is important that participants be aware of the program's goals, their duties, and the expectations placed on them during and after the program. "At the beginning, we must clarify for the student what we want from her/him during the education; many times neither the student knows what we want from her nor we ourselves" (E4).

Creating a robust control and evaluation system

Institutions should be continually monitored and evaluated regarding their adherence to the program goals. This requires creating a monitoring and evaluation system and defining indicators of successful performance for inputs, processes, and outputs. "Research centers should admit students in a purposeful manner and their performance should be continuously evaluated and monitored by the Ministry of Health and Medical Education" (E5).

Theme 3: ineffective curriculum to achieve program goals

This theme captured specific ideas and recommendations for the curriculum and includes two classes: (1) inefficient courses, (2) lack of priority setting.

3–1: inefficient courses

Non-applicable courses emerged in this class. According to the results, the training methods and course materials are neither up to date nor based on current issues in the relevant fields of study: "The lessons were not useful at all. We didn't learn anything new in the general courses, which should have taught us about research, statistics, and epidemiology" (P5).

3–2: lack of priority setting

Participants reported that lessons were irrelevant to their fields' priorities. Furthermore, thesis topics and the research institutes' priorities are inconsistent: "At least some theoretical courses should be customized for the scientific field of the student. All students pass shared courses in all research centers with different fields of activity" (P1).

Notably, most students suggested that the curriculum should be revised according to the candidates' learning needs, current issues, and the competencies required in their future jobs.

3–3: solutions

Aligning curricula with program goals and structure.

Experts stated that the program structure and course curricula should be adjusted to the fields of study. "Conducting need-based applied research requires students to have relevant professional skills and knowledge in their field of study" (E3).

In addition, they believed that the program contents need to be revised based on the program objectives. "Currently, all students in different research centers study the same courses, while the needs of each center and field must be identified first, and then courses based on them should be defined" (E6).

Theme 4: financial and human resources challenges

This theme consisted of two classes, (1) human resources problems and (2) financial issues.

4–1: human resources problems

Faculty members are not able to prepare students for the job market or for conducting need-based research. According to the candidates, this may be due to the shortage of faculty members in the educational system and their high workload. "Supervisors need to dedicate more time to their students, but they are primarily focused on administrative tasks" (P1). In addition, faculty members have a poor understanding of the program, lack sufficient practical experience in their field of expertise, and restrict candidates' freedom of action. "My supervisor did not have any learning program or research idea" (P5). "The supervisors turn the student into a task-fulfilling machine, and the student has no authority in any of the academic fields, including the courses and even the title of the thesis, and only says yes, sir!" (P7). Many respondents mentioned unprepared faculty members as a challenge of the program: "The professors themselves have not been well briefed about the program, and it seems that the professors are still not aware of the requirements of the Ph.D. by Research program" (P3).

4–2: financial resources problems

Another aspect concerns financial resources. Lack of financial support and failure to provide timely funding were defined as two subclasses. Students described their financial problems: "Don't talk about financial support! As much as the university gave a grant, I also spent additional money on the thesis!" (P5). "Due to the high cost of the thesis, the payments were not made on time" (P9). Students also noted the importance of timely funding for completing their applied research: "The professor admitted the student, then applied for a grant or research budget. It's very late!" (P5).

4–3: solutions

Providing an additional supervisor with relevant practical experience

Another important aspect of achieving the objective of conducting need-based applied research is ensuring that supervisors possess relevant practical experience and knowledge in the field of study. Participants suggested that this can be accomplished through collaboration between relevant academic institutions, health service providers, and product provision institutions when introducing supervisors. “One important aspect to take into account in this program is the utilization of faculty members who have expertise in research and possess relevant teaching skills. (E4)”.

Clarity of duties and performance criteria

The shortage of faculty members and their high workload necessitate standardizing and documenting their duties and clearly defining expectations. “It is important to distribute students to supervisors based on their workload, such as assigning fewer students to professors with administrative responsibilities. (E5)”.

Sustaining financial resources

Diversifying financial resources through collaboration with relevant public or private academic, health service, and product provision institutions was the main recommendation of experts to provide sustainable funding for the doctorate program. “ Faculty members should try to obtain national and international research grants such as World Health Organization grants (E7)”.

Discussion

This study aimed to identify the implementation challenges of the research doctorate program, and relevant solutions, in the context of a low-middle income country from the perspectives of its beneficiaries, including students, graduates, and key informants.

Based on the analysis of semi-structured interviews, four challenges were identified: unspecified admission criteria, deviation from defined goals and expected outcomes, an ineffective curriculum for achieving program goals, and financing and human resources.

Challenge 1: unspecified admission criteria

As Burford noted, the doctoral admissions process is a subject of intense global discussion [ 29 ], and a wide range of admission criteria has been observed in doctoral programs, encompassing aspects such as academic preparation, potential, attitudes, and competences [ 30 ]. Meanwhile, admission involves evaluative processes that are frequently unclear to those outside the system but considered routine by those within; in this regard, professors play an important role as gatekeepers of the profession [ 31 ]. According to our findings, selection among applicants was based on supervisors’ preferences and previous acquaintance with applicants, which decreased the quality of the research doctorate program. In addition, participants reported a lack of transparency in the terms and conditions for entering the program. These criteria should be clearly defined during the student recruitment process [ 32 , 33 ]. Admission criteria for research doctorate programs should therefore be adjusted to ensure the admission of students with the necessary ability, motivation, and commitment to conduct problem-based research. It is also essential to consider diversity (geographical, racial, and ethnic) within the admitted cohorts.

In addition, relevant work experience in the specialized field facilitates conducting applied research and enables teaching the course on a part-time basis; it also helps guarantee the employability of graduates for related jobs [ 34 ].

Challenge 2: deviation from defined goals and expected outcomes

This issue emerged as the second challenge of the program. In Iran, the goal of establishing a research doctorate program is to maximize the benefits for stakeholders and beneficiaries, including individuals, groups, parties, and institutions. Meanwhile, students and graduates of the program face challenges because they are not trained according to the needs of research institutes. Additionally, they struggle to find suitable job positions and encounter problems with academic-family integration, which is consistent with Rockinson-Szapkiw’s findings [ 35 ]. If this situation continues, it can demotivate the beneficiaries of the research doctorate program, including professors and students, so urgent reforms should be implemented. Other researchers have also addressed this issue [ 8 , 36 , 37 ]. It is necessary to identify potential success metrics for the doctoral program, collect information on the results of each metric, and standardize them based on the reports provided by various higher education institutions [ 16 ].

Challenge 3: ineffective curriculum to achieve program goals

According to the results, students and graduates of the research doctorate program in Iran study and work under ambiguous and ineffective conditions. These results are in line with the studies by Anderson et al. [ 38 ], Keshmiri et al. [ 39 ], and Shin et al. [ 40 ], but with differences in Iran. The main difference is that Iranian research doctorate curricula do not include special skills such as commercialization or other market-oriented skills, and there are no differences in design or delivery between research-oriented and education-oriented curricula. Additionally, a significant aspect of the program is based on research: the program trains professional experts who are also researchers and, unlike the education-based doctorate, its goal is not to train researchers in a specific specialty. The various countries analyzed in this research follow two approaches: (1) offering professional doctorate programs to managers, senior employees, and individuals with extensive experience, or (2) mandating a master’s degree, relevant work experience, and a concurrent affiliation with the relevant work environment [ 6 ].

As a result, the curriculum should primarily focus on new scientific topics, expanding current fields of knowledge, and the emergence of new fields that are influenced by economic, cultural, and technological conditions, as well as the provision of healthcare services and policies [ 41 ].

Challenge 4: financing and human resources

In relation to this problem, participants mentioned that they had various roles and responsibilities beyond those of a doctoral student, indicating that they are “more than just a doctoral student.”

They also expressed dissatisfaction with the low quality of student guidance programs and described mentorship as below average. In various countries, the standards of doctoral programs in medical sciences regarding mentoring activities have been reviewed and presented in a consolidated format [ 42 ]. In this regard, the following principles are recommended: (1) establish quality standards for student guidance activities; (2) create a guideline that supervisors and students can follow. Professors and students should be aware of the standards of student guidance activities. Additionally, providing incentives can enhance the productivity of the supervisor-student relationship.

Students and candidates noted that their supervisors are busy and do not spend enough time on their duties as a supervisor. To address this issue, the following solutions are recommended based on expert feedback: (1) Establishing internal and external collaborations among various specialties and institutions, (2) Taking into account the professors’ workloads, (3) Sharing responsibilities and fostering participation, and (4) Providing flexibility in selecting supervisors.

Based on the study by Meuleners et al., assigning a single supervisor is usually not favourable for students; instead, the use of several supervisors/mentors or a supervision team is recommended [ 6 ]. In this arrangement, it is possible to develop efficient projects based on the up-to-date needs of society. In Iran, although this possibility exists, the shortage of professors and various problems and challenges within academic groups prevent it. In the research doctorate program, each student needs one or more senior researchers to guide, help, and support them in developing their research skills. In fact, the vital role of authentic mentorship is to guide doctoral students through designing their career development plans, assisting in overcoming challenges in doctoral studies, and facilitating professional networking. This can lead to significant job opportunities not only during the doctoral program but also after graduation [ 43 ].

Financial resources also play a crucial role in the success of doctoral programs [ 15 ]. Based on our results, limited financial resources for research doctorate education were another challenge. It is therefore recommended to develop a strategy for securing the resources the faculty requires. Partnership is an effective way to maximize resources through collaboration: it is the process of working with other institutions and individuals to achieve shared goals, with partners sharing the same risks and benefits. The use of private financing programs can also encourage initiatives in specialized doctoral education.

Based on our findings, it seems that Iran, like East Asian countries, has used a hybrid of the USA and European models in designing its research doctorate programs, emphasizing both supervision and coursework components. This system reduces creativity through excessive supervision of students’ activities and its emphasis on passing certain courses, thereby limiting opportunities for defining problem-oriented projects. These conditions could be altered by transitioning to the European system and thoroughly evaluating the goals and anticipated results. Therefore, based on the results of this study, it is suggested to develop a competency-based curriculum or to reform the current program in order to solve its current problems. Future research should examine the practicality and effectiveness of the policy options proposed in the present study and prioritize them in terms of efficacy and effectiveness.

This study acknowledges a potential limitation in the alignment of proposed solutions with the actual challenges faced by students. While solutions are derived from experts’ interpretations of student-reported problems, there may be an inadvertent overlap of differing rationalities. This suggests a need for a more nuanced explanation of the contrasting perspectives between students and experts in the analysis. By analyzing the challenges raised by the students, the solutions proposed by experts, and reviewing similar studies in the discussion section, we aimed to elucidate this difference of opinion for the readers of the article.

This study proposes evidence-based solutions for a research doctorate program tailored to the specific context of Iran’s medical education system. Since the majority of research on doctoral programs is grounded in Western perspectives on students, faculty, resources, and cultural contexts, this study has the potential to offer valuable insights and fresh perspectives.

The proposed framework is based on the outcome-based curriculum approach, which focuses on the essential competencies that students should achieve by the end of the program. The solutions consist of four main themes: admission criteria, goals and outcomes, curriculum, and resources, which aim to develop the technical and practical competencies of the students and graduates.

Research doctorate program graduates can play a vital role in improving the quality and performance of healthcare services by pursuing various career pathways and job categories that align with their skills and qualifications. However, to achieve this, they need to be supported by the MoHME, which should review and update the curriculum according to the program goals and international best practices. Additionally, redefining admission criteria, clarifying future duties, managing human and financial resources, and providing effective mentoring are essential. Moreover, graduates of research doctorate programs should collaborate with other health professionals, policymakers, and stakeholders to promote inter-professional collaboration and enhance integrated health system improvement.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

MoHME: Ministry of Health and Medical Education

PhD: Doctor of Philosophy

Sonesson A, Stenson L, Edgren G. Research and education form competing activity systems in externally funded doctoral education. Nordic J Stud Educational Policy. 2023:1–18.

Nerad M, Heggelund M. Toward a global PhD? Forces and forms in doctoral education worldwide. University of Washington; 2011.

Diogo S, Gonçalves A, Cardoso S, Carvalho T. Tales of doctoral students: motivations and expectations on the route to the unknown. Educ Sci. 2022;12(4):286.


Zhuchkova S, Bekova S. Building a strong foundation: how pre-doctorate experience shapes doctoral student outcomes. PLoS ONE. 2023;18(9):e0291448.

Cardoso S, Santos S, Diogo S, Soares D, Carvalho T. The transformation of doctoral education: a systematic literature review. High Educ. 2022;84(4):885–908.

Meuleners JS, Boone WJ, Fischer MR, Neuhaus BJ, Eberle J, editors. Evaluation of structured doctoral training programs in German life sciences: how much do such programs address hurdles faced by doctoral candidates? Frontiers in Education. Frontiers; 2023.

Weaver TE, Lott S, McMullen P, Leaver CA, Zangaro G, Rosseter R. Research focused doctoral nursing education in the 21st century: curriculum, evaluation, and postdoctoral considerations. J Prof Nurs. 2023;44:38–53.

Craig W, Khan W, Rambharose S, Stassen W. The views and experiences of candidates and graduates from a South African emergency medicine doctoral programme. Afr J Emerg Med. 2023;13(2):78–85.

American Association of Colleges of Nursing. The research-focused doctoral program in nursing: pathways to excellence. 2020.

Gill TG, Hoppe U. The business professional doctorate as an informing channel: a survey and analysis. Int J Doctoral Stud. 2009;4(1):27–57.

Lee H-f, Miozzo M. How does working on university–industry collaborative projects affect science and engineering doctorates’ careers? Evidence from a UK research-based university. J Technol Transf. 2015;40:293–317.

Bienkowska D, Klofsten M. Creating entrepreneurial networks: academic entrepreneurship, mobility and collaboration during PhD education. High Educ. 2012;64:207–22.

Kitagawa F. Industrial doctorates: employer engagement in research and skills formation: LLAKES centre, Institute of Education; 2011.

Kehm BM. Doctoral education in Europe and North America: a comparative analysis. Wenner Gren Int Ser. 2006;83:67.


Corcelles M, Cano M, Liesa E, González-Ocampo G, Castelló M. Positive and negative experiences related to doctoral study conditions. High Educ Res Dev. 2019;38(5):922–39.

Prendergast A, Usher R, Hunt E. A constant juggling act—the daily life experiences and well-being of doctoral students. Educ Sci. 2023;13(9):916.

Pyhältö K, Toom A, Stubb J, Lonka K. Challenges of becoming a scholar: a study of doctoral students’ problems and well-being. International Scholarly Research Notices. 2012;2012.

Cross M, Backhouse J. Evaluating doctoral programmes in Africa: Context and practices. High Educ Policy. 2014;27:155–74.

Ostriker JP, Kuh CV, Voytuk JA. A data-based assessment of research-doctorate programs in the United States. National Academies; 2011.

Voytuk JA, Kuh CV, Ostriker JP, Council NR. Assessing research-doctorate programs: a methodology study. 2003.

Fairman JA, Giordano NA, McCauley K, Villarruel A. Invitational summit: re-envisioning research focused PHD programs of the future. J Prof Nurs. 2021;37(1):221–7.

Tyndall DE, Firnhaber GC, Kistler KB. An integrative review of threshold concepts in doctoral education: implications for PhD nursing programs. Nurse Educ Today. 2021;99:104786.

Leniston N, Coughlan J, Cusack T, Mountford N, editors. A practice perspective on doctoral education–employer, policy, and industry views. Proceedings of the International Conference on Education and New Developments; 2022.

Ph.D by Research Program (2020).

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52:1893–907.

Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.

Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Burford J, Kier-Byfield S, Dangeni, Henderson EF, Akkad A. Pre-application doctoral communications: a missing dimension in research on doctoral admissions. Educational Rev. 2024:1–22.

Dobrowolska B, Chruściel P, Pilewska-Kozak A, Mianowana V, Monist M, Palese A. Doctoral programmes in the nursing discipline: a scoping review. BMC Nurs. 2021;20(1):1–24.

Posselt JR. Toward inclusive excellence in graduate education: constructing merit and diversity in PhD admissions. Am J Educ. 2014;120(4):481–514.

Doody S. Interdisciplinary writing should be simple, but it isn’t: a study of meta-genres in interdisciplinary life sciences doctoral programs. McGill University (Canada); 2020.

Cutri J. The third space: fostering intercultural communicative competence within doctoral education. Wellbeing in doctoral education: Insights and guidance from the student experience. 2019:265–79.

Fulton J, Kuit J, Sanders G, Smith P. The role of the professional doctorate in developing professional practice. J Nurs Adm Manag. 2012;20(1):130–9.

Rockinson-Szapkiw A. Toward understanding factors salient to doctoral students’ persistence: the development and preliminary validation of the doctoral academic-family integration inventory. Int J Doctoral Stud. 2019;14:237.

Horta H, Li H, Chan SJ. Why do students pursue a doctorate in the era of the ‘PhD crisis’? Evidence from Taiwan. High Educ Q. 2023.

Terentev E, Bekova S, Maloshonok N. Three challenges to Russian system of doctoral education: why only one out of ten doctoral students defends thesis? Int J Chin Educ. 2021;10(1):22125868211007016.

Anderson V, Gold J. The value of the research doctorate: a conceptual examination. Int J Manage Educ. 2019;17(3):100305.

Keshmiri F, Gandomkar R, Hejri SM, Mohammadi E, Mirzazadeh A. Developing a competency framework for health professions education at doctoral level: the first step toward a competency based education. Med Teach. 2019;41(11):1298–306.

Shin JC, Kim SJ, Kim E, Lim H. Doctoral students’ satisfaction in a research-focused Korean university: socio-environmental and motivational factors. Asia Pac Educ Rev. 2018;19:159–68.

Lusk MD, Marzilli C. Innovation with strengths: a collaborative approach to PhD/DNP integration in doctoral education. Nurs Educ Perspect. 2018;39(5):327–8.

Johnson O, Marus E, Adyanga AF, Ayiga N. The experiences and challenges of doctoral education in public universities compared. J Social Humanity Educ. 2023;3(3):237–52.

Al Makhamreh M, Stockley D. Mentorship and well-being: examining doctoral students’ lived experiences in doctoral supervision context. Int J Mentor Coaching Educ. 2020;9(1):1–20.


Acknowledgements

The authors extend their appreciation to the National Agency for Strategic Research in Medical Sciences Education (NASR) for funding this research work.

This project was funded by the National Agency for Strategic Research in Medical Sciences Education (NASR), Tehran, Iran (Grant No. 4020154).

Author information

Authors and Affiliations

Occupational Health and Safety Department, Health Faculty, Qom University of Medical Sciences, Qom, Iran, Islamic Republic of

Alireza Koohpaei

National Agency for Strategic Research in Medical Sciences Education, Ministry of Health and Medical Education, Tehran, Iran, Islamic Republic of

Maryam Hoseini Abardeh & Majid Heydari

Critical Care Quality Improvement Research Center, Shahid Modarres Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Shahnaz Sharifi

Education Development Center, Iran University of Medical Sciences, Tehran, Iran, Islamic Republic of

Zeynab Foroughi


Contributions

A.K. conceived the study and contributed to the study design, data analysis, drafting, and finalizing of the paper. Z.F. and M.H.A. contributed to the data analysis and drafted the paper. Sh.Sh. contributed to data gathering and data entry. M.H.A., A.K., and Z.F. contributed to the study design, interpretation of data, and intellectual development of the manuscript, and critically reviewed the manuscript. M.H. contributed to the writing, critical review, and editing of the manuscript. All authors read and approved the final version of the paper.

Corresponding authors

Correspondence to Maryam Hoseini Abardeh or Zeynab Foroughi .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Ethics Committee of the NASR (No: IR.NASRME.REC.1402.073). Transcripts and their related analyses were anonymized to ensure confidentiality. We first explained the study objectives to the interviewees in detail. The interview guide and focus group questions were sent to prospective participants, and their informed consent for participation was obtained prior to their involvement. Because the research presented no risk of harm to interviewees, we obtained verbal consent from the participants, as approved by the ethics committee. Consent was audio recorded, and we guaranteed interviewees the privacy, confidentiality, and anonymity of any information they provided. Interviewees participated voluntarily and were given the right to opt out of the interview at any time.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Koohpaei, A., Abardeh, M.H., Sharifi, S. et al. Investigating the implementation challenges of the research doctoral program and providing related solutions: a qualitative study. BMC Med Educ 24 , 878 (2024). https://doi.org/10.1186/s12909-024-05815-2


Received : 01 March 2024

Accepted : 24 July 2024

Published : 14 August 2024

DOI : https://doi.org/10.1186/s12909-024-05815-2


Keywords

  • Research doctorate
  • Doctoral program
  • Policymaking
  • Medical education

BMC Medical Education

ISSN: 1472-6920


  • Open access
  • Published: 07 August 2024

Factors critical for the successful delivery of telehealth to rural populations: a descriptive qualitative study

  • Rebecca Barry   ORCID: orcid.org/0000-0003-2272-4694 1 ,
  • Elyce Green   ORCID: orcid.org/0000-0002-7291-6419 1 ,
  • Kristy Robson   ORCID: orcid.org/0000-0002-8046-7940 1 &
  • Melissa Nott   ORCID: orcid.org/0000-0001-7088-5826 1  

BMC Health Services Research volume  24 , Article number:  908 ( 2024 ) Cite this article


Background

The use of telehealth has proliferated to the point of being a common and accepted method of healthcare service delivery. Due to the rapidity of telehealth implementation, the evidence underpinning this approach to healthcare delivery is lagging, particularly when considering the uniqueness of some service users, such as those in rural areas. This research aimed to address the current gap in knowledge related to the factors critical for the successful delivery of telehealth to rural populations.

Methods

This research used a qualitative descriptive design to explore telehealth service provision in rural areas from the perspective of clinicians and describe factors critical to the effective delivery of telehealth in rural contexts. Semi-structured interviews were conducted with clinicians from allied health and nursing backgrounds working in child and family nursing, allied health services, and mental health services. A manifest content analysis was undertaken using the Framework approach.

Results

Sixteen health professionals from nursing, clinical psychology, and social work were interviewed. Participants mostly identified as female (88%) and ranged in age from 26 to 65 years with a mean age of 47 years. Three overarching themes were identified: (1) Navigating the role of telehealth to support rural healthcare; (2) Preparing clinicians to engage in telehealth service delivery; and (3) Appreciating the complexities of telehealth implementation across services and environments.

Conclusions

This research suggests that successful delivery of telehealth to rural populations requires consideration of the context in which telehealth services are delivered, particularly in rural and remote communities where there are challenges with resourcing and training to support health professionals. Rural populations, like all communities, need choice in healthcare service delivery and models to increase accessibility. Preparation and specific, intentional training for health professionals on how to transition to and maintain telehealth services is a critical factor in delivering telehealth to rural populations. Future research should further investigate the training and supports required for telehealth service provision, including who should be trained, when, and what training content will equip health professionals with the appropriate skill set to deliver rural telehealth services.

Peer Review reports

Introduction

Telehealth is a commonly utilised application in rural health settings due to its ability to augment service delivery across wide geographical areas. During the COVID-19 pandemic, the use of telehealth became prolific as it was rapidly adopted across many new fields of practice to allow for healthcare to continue despite requirements for physical distancing. In Australia, the Medicare Benefits Scheme (MBS) lists health services that are subsidised by the federal government. Telehealth items were extensively added to these services as part of the response to COVID-19 [ 1 ]. Although there are no longer requirements for physical distancing in Australia, many health providers have continued to offer services via telehealth, particularly in rural areas [ 2 , 3 ]. For the purpose of this research, telehealth was defined as a consultation with a healthcare provider by phone or video call [ 4 ]. Telehealth service provision in rural areas requires consideration of contextual factors such as access to reliable internet, community members’ means to finance this access [ 5 ], and the requirement for health professionals to function across a broad range of specialty skills. These factors present a case for considering the delivery of telehealth in rural areas as a unique approach, rather than one portion of the broader use of telehealth.

Research focused on rural telehealth has proliferated alongside the rapid implementation of this service mode. To date, there has been a focus on the impact of telehealth on areas such as client access and outcomes [ 2 ], client and health professional satisfaction with services and technology [ 6 ], direct and indirect costs to the patient (travel cost and time), healthcare service provider staffing, lower onsite healthcare resource utilisation, improved physician recruitment and retention, and improved client access to care and education [ 7 , 8 ]. In terms of service implementation, these elements are important but do not outline the broader implementation factors critical to the success of telehealth delivery in rural areas. One study by Sutarsa et al. explored the implications of telehealth as a replacement for face-to-face services from the perspectives of general practitioners and clients [ 9 ] and articulated that telehealth services are not a like-for-like service compared to face-to-face modes. Research has also highlighted the importance of understanding the experience of telehealth in rural Australia across different population groups, including Aboriginal and Torres Strait Islander peoples, and the need to consider culturally appropriate services [ 10 , 11 , 12 , 13 ].

Research is now required to determine what the critical implementation factors are for telehealth delivery in rural areas. This type of research would move towards answering calls for interdisciplinary, qualitative, place-based research [ 12 ] that explores factors required for the sustainability and usability of telehealth in rural areas. It would also contribute to the currently limited understanding of implementation factors required for telehealth delivery to rural populations [ 14 ]. There is a reasonable expectation that there is consistency in the way health services are delivered, particularly across geographical locations. Due to the rapid implementation of telehealth services, there was limited opportunity to proactively identify factors critical for successful telehealth delivery in rural areas and this has created a lag in policy, process, and training. This research aimed to address this gap in the literature by exploring and describing rural health professionals’ experiences providing telehealth services. For the purpose of this research, rural is inclusive of locations classified as rural or remote (MM3-6) using the Modified Monash Model which considers remoteness and population size in its categorisation [ 15 ].

Methods

This research study adopted a qualitative descriptive design as described by Sandelowski [ 16 ]. The purpose of a descriptive study is to document and describe a phenomenon of interest [ 17 ], and this method is useful when researchers seek to understand who was involved, what occurred, and the location of the phenomena of interest [ 18 ]. The phenomenon of interest for this research was the provision of telehealth services to rural communities by health professionals. In line with this, a purposive sampling technique was used to identify participants with experience of this phenomenon [ 19 ]. This research is reported in line with the consolidated criteria for reporting qualitative research [ 20 ] to enhance the transparency and trustworthiness of the research process and results [ 21 ].

Research aims

This research aimed to:

Explore telehealth service provision in rural areas from the perspective of clinicians.

Describe factors critical to the successful delivery of telehealth in rural contexts.

Participant recruitment and data collection

People eligible to participate in the research were allied health (using the definition provided by Allied Health Professions Australia [ 22 ]) or nursing staff who delivered telehealth services to people living in the geographical area covered by two rural local health districts in New South Wales, Australia (encompassing rural areas MM3-6). Health organisations providing telehealth service delivery in the southwestern and central western regions of New South Wales were identified through the research team’s networks and invited to be part of the research.

Telehealth adoption in these organisations was intentionally variable to capture different experiences and ranged from newly established (prompted by COVID-19) to well established (> 10 years of telehealth use). Organisations included government, non-government, and not-for-profit health service providers offering child and family nursing, allied health services, and mental health services. Child and family nursing services were delivered by a government health service and a not-for-profit specialist service, providing health professional advice, education, and guidance to families with a baby or toddler. Child and family nurses were in the same geographical region as the families receiving telehealth; their transition to telehealth services was prompted by the COVID-19 pandemic. The participating allied health service was a large, non-government provider of allied health services to regional New South Wales. Allied health professionals were in the same region as the client receiving telehealth services, and this organisation had commenced using telehealth prior to the COVID-19 pandemic. Telehealth mental health services were delivered by an emergency mental health team, located at a large regional hospital, to clients in another healthcare facility or location at which the health professional could not be physically present (typically a lower acuity health service in a rural location).

Once organisations agreed to disseminate the research invitation, a key contact person employed at each health organisation invited staff to participate via email. Staff were provided with the contact details of the research team in the email invitation. All recruitment and consent processes were managed by the research team to minimise the risk of real or perceived coercion between staff and the key contact person, who was often in a supervisory or managerial position within the organisation. Data were collected through semi-structured interviews held on an online platform with only the interviewer and participant present. Interviews were conducted during November and December 2021 by a research team member with training in qualitative data collection and were transcribed verbatim by a professional transcribing service. All participants were offered the opportunity to review their transcript and provide feedback; however, none opted to do so. Data saturation was not used as guidance for participant numbers, taking the view of Braun and Clarke [ 23 ] that meaning is generated through the analysis rather than by reaching a point of saturation.

Data analysis

Researchers undertook a manifest content analysis of the data using the Framework approach developed by Ritchie and Spencer [ 24 ]. All four co-authors were involved in the data analysis process. Framework uses five stages of analysis: (1) familiarisation, (2) identifying a thematic framework based on emergent overarching themes, (3) applying the coding framework to the interview transcripts [indexing], (4) reviewing and charting themes and subthemes, and (5) mapping and interpretation [ 24 , p. 178]. The research team initially analysed a common interview together, identified codes and themes, and then independently applied these to the remaining interviews. Themes were centrally recorded, reviewed, and discussed by the research team prior to inclusion in the thematic framework. Final themes were confirmed through collaborative discussion and consensus. The iterative process used to review and code data was recorded in an Excel spreadsheet to ensure auditability and credibility, and to enhance the trustworthiness of the analysis process.
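The indexing and charting stages of the Framework approach amount to building a matrix of participants against themes, where each cell summarises the coded material for one participant under one theme. The study recorded this in an Excel spreadsheet; purely as an illustration, a minimal sketch of such a charting matrix is shown below, with invented participant labels, theme names, and summaries (none are taken from the study data):

```python
# Minimal, hypothetical sketch of Framework-style charting (stage 4):
# rows are participants, columns are themes, cells hold indexed summaries.
from collections import defaultdict

# Invented indexed data: (participant, theme, summary of coded text)
indexed = [
    ("P01", "Role of telehealth", "improved access, less travel"),
    ("P01", "Clinician preparation", "no training at rollout"),
    ("P02", "Role of telehealth", "prefers face-to-face assessment"),
    ("P02", "Implementation complexity", "poor connectivity at client end"),
]

# Build the charting matrix: participant -> theme -> list of summaries
matrix = defaultdict(lambda: defaultdict(list))
for participant, theme, summary in indexed:
    matrix[participant][theme].append(summary)

# Each cell can then be reviewed per participant and theme before the
# final mapping and interpretation stage.
for participant in sorted(matrix):
    for theme, summaries in matrix[participant].items():
        print(f"{participant} | {theme}: {'; '.join(summaries)}")
```

This structure makes it straightforward to read across a row (one participant’s account over all themes) or down a column (all participants’ accounts of one theme), which is the comparison the mapping and interpretation stage relies on.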

This study was approved by the Greater Western NSW Human Research Ethics Committee and Charles Sturt University Human Research Ethics Committee (approval numbers: 2021/ETH00088 and H21215). All participants provided written consent.

Eighteen health professionals consented to be interviewed. Two were lost to follow-up; therefore, semi-structured interviews were conducted with 16 health professionals, the majority of whom were from the discipline of nursing (n = 13, 81.3%). Participant demographics and their pseudonyms are shown in Table 1.

Participants mostly identified as female (n = 14, 88%) and ranged in age from 26 to 65 years, with a mean age of 47 years. All participants delivered services to rural communities in the identified local health districts and resided within the geographical area they serviced. Participants resided in areas classified as MM3-6, most commonly MM3 (81%). The average interview time was 38 min, and all interviews were conducted online via Zoom.

Three overarching themes were identified through the analysis of interview transcripts with health professionals. These themes were: (1) Navigating the role of telehealth to support rural healthcare; (2) Preparing clinicians to engage in telehealth service delivery; and (3) Appreciating the complexities of telehealth implementation across services and environments.

Theme 1: navigating the role of telehealth to support rural healthcare

The first theme described clinicians’ experiences of using telehealth to deliver healthcare to rural communities, including perceived benefits and challenges to acceptance, choice, and access. Interview participants identified several factors that influenced the way they could deliver telehealth, and these were common across the different organisational structures. Clinicians highlighted the need to consider how to effectively navigate the role of telehealth in supporting their practice, including when it would enhance their practice and when it might create barriers. The ability to improve rural service provision through greater access was commonly discussed by participants. In terms of factors important for telehealth delivery in rural contexts, the participants demonstrated that knowledge of why and how telehealth was used was important, including the broadened opportunity for healthcare access and an understanding of the benefits and challenges of providing these services.

Access to timely and specialist healthcare for rural communities

Participants described a range of benefits of using telehealth to contact small, rural locations and facilitate greater access to services closer to home. This was particularly evident when there was a lack of specialist support in these areas. These opportunities meant that rural people could receive the timely care they required without the burden of travelling significant distances to access health services.

The obvious thing in an area like this, is that years ago, people were being transported three hours just to see us face to face. It’s obviously giving better, more timely access to services. (Patrick)

Participants saw staff access to specialist support as an important aspect of rural healthcare, because the lack of staffing and resources in these areas potentially increased the risks for local staff, particularly when managing clients with acute mental illnesses.

Within the metro areas they’ve got so many staff and so many hospitals and they can manage mental health patients quite well within those facilities, but with us some of these hospitals will have one RN on overnight and it’s just crappy for them, and so having us able to do video link, it kind of takes the pressure off and we’re happy to make the decisions and the risky decisions for what that person needs. (Tracey)

Participants described how using telehealth to provide specialised knowledge and expertise to support local health staff in rural hospitals likely led to more appropriate outcomes for clients who wanted to remain in their community. Conversely, Amber described the implications if telehealth was not available.

If there was some reason why the telehealth wasn’t available… quite often, I suppose the general process be down to putting the pressure on the nursing and the medical staff there to make a decision around that person, which is not a fair or appropriate thing for them to do. (Amber)

Benefits and challenges to providing telehealth in rural communities

Complementing the advantage of reduced travel time was the ability for clients to access additional support via telehealth, which was perceived as a benefit. For example, one participant described how telehealth was useful for troubleshooting clients’ problems rather than waiting for their next scheduled appointment.

If a mum rings you with an issue, you can always say to them “are you happy to jump onto My Virtual Care with me now?” We can do that, do a consult over My Virtual Care. Then I can actually gauge how mum is. (Jade)

While accessibility was a benefit, participants highlighted that rural communities need to be provided with choice, rather than an assumption that telehealth is the preferred option for everyone, as many rural clients want face-to-face services.

They’d all prefer, I think, to be able to see someone in person. I think that’s generally what NSW rural [want] —’cause I’m from country towns as well—there’s no substitute, like I said, for face-to-face assessment. (Adam)

Other, more practical limitations of broad adoption of telehealth raised by the participants included issues with managing technology and variability in internet connectivity.

For many people in the rural areas, it’s still an issue having that regular [internet] connection that works all the time. I think it’s a great option but I still think it’s something that some rural people will always have some challenges with because it’s not—there’s so many black spots and so many issues still with the internet connection in rural areas. Even in town, there’s certain areas that are still having lots of problems. (Chloe)

Participants also identified barriers related to assumptions that all clients would have access to technology and the data needed to undertake a telehealth consultation, which was not always the case, particularly for individuals experiencing socioeconomic disadvantage.

A lot of [Aboriginal] families don’t actually have access to telehealth services. Unless they use their phone. If they have the technology on their phones. I found that was a little bit of an issue to try and help those particular clients to get access to the internet, to have enough data on their phone to make that call. There was a lot of issues and a lot of things that we were putting in complaints about as they were going “we’re using up a lot of these peoples’ data and they don’t have internet in their home.” (Evelyn)

Other challenges identified by the participants related to the use of telehealth for clients who required additional support. Many participants talked about the complexities of using an interpreter during a telehealth consultation for culturally and linguistically diverse clients.

Having interpreters, that’s another element that’s really, really difficult because you’re doing video link, but then you’ve also got the phone on speaker and you’re having this three-way conversation. Even that, in itself, that added element on video link is really, really tough. It’s a really long process. (Tracey)

In summary, this theme described some of the benefits and constraints when using telehealth for the delivery of rural health services. The participants demonstrated the importance of understanding the needs and contexts of individual clients, and accounting for this when making decisions to incorporate telehealth into their service provision. Understanding how and why telehealth can be implemented in rural contexts was an important foundation for the delivery of these services.

Theme 2: preparing clinicians to engage in telehealth service delivery

Participants highlighted the preparation required for clinicians to engage in telehealth service delivery and described the unique set of skills needed to effectively build rapport, engage, and carry out assessments with clients. For many participants who had not routinely used telehealth prior to the COVID-19 pandemic, the transition to telehealth had been rapid. The participants reflected on the implications of rapidly adopting these new practices and the skills they required to effectively deliver care using telehealth. These skills were critical for the effective delivery of telehealth to rural communities.

Rapid adoption of new skills and ways of working

The rapid and often unsupported implementation of telehealth in response to the COVID-19 pandemic meant clinicians needed to learn and adapt to telehealth, often with minimal or no instruction.

We had to do virtual, virtually overnight we were changed to, “Here you go. Do it this way,” without any real education. It was learned as we went because everybody was in the same boat. Everyone was scrabbling to try and work out how to do it. (Chloe)

In addition to telehealth services starting quickly, telehealth provision required clinicians to use a unique set of skills. Therapeutic interventions and approaches were identified as more challenging when seeing a client through a screen compared to being physically present together in a room.

The body language is hidden a little bit when you’re on teleconference, whereas when you’re standing up face to face with someone, or standing side by side, the person can see the whole picture. When you’re on the video link, the patient actually can’t—you both can’t see each other wholly. That’s one big barrier. (Adam)

There was an emphasis on the communication skills, such as active listening and awareness of body language, required when engaging via telehealth. These skills were seen as integral to building rapport and connection. The importance of language in an environment with limited visibility of body language is further demonstrated by one participant, who described tuning into the timing and flow of the conversation to avoid interrupting, and how these skills were pertinent when using telehealth.

In the beginning especially, we might do this thing where I think they’ve finished or there’s a bit of silence, so I go to speak and then they go to speak at the same time, and that’s different because normally in person you can really gauge that quite well if they’ve got more to say. I think those little things mean that you’ve got to work a bit harder and you’ve got to bring those things to the attention of the client often. (Robyn)

Preparing clinicians to engage in telehealth also required skills in sharing clear and consistent information with clients about the process of interacting via telehealth. This included information to reassure the client that the telehealth appointment was private as well as prepare them for potential interruptions due to connection issues.

I think being really explicitly clear about the fact that with our setups we have here, no one can dial in, no one else is in my room even watching you. We’re not recording, and there’s a lot of extra information, I think around that we could be doing better in terms of delivering to the person. (Amber)

Becoming accustomed to working through the ‘window’

Telehealth was often described as a window rather than a view of the whole person, which presented limitations for clinicians, such as missing nuances of expression. Participants described the difficulty of assessing a client via telehealth when the clinician could not see the whole picture: facial expressions, movement, behaviour, interactions with others, dress, and hygiene.

I found it was quite difficult because you couldn’t always see the actual child or the baby, especially if they just had their phone. You couldn’t pick up the body language. You couldn’t always see the facial expressions. You couldn’t see the child and how the child was responding. It did inhibit a lot of that side of our assessing. Quite often you’d have to just write, “Unable to view child.” You might be able to hear them but you couldn’t see them. (Chloe)

Because of this window view, the participants described how they needed to pay even greater attention to eye contact and tone of voice when engaging with clients via telehealth.

I think the eye contact is still a really important thing. Getting the flow of what they’re comfortable with a little bit too. It’s being really careful around the tone of voice as well too, because—again, that’s the same for face-to-face, but be particularly careful of it over telehealth. (Amber)

This theme demonstrates that unique and nuanced skills are required for clinicians to engage effectively in the provision of rural healthcare services via telehealth. Many clinicians described how the rapid uptake of telehealth required them to adapt quickly, modifying their approach rather than replicating what they would do in face-to-face contexts. Appreciating the different skill sets required for telehealth practice was perceived as an important element in supporting clinicians to deliver quality healthcare.

Theme 3: appreciating the complexities of telehealth implementation across services and environments

It was commonly acknowledged that clinicians needed to appreciate the many different environments in which telehealth was being delivered, as well as the types of consultations being undertaken. This was particularly important when well-resourced large regional settings were engaging with small rural services, or when clinicians were undertaking consultations within a client’s home.

Working from a different location and context

One of the factors identified as important for the successful delivery of services via telehealth was an understanding of the location and context being linked into. Participants regularly talked about the challenges of undertaking a telehealth consultation with clients at home, which impacted the quality of the consultation as it was easy to “lose focus” (Kelsey) and become distracted.

Instead of just coming in with one child, they had all the kids, all wanting their attention. I also found that babies and kids kept pressing the screen and would actually disconnect us regularly. (Chloe)

For participants located in larger regional locations delivering telehealth services to smaller rural hospitals, it was acknowledged that not all services had equivalent resources, skills, and experience with this type of healthcare approach.

They shouldn’t have to do—they’ve gotta double-click here, login there. They’re relying on speakers that don’t work. Sometimes they can’t get the cameras working. I think telehealth works as long as it’s really user friendly. I think nurses—as a nurse, we’re not supposed to be—I know IT’s in our job criteria, but not to the level where you’ve got to have a degree in technology to use it. (Adam)

Participants also recognised that supporting a client through a telehealth consultation adds workload stress, as rural clinicians often face caseload pressures and juggle multiple other tasks while trying to troubleshoot the technology issues associated with a telehealth consultation.

Most people are like me, not great with computers. Sometimes the nurse has got other things in the Emergency Department she’s trying to juggle. (Eleanor)

Considerations for safety, privacy, and confidentiality

Participants talked about the challenges that arose due to inconsistencies in where and how the telehealth consultation would be conducted. Concerns about online safety and information privacy were identified by participants.

There’s the privacy issue, particularly when we might see someone and they might be in a bed and they’ve got a laptop there, and they’re not given headphones, and we’re blaring through the speaker at them, and someone’s three meters away in another bed. That’s not good. That’s a bit of a problem. (Patrick)

When telehealth was offered as an option to clients at a remote healthcare site, clinicians noted that some clients were not provided with adequate support and were left to undertake the consultation by themselves, which posed safety risks for the client and left the telehealth clinician unable to control the situation.

There were some issues with patients’ safety though. Where the telehealth was located was just in a standard consult room and there was actually a situation where somebody self-harmed with a needle that was in a used syringe box in that room. Then it was like, you just can’t see high risk—environment. (Eleanor)

Additionally, participants noted that they were often using their own office space to conduct telehealth consultations rather than a clinical room which meant there were other considerations to think about.

Now I always lock my room so nobody can enter. That’s a nice little lesson learnt. I had a consult with a mum and some other clinicians came into my room and I thought “oh my goodness. I forgot to lock.” I’m very mindful now that I lock. (Jade)

This theme highlights the complexities that exist when implementing telehealth across a range of rural healthcare settings and environments. Participants noted that staff located in smaller rural areas had variable skills and experience in using telehealth, which could affect the effectiveness of the consultation. Participants identified the importance of purposely considering the environment in which the telehealth consultation was being held, ensuring that privacy, safety, and distraction concerns have been adequately addressed before the consultation begins. These factors were considered important for the successful implementation of telehealth in rural areas.

This study explored telehealth service delivery in various rural health contexts with 16 allied health and nursing clinicians who had provided telehealth services to people living in rural communities prior to and during the COVID-19 pandemic. Reflections gained from clinicians were analysed and reported thematically. The major themes identified were navigating the role of telehealth to support rural healthcare, preparing clinicians to engage in telehealth service delivery, and appreciating the complexities of telehealth implementation across services and environments.

The utilisation of telehealth for health service delivery has been promoted as a solution to resolve access and equity issues, particularly for rural communities, which are often impacted by limited health services due to distance and isolation [ 6 ]. This study identified a range of perceived benefits for both clients and clinicians, such as improved access to services across large geographic distances, including specialist care, and reduced travel time to engage with a range of health services. These findings are largely supported by the broader literature, such as the systematic review undertaken by Tsou et al. [ 25 ], which found that telehealth can improve clinical outcomes and the timeliness of access to services, including specialist knowledge. Clinicians in our study also noted the benefits of using telehealth for ad hoc clinical support outside of regular appointment times, which to date has not been commonly reported in the literature as a benefit. Further investigation into this aspect may be warranted.

The findings from this study identify a range of challenges that exist when delivering health services within a virtual context. It was common for participants to highlight that personal preference for face-to-face sessions could not always be accommodated when implementing telehealth services in rural areas. The perceived technological possibilities to improve access can have unintended consequences for community members, which may contribute to a lack of responsiveness to community needs [ 12 ]. It is therefore important to understand the client and their preferences for using telehealth rather than making assumptions about the appropriateness of this type of health service delivery [ 26 ]. As such, telehealth is likely to function best when there is a pre-established relationship between the client and clinician, and with clients who have a good knowledge of their personal health and have access to and familiarity with digital technology [ 13 ]. Alternatively, it is appropriate to consider how telehealth can be a supplementary tool rather than a stand-alone service model replacing face-to-face interactions [ 13 ].

As identified in this study, managing technology and internet connectivity are commonly reported issues for rural communities engaging in telehealth services [ 27 , 28 ]. Additionally, within some rural communities with higher socioeconomic disadvantage, limited access to an appropriate level of technology and the data required to undertake a telehealth consult deterred engagement with these services. Mathew et al. [ 13 ] found in their study that bandwidth impacted video consultations, which was further compromised by weather conditions, and that clients without smartphones had difficulty accessing the relevant virtual consultation software.

The findings presented here indicate that while telehealth can be a useful model, it may not be suitable for all clients or client groups. For example, the use of interpreters in telehealth to support clients was a key challenge identified in this study. This is supported by Mathew et al. [ 13 ], who identified that language barriers affected the quality of telehealth consultations and that accessing appropriate interpreters was often difficult. Consideration of health and digital literacy, access to and availability of technology and internet, appropriate client selection, and facilitating client choice are all important drivers to enhance telehealth experiences [ 29 ]. Nelson et al. [ 6 ] acknowledged the barriers that exist with telehealth, suggesting that ‘it is not the groups that have difficulty engaging, it is that telehealth and digital services are hard to engage with’ (p. 8). There is a need for telehealth services to be delivered in a way that is inclusive of different groups, and this becomes more pertinent in rural areas where resources are not the same as in metropolitan areas.

The findings of this research highlight the unique set of skills required for health professionals to translate their practice across a virtual medium. The participants described these modifications in relation to communication skills, the ability to build rapport, conduct healthcare assessments, and provide treatment while looking at a ‘window view’ of a person. Several other studies have reported similar skillsets that are required to effectively use telehealth. Uscher-Pines et al. [ 30 ] conducted research on the experiences of psychiatrists moving to telemedicine during the COVID-19 pandemic and noted challenges affecting the quality of provider-patient interactions and difficulty conducting assessment through the window of a screen. Henry et al. [ 31 ] documented a list of interpersonal skills considered essential for the use of telehealth encompassing attributes related to set-up, verbal and non-verbal communication, relationship building, and environmental considerations.

Despite the literature uniformly agreeing that telehealth requires a unique skill set, there is no agreement on how, when, and for whom education related to these skills should be provided. The skills required for health professionals to use telehealth have been treated as an add-on to health practice rather than as a specialty skill set requiring learning and assessment. This is reflected in research such as that by Nelson et al. [ 6 ], who found that 58% of mental health professionals using telehealth in rural areas were not trained to use it. This gap between training and practice is likely to have arisen from the rapid and widespread implementation of telehealth during the COVID-19 pandemic (i.e. the change in MBS item numbers [ 1 ]) but has not been addressed in subsequent years. For practice to remain in step with policy and funding changes, the factors required for successful implementation of telehealth in rural practice must be addressed.

The lack of clarity around who must undertake training in telehealth, and how regularly, presents a challenge for rural health professionals, whose skill set has been described as specialist-generalist, covering a significant breadth of knowledge [ 32 ]. Maintaining knowledge currency across this breadth is integral and requires significant resources (time, travel, money) in an environment where access to education can be limited [ 33 ]. There is risk associated with continually adding skills to the workload of rural health professionals without adequate guidance and provision of time to develop and maintain these skills.

Education to equip rural health professionals with the skills needed to use telehealth effectively in their practice is developing; however, until education requirements are uniformly understood and made accessible, this gap is likely to continue to pose risks for rural health professionals and the community members accessing their services. Major investment in the education of all health professionals in telehealth service delivery, no matter the context, has been identified as critical [ 6 ].

This research highlights that the experience of using telehealth in rural communities is unique, and thus a ‘one size fits all’ approach is not helpful and can overlook the individual needs of a community. Participants described experiences of using telehealth that differed between rural communities, particularly for smaller, more remote locations where resources, staff support, and experience using telehealth were not always equivalent to larger rural locations. Research has indicated the need to invest in resourcing and education to support the expansion of telehealth, noting this is particularly important in rural, regional, and remote areas [ 34 ]. Our study recognises that this is an ongoing need, as rural communities continue to have diverse experiences of using telehealth services. Careful consideration of the context of individual rural health services is required, including the community needs, location, and resource availability at both ends of the consultation. Telehealth cannot be expected to have the same outcomes in every area. It is imperative that service providers and clinicians delivering telehealth from metropolitan areas to rural communities appreciate and understand the uniqueness of every community, so that their approach is tailored and helps rather than hinders the experience of people in rural communities.

Limitations

There are a number of limitations inherent to the design of this study. Participants were recruited via their workplace, and although steps were taken to ensure they understood that the research would not affect their employment, it is possible some employees perceived an association between the research and their employment. Health professionals who had either very positive or very negative experiences with telehealth may have been more likely to participate, as they may have been more motivated to discuss their experiences. In addition, only health services that were already connected with the researchers’ networks were invited to participate. Other limitations include the purposive sampling, noting that the opinions of the participants are not generalisable. The participant group also comprised mostly nursing professionals, whose experiences with telehealth may differ from those of other health disciplines. Finally, it is important to acknowledge that the opinions of the health professionals who participated in the study may not represent or align with the experiences and opinions of service users.

Conclusion

This study illustrates that while telehealth has increased access to services for many rural communities, others have experienced barriers related to variable connectivity and the demands of managing technology. The results demonstrate that telehealth may not be the preferred or appropriate option for some individuals in rural communities, and that it is important to provide choice. Consideration of the context in which telehealth services are delivered, particularly in rural and remote communities facing challenges with resourcing and training to support health professionals, is critical to the success of telehealth service provision. Another critical factor is preparation and specific, intentional training for health professionals in how to establish, manage, and maintain telehealth services effectively. Telehealth requires a distinct skill set, and guidance on who should be trained, when, and in what remains to be determined.

Data availability

The qualitative data collected for this study were de-identified before analysis. Consent was not obtained to use or publish identified individual-level data, so these data cannot be shared publicly. The de-identified data can be obtained from the corresponding author on reasonable request.

References

1. Commonwealth of Australia. COVID-19 Temporary MBS Telehealth Services. Department of Health and Aged Care, Australian Government; 2022. https://www.mbsonline.gov.au/internet/mbsonline/publishing.nsf/Content/Factsheet-TempBB

2. Caffery LA, Muurlink OT, Taylor-Robinson AW. Survival of rural telehealth services post-pandemic in Australia: a call to retain the gains in the ‘new normal’. Aust J Rural Health. 2022;30(4):544–9.

3. Shaver J. The state of telehealth before and after the COVID-19 pandemic. Prim Care Clin Office Pract. 2022;49(4):517–30.

4. Australian Digital Health Agency. What is telehealth? 2024. https://www.digitalhealth.gov.au/initiatives-and-programs/telehealth

5. Hirko KA, Kerver JM, Ford S, Szafranski C, Beckett J, Kitchen C, et al. Telehealth in response to the COVID-19 pandemic: implications for rural health disparities. J Am Med Inform Assoc. 2020;27(11):1816–8.

6. Nelson D, Inghels M, Kenny A, Skinner S, McCranor T, Wyatt S, et al. Mental health professionals and telehealth in a rural setting: a cross-sectional survey. BMC Health Serv Res. 2023;23(1):200.

7. Butzner M, Cuffee Y. Telehealth interventions and outcomes across rural communities in the United States: narrative review. J Med Internet Res. 2021;23(8).

8. Calleja Z, Job J, Jackson C. Offsite primary care providers using telehealth to support a sustainable workforce in rural and remote general practice: a rapid review of the literature. Aust J Rural Health. 2023;31(1):5–18.

9. Sutarsa IN, Kasim R, Steward B, Bain-Donohue S, Slimings C, Hall Dykgraaf S, et al. Implications of telehealth services for healthcare delivery and access in rural and remote communities: perceptions of patients and general practitioners. Aust J Prim Health. 2022;28(6):522–8.

10. Bradford NK, Caffery LJ, Smith AC. Telehealth services in rural and remote Australia: a systematic review of models of care and factors influencing success and sustainability. Rural Remote Health. 2016;16(4):1–23.

11. Caffery LJ, Bradford NK, Wickramasinghe SI, Hayman N, Smith AC. Outcomes of using telehealth for the provision of healthcare to Aboriginal and Torres Strait Islander people: a systematic review. Aust N Z J Public Health. 2017;41(1):48–53.

12. Warr D, Luscombe G, Couch D. Hype, evidence gaps and digital divides: telehealth blind spots in rural Australia. Health. 2023;27(4):588–606.

13. Mathew S, Fitts MS, Liddle Z, Bourke L, Campbell N, Murakami-Gold L, et al. Telehealth in remote Australia: a supplementary tool or an alternative model of care replacing face-to-face consultations? BMC Health Serv Res. 2023;23(1):1–10.

14. Campbell J, Theodoros D, Hartley N, Russell T, Gillespie N. Implementation factors are neglected in research investigating telehealth delivery of allied health services to rural children: a scoping review. J Telemed Telecare. 2020;26(10):590–606.

15. Commonwealth of Australia. Modified Monash Model. Department of Health and Aged Care; 2021 [updated 14 December 2021]. https://www.health.gov.au/topics/rural-health-workforce/classifications/mmm

16. Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2000;23(4):334–40.

17. Marshall C, Rossman GB. Designing qualitative research. Sage; 2014.

18. Caelli K, Ray L, Mill J. ‘Clear as mud’: toward greater clarity in generic qualitative research. Int J Qual Methods. 2003;2(2):1–13.

19. Tolley EE. Qualitative methods in public health: a field guide for applied research. 2nd ed. San Francisco, CA: Jossey-Bass & Pfeiffer Imprints, Wiley; 2016.

20. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

21. Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual Psychol. 2017;4(1):2.

22. Allied Health Professions Australia. Allied health professions. 2024. https://ahpa.com.au/allied-health-professions/

23. Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual Res Sport Exerc Health. 2021;13(2):201–16.

24. Ritchie J, Spencer L. Qualitative data analysis for applied policy research. In: Bryman A, Burgess RG, editors. Analyzing qualitative data. Routledge; 1994. pp. 173–94.

25. Tsou C, Robinson S, Boyd J, Jamieson A, Blakeman R, Yeung J, et al. Effectiveness of telehealth in rural and remote emergency departments: systematic review. J Med Internet Res. 2021;23(11):e30632.

26. Pullyblank K. A scoping literature review of rural beliefs and attitudes toward telehealth utilization. West J Nurs Res. 2023;45(4):375–84.

27. Jonnagaddala J, Godinho MA, Liaw S-T. From telehealth to virtual primary care in Australia? A rapid scoping review. Int J Med Inform. 2021;151:104470.

28. Jonasdottir SK, Thordardottir I, Jonsdottir T. Health professionals’ perspective towards challenges and opportunities of telehealth service provision: a scoping review. Int J Med Inform. 2022;167:104862.

29. Clay-Williams R, Hibbert P, Carrigan A, Roberts N, Austin E, Fajardo Pulido D, et al. The diversity of providers’ and consumers’ views of virtual versus inpatient care provision: a qualitative study. BMC Health Serv Res. 2023;23(1):724.

30. Uscher-Pines L, Sousa J, Raja P, Mehrotra A, Barnett ML, Huskamp HA. Suddenly becoming a virtual doctor: experiences of psychiatrists transitioning to telemedicine during the COVID-19 pandemic. Psychiatr Serv. 2020;71(11):1143–50.

31. Henry BW, Ames LJ, Block DE, Vozenilek JA. Experienced practitioners’ views on interpersonal skills in telehealth delivery. Internet J Allied Health Sci Pract. 2018;16(2):2.

32. McCullough K, Bayes S, Whitehead L, Williams A, Cope V. Nursing in a different world: remote area nursing as a specialist–generalist practice area. Aust J Rural Health. 2022;30(5):570–81.

33. Reeve C, Johnston K, Young L. Health profession education in remote or geographically isolated settings: a scoping review. J Med Educ Curric Dev. 2020;7:2382120520943595.

34. Cummings E, Merolli M, Schaper L, editors. Barriers to telehealth uptake in rural, regional, remote Australia: what can be done to expand telehealth access in remote areas. In: Digital Health: Changing the Way Healthcare is Conceptualised and Delivered. Selected Papers from the 27th Australian National Health Informatics Conference (HIC 2019). IOS Press; 2019.


Acknowledgements

The authors would like to acknowledge Georgina Luscombe, Julian Grant, Claire Seaman, Jennifer Cox, Sarah Redshaw and Jennifer Schwarz who contributed to various elements of the project.

The study authors are employed by Three Rivers Department of Rural Health. Three Rivers Department of Rural Health is funded by the Australian Government under the Rural Health Multidisciplinary Training (RHMT) Program.

Author information

Authors and affiliations

Three Rivers Department of Rural Health, Charles Sturt University, Locked Bag 588, Tooma Way, Wagga Wagga, NSW, 2678, Australia

Rebecca Barry, Elyce Green, Kristy Robson & Melissa Nott


Contributions

RB & EG contributed to the conceptualisation of the study and methodological design. RB & MN collected the research data. RB, EG, MN, KR contributed to analysis and interpretation of the research data. RB, EG, MN, KR drafted the manuscript. All authors provided feedback on the manuscript and approved the final submitted manuscript.

Corresponding author

Correspondence to Rebecca Barry .

Ethics declarations

Ethics approval and consent to participate

Ethics approvals were obtained from the Greater Western NSW Human Research Ethics Committee and Charles Sturt University Human Research Ethics Committee (approval numbers: 2021/ETH00088 and H21215). Informed written consent was obtained from all participants. All methods were carried out in accordance with the relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article

Barry, R., Green, E., Robson, K. et al. Factors critical for the successful delivery of telehealth to rural populations: a descriptive qualitative study. BMC Health Serv Res 24 , 908 (2024). https://doi.org/10.1186/s12913-024-11233-3


Received : 19 March 2024

Accepted : 23 June 2024

Published : 07 August 2024



Keywords

  • Service provision
  • Rural health
  • Allied health
  • Rural workforce

BMC Health Services Research

ISSN: 1472-6963

Letter to the Editor – Re: Indigenous people’s experiences of primary health care in Canada: a qualitative systematic review

Health Promotion and Chronic Disease Prevention in Canada Journal


Chandrakant P. Shah, MD, FRCPC, SM (Hyg), DrSc(Hon), OOnt

https://doi.org/10.24095/hpcdp.44.7/8.06

Recommended attribution: Letter to the Editor by Shah CP in the HPCDP Journal, licensed under a Creative Commons Attribution 4.0 International License

Chandrakant P. Shah; Email: [email protected]

Shah CP. Re: Indigenous people’s experiences of primary health care in Canada: a qualitative systematic review. Health Promot Chronic Dis Prev Can. 2024;44(7/8):347-8. https://doi.org/10.24095/hpcdp.44.7/8.06

Dear Editor,

I read the article by G. Barbo and S. Alam titled “Indigenous people’s experiences of primary health care in Canada: a qualitative systematic review,” which was published in the April issue of your journal.[1] As someone who has been involved in providing primary care to the Indigenous communities of Northwest Ontario and at Anishnawbe Health Toronto, I found the article insightful. It reaffirmed commonly known facts about the issues facing Indigenous peoples in health care, such as privacy concerns, racism, discrimination and a lack of culturally safe care.[2] Although organizations such as the Indigenous Physicians Association of Canada have developed needed core competencies for health care professionals,[3] and the Provincial Health Services Authority of British Columbia has developed courses on providing culturally safe care, the pace of change remains slow, and we continue to read stories of racism and discrimination against Indigenous people to this day.

What is the root cause of these issues? In the late 1980s, when I taught Indigenous health to health sciences students at the University of Toronto, I asked my class to describe Indigenous people and one other group, such as Italian or Japanese Canadians. I was dismayed to find that nearly 90% of the adjectives cited for Indigenous people were stereotypically negative, compared with only 10% for the other group. Most students had had no direct encounters with Indigenous people and based their opinions on media portrayals.[4] I realized that the students harboured “unconscious bias,” and that, to remedy this, they needed Indigenous cultural safety courses taught by Indigenous teachers with “lived experience.” I also conducted an environmental scan of the teaching of Indigenous health across health sciences programs in Ontario’s colleges and universities and found that most lacked such courses because of a shortage of Indigenous teachers.[5] With the help of Indigenous professionals, we developed an Indigenous cultural safety course and trained Indigenous preceptors across Ontario to deliver it in health sciences programs. We found positive changes in students’ attitudes towards Indigenous people.[6]

What is needed is education for all Canadians, young and old, new and naturalized, about Indigenous history and the impact of colonization, residential schools, the Sixties Scoop and postcolonial policies such as the Indian Act on the health and well-being of Indigenous peoples. As a starting point, I recommend reading Honouring the Truth, Reconciling for the Future by the Truth and Reconciliation Commission of Canada.[7]

Thank you for bringing this issue to light.
