Child Care and Early Education Research Connections

Assessing research quality.

Research is one way to identify effective strategies and inform practice and policymaking decisions. Effective use of research in decision-making can help agencies and organizations:

  • Understand social and educational challenges and their root causes,
  • Make well-informed decisions about programs and policies, and 
  • Leverage public and private resources effectively.

Early care and education leaders are responsible for developing policy and making practice decisions that impact providers, families, and children. Often, leaders will seek input and guidance from different sources when making these decisions. These sources may include community needs assessments, administrative data, information about financial resources, research, and other sources of information.

As beneficial as research can be to help shape policy and practice decisions, understanding research can feel overwhelming. The technical language used in research may make it hard to understand and prevent you from fully using it as a tool. Also, it might not be easy to determine how trustworthy or useful a research-based resource might be for your decision-making process.

This resource is designed to help you ask key questions about the relevance, credibility, and rigor of research as you consider decisions about investments in early care and education practice and policy.

How can I learn more about understanding research?

For more information on understanding research, consider exploring other tools available on the Research Connections website. 

  • Consider using this research glossary when reading research articles.
  • Use this quick reference to better understand different types of study designs.
  • Check out the Research Assessment Tools for more information on assessing the rigor of qualitative or quantitative research.
  • Use the Assessing Research Quality Checklist as you think about the usefulness of research when considering policy and practice decisions.

Contact the authors of the study if you have additional questions. Most researchers would be open to hearing from practitioners and policymakers.

Research and Quality Improvement: How Can They Work Together?

Research and quality improvement provide a mechanism to support the advancement of knowledge, and to evaluate and learn from experience. The focus of research is to contribute to developing knowledge or gather evidence for theories in a field of study, whereas the focus of quality improvement is to standardize processes and reduce variation to improve outcomes for patients and health care organizations. Both methods of inquiry broaden our knowledge through the generation of new information and the application of findings to practice. This article in the "Exploring the Evidence: Focusing on the Fundamentals" series provides nephrology nurses with basic information related to the role of research and quality improvement projects, as well as some examples of ways in which they have been used together to advance clinical knowledge and improve patient outcomes.

Keywords: kidney disease; nephrology; quality improvement; research.

Copyright © by the American Nephrology Nurses Association.

What is quality research? A guide to identifying the key features and achieving success

Every researcher worth their salt strives for quality. But in research, what does quality mean?

Simply put, quality research is thorough, accurate, original and relevant. And to achieve this, you need to follow specific standards. You need to make sure your findings are reliable and valid. And when you know they're quality assured, you can share them with absolute confidence.

You’ll be able to draw accurate conclusions from your investigations and contribute to the wider body of knowledge in your field.

Importance of quality research

Quality research helps us better understand complex problems. It enables us to make decisions based on facts and evidence. And it empowers us to solve real-world issues. Without quality research, we can't advance knowledge or identify trends and patterns. We also can’t develop new theories and approaches to solving problems.

With rigorous and transparent research methods, you’ll produce reliable findings that other researchers can replicate. This leads to the development of new theories and interventions. On the other hand, low-quality research can hinder progress by producing unreliable findings that can’t be replicated, wasting resources and impeding advancements in the field.

In all cases, quality control is critical. It ensures that decisions are based on evidence rather than gut feeling or bias.

Standards for quality research

Over the years, researchers, scientists and authors have come to a consensus about the standards used to check the quality of research. Determined through empirical observation, theoretical underpinnings and philosophy of science, these include:

1. Having a well-defined research topic and a clear hypothesis

This is essential to verify that the research is focused and the results are relevant and meaningful. The research topic should be well-scoped, and the hypothesis should be clearly stated and falsifiable.

For example, in a quantitative study about the effects of social media on behavior, a well-defined research topic could be, "Does the use of TikTok reduce attention span in American adolescents?"

This is good because:

  • The research topic focuses on a particular social media platform (TikTok) and on a specific group of people (American adolescents).
  • The research question is clear and straightforward, making it easier to design the study and collect relevant data.
  • You can test the hypothesis, and a research team can evaluate it easily, through various research methods such as survey research, experiments or observational studies.
  • The hypothesis is focused on a specific, measurable outcome (attention span), which can be compared against control groups or previous research studies. A minimal analysis sketch follows this list.
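
To make this concrete, here is a minimal sketch in Python of how the hypothesis above might be tested with a two-sample t-test. Everything in it is an assumption for illustration: the group sizes, score distributions and effect are invented, and a real study would analyze attention scores collected from actual participants.

```python
# Minimal illustrative sketch: comparing attention-span scores between a
# hypothetical group of adolescent TikTok users and a control group.
# All data below are simulated purely for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Simulated scores on a sustained-attention task (arbitrary units)
tiktok_group = rng.normal(loc=48, scale=10, size=200)   # hypothetical users
control_group = rng.normal(loc=52, scale=10, size=200)  # hypothetical non-users

# Two-sample t-test: how unlikely is this difference under chance alone?
t_stat, p_value = stats.ttest_ind(tiktok_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("The difference is statistically significant at the 0.05 level.")
else:
    print("No statistically significant difference was detected.")
```

Because the hypothesis names a single measurable outcome and a single comparison, the analysis can be specified this simply in advance, which is exactly what a well-scoped, falsifiable hypothesis buys you.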

2. Ensuring transparency

Transparency is crucial when conducting research. You need to be upfront about the methods you used, such as:

  • How you recruited the participants.
  • How you communicated with them.
  • How they were incentivized.

You also need to explain how you analyzed the data, so other researchers can replicate your results if necessary. Pre-registering your study is a great way to be as transparent in your research as possible. This involves publicly documenting your study design, methods and analysis plan before conducting the research. This reduces the risk of selective reporting and increases the credibility of your findings.

3. Using appropriate research methods

Depending on the topic, some research methods are better suited than others for collecting data. To use our TikTok example, a quantitative research approach, such as a behavioral test that measures the participants' ability to focus on tasks, might be the most appropriate.

On the other hand, for topics that require a more in-depth understanding of individuals' experiences or perspectives, a qualitative research approach, such as interviews or focus groups, might be more suitable. These methods can provide rich and detailed information that you can’t capture through quantitative data alone.

4. Assessing limitations and the possible impact of systematic bias

When you present your research, it’s important to consider how the limitations of your study could affect the results. This could be systematic bias in the sampling procedure or data analysis, for instance. Let’s say you only study a small sample of participants from one school district. This would limit the generalizability (external validity) of your findings.

5. Conducting accurate reporting

This is an essential aspect of any research project. You need to be able to clearly communicate the findings and implications of your study, and provide citations for any claims made in your report. When you present your work, it’s vital that you accurately describe the variables involved in your study and how you measured them.

Curious to learn more? Read our Data Quality eBook.

How to identify credible research findings

To determine whether a published study is trustworthy, consider the following:

  • Peer review: If a study has been peer-reviewed by recognized experts, that’s a strong signal of reliability. Peer review means that other scholars have read and scrutinized the study before publication.
  • Researcher's qualifications: If they're an expert in the field, that’s a good sign that you can trust their findings. If they aren't, it doesn’t necessarily mean that the study's information is unreliable. It simply means that you should be extra cautious about accepting its conclusions as fact.
  • Study design: The design of a study can make or break its reliability. Consider factors like sample size and methodology (see the sketch after this list).
  • Funding source: Studies funded by organizations with a vested interest in a particular outcome may be less credible than those funded by independent sources.
  • Statistical significance: You've heard the phrase "numbers don't lie," right? Statistical significance refers to how unlikely a study's results would be if chance alone were at work, summarized by the p-value. Statistically significant results are harder to dismiss as flukes, though significance alone doesn't guarantee a finding matters in practice.
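
As a rough illustration of how sample size and statistical significance interact, the sketch below uses the statsmodels Python library to estimate how many participants a two-group study would need to detect an effect reliably. The effect sizes, significance level and power target are conventional placeholder assumptions, not recommendations for any particular study.

```python
# Illustrative power analysis: how many participants per group does a
# two-sample comparison need? All inputs are placeholder conventions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumptions: a "medium" effect (Cohen's d = 0.5), a 5% significance
# level, and an 80% chance of detecting the effect if it is real.
n_medium = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Medium effect (d = 0.5): about {n_medium:.0f} participants per group")

# A smaller effect is harder to detect and demands a much larger sample.
n_small = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Small effect (d = 0.2): about {n_small:.0f} participants per group")
```

Under these assumptions the answer comes out at roughly 64 and 394 participants per group respectively, which is one reason a non-significant result from a small sample says very little on its own.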

Achieve quality research with Prolific

Want to ensure your research is high-quality? Prolific can help.

Our platform gives you access to a carefully vetted pool of participants. We make sure they're attentive, honest, and ready to provide rich and detailed answers where needed. This helps to ensure that the data you collect through Prolific is of the highest quality.

With Prolific, you can streamline your research process and feel confident in the results you receive. Our minimum pay threshold and commitment to fair compensation motivate participants to provide valuable responses and give their best effort. This ensures the quality of your research and helps you get the results you need. Sign up as a researcher today to get started!


Article Contents

1. Introduction and objectives
2. Situating the account: research evaluation and the humanities
3. Exploring quality from within: conceptualizations and procedures
4. Entry points to field-type quality notions in humanities research
5. Discussion and concluding remarks

Quality from within: Entry points to research quality in the humanities

Klara Müller, Linus Salö, Sverker Sörlin, Quality from within: Entry points to research quality in the humanities, Research Evaluation, Volume 33, 2024, rvae029, https://doi.org/10.1093/reseval/rvae029


It is well known that research quality notions vary across research fields. Despite this, humanities quality notions are often portrayed as deviant or particularly hard to grasp. To some extent, this has a historical explanation, as notions from within the humanities have not been the standards used in the development of research evaluation tools. Accordingly, we argue that current discussions on research evaluation and quality notions reflect a lack of understanding of how field-type quality notions in the humanities can be studied. We therefore identify entry points to further studies on how humanities scholars address quality notions in their own words, what one might call ‘quality from within’. The suggested entry points are assessment for recruitment, field-type internal evaluations, public debates on the humanities, book reviews, the academic seminar, PhD supervision, academic memoirs, obituaries and the Festschrift. We here outline how an empirically grounded research agenda around quality in humanities research can be fruitfully pursued. Thus, the study aims to contribute insights into why and how a fresh perspective can provide us with much-needed entry points to understanding quality from within.

In this day and age, quality is everywhere, and almost everything is evaluated in one way or another. The realm of science is certainly no exception. On the contrary, quality is at once a buzzword in science policy and a feature of everyday academic life (e.g. Lamont 2009 ; Benner 2011 ; Dahler-Larsen 2019 ). Grant proposals, scientific articles, student essays, instructor performance—whatever the task or genre, ample efforts go into gauging, establishing or increasing its quality. Yet, given the centrality of quality notions, it is surprising that the stock of knowledge about what constitutes research quality is so scant. This may be particularly true for the humanities. Much of what we know about quality in this vast, diverse and multicore research area pertains to negative definitions: what quality is not in the humanities. An explanation for this state of affairs could be that the humanities have, for a period of several decades, perceived themselves—rightly or not—as being on the defensive, fending off what have been perceived as externally imposed quality articulations ( Ochsner, Hug and Galleron 2017b ; Ekström and Östh Gustafsson 2022 ; Ekström and Sörlin 2022 ).

As scholars immersed in humanities knowledge environments, we have not found substantive empirical support for the concern—real or perceived—that research quality occupies a less central place in the humanities than it does in other research areas. Our concern is, however, the lack of clearly signposted entry points through which to empirically explore research quality notions from within the humanities. In this article, accordingly, we focus on research quality originating from within specific research fields, identified by Langfeldt et al. (2020) as ‘field-type’ quality notions. Rather than interrogating how quality is evaluated by external means and instances, we turn to how researchers have reasoned about the sort of ‘quality cultures’ that are locally cultivated within their own disciplinary practices ( Lamont 2009 ). We are interested in what one might call ‘quality from within’, as opposed to those indicators of quality that are perceived as ‘imposing’, as quality notions coming from the ‘outside’. Those from the ‘outside’ have often been questioned by members of the humanities research community as imprecise, irrelevant or serving controversial ends ( Ochsner, Hug and Daniel 2016 ). However, we acknowledge that all research quality notions are, to varying degrees, created through negotiations between what can be described as the internal and external. This phenomenon is captured in what Müller (2024) denotes as responsive quality articulations, which points to how quality notions of the humanities percolate as responses to other quality notions. These ‘other’ notions may, in many cases, be identified as space-type quality notions, which originate in policy and funding spaces and are thus distinct from field-type notions ( Langfeldt et al. 2020 ).

The purpose of this article is to identify and critically discuss a set of empirical entry points for studying quality notions in humanities research. More specifically, we seek to signpost where ‘field-type’ notions of research quality linger by accounting for some of the ways in which humanities scholars themselves have understood, discussed and evaluated research quality, both as individuals and as socially organized epistemic communities, such as professional organizations, institutes and disciplines. While not exhaustive, the examples we present make up a broad and representative set of ways in which research quality in the humanities has been negotiated historically and contemporarily. These entry points serve to outline how an empirically grounded research agenda around quality in humanities research can be fruitfully pursued.

Our approach to the topic is not exclusive to any particular region, but we draw primarily on empirical observations from the development of humanities in Sweden, a population-wise small country with a high R&D intensity in STEM fields and a sizeable but less prominent position in humanities research. 1 In this regard, Sweden is certainly not unique ( Holm, Jarrick and Scott 2015 ; Benneworth, Gulbrandsen and Hazelkorn 2016 ).

We argue that current research on research quality and evaluation lacks perspectives coming from within the humanities and, more generally, a historical understanding of the emergence of field-type quality notions. The time depth of most studies, to our knowledge, is low indeed, almost as if there had been no articulated understanding of research quality before metrics intervened. Consequently, a wide range of empirical material that we believe could enrich the discourse and practice of quality has been overlooked. By addressing this knowledge gap, we want to reframe current discussions on what research quality means in the humanities, and what it meant in the past. Our contribution aims to investigate this in practice by drawing on field-type empirical material to provide a number of previously overlooked entry points, however challenging they may be to capture.

In many countries across the global north, the advent of heightened governance and metrics-based evaluation and rankings of research took off in the 1990s, after which ‘the search for excellence’ swiftly picked up speed and intensity (Benner 2011; see also Franssen and Wouters 2019). Scholarly communities followed suit as quality-ensuring practices, notably systematized peer review, were broadly implemented (Baldwin 2018). This development, which certainly included Sweden, was linked to ambitions to enhance accountability and differentiate the research component of national higher education systems (Sörlin 2007; Benner and Holmqvist 2023). The new regime of evaluation did not affect all scientific fields evenly—neither at the same time nor in the same vein. According to Whitley (2007), the use of peer review to secure the quality of individual works, or metrics to evaluate and rank individuals and collectives, impacted different fields and their existing quality notions in varying ways. This interaction sometimes led to clashes with whatever was already in place within fields and their historically developed value economies, publishing practices and, thus, quality notions (Ekström and Sörlin 2012; Hammarfelt and de Rijcke 2015; Salö 2017). From this, it followed that the metrics-driven quality culture was most intensely criticized by parts of the humanities and the social sciences. What was initially described as an emerging economy of publications and citations (Larsson 2009) soon became an object of critical concern (e.g. Nylander et al. 2013; Widmalm 2013; see Engwall, Edlund and Wedlin 2023 for a recent critical analysis).

During the early 2000s, humanities scholars opposed the use of certain tools for evaluating research quality (e.g. Glänta 2005 ; Elvebakk and Enebakk 2006 ). This critique was particularly aimed at the use of quantitative measurements, originally developed for other areas connected to ideas about ‘objectifying the judgment’ ( Guillory 2005 ). Metrics such as publication counts, journal impact factors, H- and other indexes or citation numbers were perceived by some humanities scholars as reasonable and useful tools, while others regarded them as conflicting with prevailing quality notions ( Sörlin 2018 ).

The main fault line in these debates or controversies, waged as they were within and between different spheres of society, concerned the idea, or concept, of quality. As noted earlier, quality has been omnipresent as a desire among policy makers and as a terrain of struggle among scholars—particularly those within the humanities. Yet, the meanings of quality are not absolute, fixed or timeless but rather flexible and context-dependent. The evolution of quality notions thus constitutes a relevant research object to explore using historical and sociological approaches ( Langfeldt et al. 2020 ).

The lack of knowledge on research quality in the social sciences and the humanities has, however, led to an increased interest in empirical studies more recently, with organizations such as the European Network for Research Evaluation in the Social Sciences and Humanities serving as the meeting place for these efforts ( Ochsner, Galleron and Ionescu 2017a ). In this article, we draw inspiration from work by Ochsner et al. that has emphasized the need to assess humanities research through its ‘own approaches’ ( Hug, Ochsner and Daniel 2013 ; Hug and Ochsner 2014 ; Ochsner, Hug and Daniel 2016 ; Ochsner, Hug and Galleron 2017b ).

In the general discourse on research evaluation, highlighting the ‘otherness’ of the humanities has been a common practice ( Franssen and Wouters 2019 ). One type of research drawing on this claim highlights the flaws in currently used tools and practices, such as research on the (under)representation of the humanities in databases used for bibliometric analysis ( Hammarfelt 2014 ; 2016 ). Another strand of critique is about the implications of certain evaluation practices in the humanities, primarily the implications of metrics in academic knowledge production ( Hammarfelt and de Rijcke 2015 ; Nästesjö 2021 ).

This research, derived from the ‘quality wars’ in academia in general and the humanities in particular, has enhanced our understanding of research quality and energized the discussion of evaluation practices. However, all examples mentioned here have a contemporary focus, looking primarily at how to change currently used tools in order to make them ‘fairer’ for humanities scholars. While important and relevant, this focus on ‘mend-and-repair’—how to formulate and operationalize quality vis-à-vis the contemporary situation—disregards how certain quality notions came to be dominant in the first place, how the discourse of quality was enabled and what this history might imply for our current understanding of quality issues.

Two pivotal concepts in our approach are ‘entry points’ and the idea of exploring the humanities ‘from within’. A third one, of course, is ‘the humanities’, here understood as a broad and heterogeneous denominator of scholarly activity which comprises numerous disciplines as well as trans-, post- and anti-disciplinary environments ( Benneworth, Gulbrandsen and Hazelkorn 2016 ; Krämer 2018 ). The humanities comprise a wide range of organizational units, which harness their own quality notions to varying degrees, akin to other areas of research where field-type and space-type quality notions coexist and conflict ( Langfeldt et al. 2020 ). However, drawing on recent efforts to historicize the humanities as a heterogeneous and yet still comprehensible area of knowledge, we follow the impetus of tracing the humanities beyond their individual disciplinary arrays ( Bod et al. 2016 ; Ekström and Östh Gustafsson 2022 ; Paul 2022a ). This broad approach makes it possible to establish a set of preliminary insights into research quality in the humanities.

Yet, to invoke the view ‘from within’ is to opt for an approach where locally and internally cultivated and perceived quality notions—emic perspectives—are brought to the fore. The basic assumption is that understandings of quality, or clues thereof, are empirically accessible within the practices of academic life. Conceptually, our stance is that the study of individuals’ perceptual involvement in the fields they inhabit is part and parcel of field-type quality notions, insofar as the value economies of fields are taken up or embodied throughout the life histories of those who have dwelled therein over a prolonged period of time ( Ingold 2000 ). For example, through their experiences of taking part in the key practices of the first category, their sense and judgement of research quality is at once shaped and shaping.

This aspiration is reflected in our cases. Initially, we attempted to map available methods based on existing literature, inspired by Budtz Pedersen, Følsgaard Grønvad and Hvidtfeldt (2020) . This proved challenging due to the scarcity of literature dealing directly with matters of research quality in the humanities, and since quality articulations from within often appear against the backcloth of a primary object or as an implicit feature of something else, such as peer review ( Forsberg et al. 2022 ). Quality may go under another name or be rendered salient only by implication. Moreover, rather than focusing on how to explore quality methodologically, our sensitization to ‘entry points’ reflects an interest in signposting where quality notions may be found. Thus, we take an interest in identifying more or less institutionalised academic practices where different types of quality notions linger, as a first step in a pursuit of investigating them empirically. From this vantage point, we collectively undertook a literature review in search of work describing arenas of quality. The outcome of this exercise was a long list of plausible cases, many of which pertained to institutionalized academic activities where systematic quality work has been practised for a long time and where quality notions, accordingly, are discernible. Typically, these are local arenas where field-type value economies are rendered salient, albeit often by normatively expressed dominant agents licensed to speak in the name of the field in question. Examples of such arenas include procedures of evaluation in recruitment and field-type internal evaluations but also, digging deeper into the empirical realities of everyday work life, practices such as the academic seminar and PhD supervision.

In the following sections, the findings are presented according to the various entry points identified from our research in the following order:

  • Assessment for recruitment
  • Field-type internal evaluations
  • Public debates on the humanities
  • The academic seminar
  • PhD supervision
  • Scholarly book reviews
  • Academic memoirs, obituaries and ‘Festschriften’

Some sections, such as ‘Assessment for recruitment’, draw more extensively on previous studies, with concrete examples and insights into sources which have not yet been framed as field-type quality notions. Others, such as ‘The academic seminar’, explore plausible data sources that have received less attention in the context of research quality notions, thus serving as avenues for further inquiry.

Notably, while some entry points, such as PhD supervision, stand out as being broad, nested and multi-activity practices, others, such as scholarly book reviews, are characterized by being fairly specialized and by materializing in a particular artifact where notions of research quality are encoded. While acknowledging that the selected entry points differ in this vein, we also hold that the approach opted for has allowed us to examine the complex nature of how research quality notions of the humanities coexist and develop, drawing on both more established understandings and overlooked areas of interest. Navigating through these entry points, we aim to contribute to a deeper understanding of how quality is perceived and evaluated in humanities research.

One of the more frequently used entry points in previous studies of research quality and related themes is external expert review in processes of position appointments, promotion and other procedures linked to professional advancement (e.g. Hamann 2019). It follows that most of what is currently known about quality notions from within the humanities has been extracted by analysing expert reports (see e.g. Nilsson 2009; Gunvik-Grönbladh 2014; Hammarfelt 2017; Hamann and Kaltenbrunner 2022; Ganuza and Salö 2023). By considering these expert opinions as a potential entry point to the study of quality from within, we redefine the category of empirical material.

While procedures of this sort are practised in many academic systems, it is a longstanding and deeply institutionalized practice in Sweden and other countries alike, where senior peers who work elsewhere but are active in the same or an adjacent field are tasked to evaluate all candidates and rank them vis-à-vis a number of criteria ( Riis, Hartman and Levander 2011 ; Gunvik-Grönbladh 2014 ). The professional judgement of the referees thus becomes the core of a gatekeeping mechanism based on, among other things, field-type quality notions (e.g. Hammarfelt, Rushforth and de Rijcke 2020 ).

In Sweden, studies have been able to tease out humanities quality notions by comparing humanities fields with fields in other research areas. Scrutinizing the actual expert review reports, Nilsson (2009) includes the field of literature studies as compared with physics and political science. Similarly, Hammarfelt (2017) explores the field of history in comparison with biomedicine and economics. Both of these studies show that humanities fields have less standardized notions of research quality and that metrics and proxy measures are rarely used to compare and rank candidates. The notion of quality that distinguishes the humanities fields from their counterparts elsewhere, then, is the practice of relying on a qualitative conception of quality and therefore on the professional judgement of the evaluator. The evident arbitrariness of professional judgement is explored at length by Ganuza and Salö (2023) , who show, analysing applied linguistics in Sweden, that the practical sense of expert evaluators tends to correspond with their own interests, positions and investment strategies within the field.

Recruitment procedures and their document trail comprise a fairly well studied entry point, which has provided ample information on academic values and virtues. It serves as an entry point with clearly articulated descriptions of how professional judgement is used to decide on the quality of an application. However, expert reports rarely reflect the opinions of more than the handful of individual scholars who make up the review committee (sometimes only one expert), and many other factors beyond research quality play into the final assessment, such as teaching or service to the university. The reports make up a particular genre where the decision has to be legitimized in hindsight for external review. Another limitation is the academic range; competitions for positions typically relate to a single discipline or field. Values that are shared by the humanities in the wider sense, and their impact on and contributions to society or even to a university, are rarely clearly articulated in assessment for recruitment. This is what one finds in the following entry point, here defined as field-type internal evaluations.

A source of rich empirical material for the study of research quality in the humanities is the fairly large body of collective evaluations from the last several decades. Further back, in the post-WWII decades, evaluations or reports in or from the humanities often took the purview of the nation, or, more grandiosely ‘civilization’ or ‘humanity’, like the Report of the Commission of the Humanities published in the United States in 1964 ( Keeney and Wells 1964 ). They certainly tell of ideals and grand visions of the study of humanities, but they rarely probe into the internal workings of the knowledge-providing institutions. The production of in-depth evaluative reports belongs largely in the neoliberal era and requires the kind of framing mindset of cost-efficient neoliberalism, reinforced by a sense of rational deliberation of resource allocation between academic fields and performing institutions. These evaluations started to appear in the 1980s; in Sweden, they first focused on scientific disciplines through the research councils. Somewhat later, entire universities opted to undertake evaluations, linked to the governments’ demand for strategy documents. This happened in the Nordic countries on a broad scale from the 1990s. Individual faculties and schools followed suit.

The humanities became the subject of such evaluations or strategic committees—two branches of the same tree—during the 1990s. To mention a few examples, in Sweden, Umeå University launched a report on their humanities faculty in 1999 ( Jonsson et al. 1999 ), the University of Gothenburg commissioned a committee of Nordic scholars to evaluate their large but sprawling (more than forty separate departments or divisions) faculty of humanities in 2000–01, and Lund University did a similar review in 2004 ( Sörlin et al. 2001 ; Jonsson et al. 2004 ). As university-wide external research assessment exercises became the norm in the following years, humanities reports typically came under scrutiny as part of these. At the same time, the conditions for the humanities as a whole were examined ( Geschwind and Larsson 2008 ). In addition, a research project on the future of humanities, funded by the Riksbankens Jubileumsfond foundation, was launched, largely to critically examine the widespread perception that research in the humanities was not only limited but also of poor quality ( Ekström and Sörlin 2012 ). 2 Other reports and studies on the humanities were undertaken in the period since the 1990s, also initiated by the government ( Forskningsberedningen 1997 ).

To conclude, these field-type internal evaluations contain plenty of information about the opinions of qualified scholars: the evaluators who express their views on a wide array of institutional arrangements and research products and results. These reports typically contain so much data from the inside of humanities departments that it would be difficult for any research project to repeat it. This data is also historical, in the sense that the evaluations represent an archive of past quality understandings of the kind that otherwise typically stay silent. In this way, field-type internal evaluations offer detailed, historical data on institutional practices and opinions, thus, preserving quality notions from within.

Another entry point is public debates and discussions on the humanities. Some of these discussions have already been the focus of recent studies on the general role or value of the humanities ( Small 2013 ; Holm, Jarrick and Scott 2015 ), yet we have not found any examples of studies that centre on research quality notions in these debates. While the humanities have been under-articulated within research policy discussions such as governmental research bills ( Ekström and Sörlin 2012 ), studies on the impact and public knowledge circulation of the humanities indicate that humanities scholars have nonetheless had a strong presence in public debates ( Hammar and Östling 2021 ; Salö, Sörlin and Benner 2021 ; Östling, Jansson and Svensson Stringberg 2022 ). A Swedish example is the fall 2005 debates on quality and the humanities in Sweden’s largest daily, the Dagens Nyheter , starting with the results of a bibliometric study illustrating a comparatively low performance of the Swedish humanities. 3

Humanities scholars engaging in these debates are often renowned, using their platform or authority to argue for their own claims about what research quality in the humanities is and how it should be obtained. For example, in Historisk Tidskrift , historian Jonas Nordin (2008) described how the journal had applied quality assurance through advisory boards of historians since its start in 1881, with the addition of double-blind peer review during the early 2000s. These forms of evaluation suited the journal well, Nordin argued, whereas the use of bibliometrics to determine the quality of historical research was the ‘worst type of charlatanry’ for a number of reasons: the methods were not adjusted for publishing patterns in the fields, yielding a false, simplified picture of what good and important research could be, while simultaneously replacing ‘qualitative criteria with quantitative ones’ (608).

In the study of field-type research quality notions, public debates serve as an entry point for understanding how humanities scholars have defended and promoted quality, in particular in response to external quality notions.

While peer review has emerged as the gold standard for quality assurance, it is not the sole—or even the primary—way of upholding scientific standards within the humanities. Another, often complementary, approach is the academic seminar, the history of which dates back to the first half of the eighteenth century (Karlsohn 2016). The seminar is a well-established practice at research institutions across disciplines, though the exact meaning of a seminar is often locally cultivated (Cronqvist and Maurits 2016). In many Swedish humanities departments, the seminar is recognized as a well-developed internal quality evaluation system. Historian of ideas Henrik Björck, for example, identifies the seminar as an important aspect of collegiality, which he in turn describes as a robust and well-tried way of continuously assessing research quality (Björck 2013). The quality assurance that takes place when drafts are scrutinized within the academic seminar, and according to its standards, is also, in some cases, expected to replace or prepare for other forms of peer review such as submitting manuscripts for blind peer review (Hammarfelt, Hammar and Francke 2021). Humanities scholars tend to appreciate and observe the academic seminar culture with great care, and it tends to rest on tacit, institutionally inherited knowledge.

The seminar should also be regarded as an important site of practice in the humanities, especially through negotiations of research quality and its transmission across generations of scholars. Its eminent position in many humanities’ quality cultures is explained by virtue of this fact. The seminar is not just an infrastructure, like a library or a database. It is more akin to the field station or the laboratory in the sciences: a collaborative space for experimentation and questioning, for refinement of argument and common and individual creativity. Humanities scholars conduct much of their work in solitude, silently reading, writing and reflecting in archives, in research libraries or in front of their computer screens. But the seminar is a place where they work and talk together. These collaborative practices are hard to separate from the norms; practices reflect and embody the norms and uphold them, disseminating them in real time to new arrivals. Through these practices, scholars can also question the norms when they (rarely) prove insufficient to serve the purpose of the enterprise.

One illustration is the Princeton Humanities Council’s description of its seminars: ‘Several seminars are held with the aim of providing a focus for discussion, study, and the exchange of ideas in the humanities. The seminars take different forms, but traditionally they have been conducted by guests invited to present material upon which they are working. Past seminar leaders have included Erich Auerbach, Hannah Arendt, W. H. Auden, Noam Chomsky, Roman Jakobson, Elaine Scarry, Joan Scott, and Raymond Bellour. Faculty and graduate students from Princeton University, the Institute for Advanced Study, and the community at large participate in each seminar’ (Princeton Humanities Council 2022).

From this quote it must be assumed that there is really no need for quantitative or other forms of evaluation of the seminars. The names speak for themselves, although it is of course also true that Noam Chomsky belongs among the most highly cited scholars of all categories of all time (Perez Vico et al. 2024). To mention it in this context, however, would be a faux pas, almost a blasphemy, and clearly a devaluation of the Gauss seminar as an institution. So, trust in reputation and nimbus may in fact be a defensible position to take in the humanities, whose quality cultures hence come across as less formalistic and precise.

To summarize, academic seminars play a crucial role in transferring and developing research quality notions of the humanities. They provide a recurring platform for peer evaluation of work-in-progress, while also facilitating collective discussions on what constitutes high-quality research. Seminars transmit both standards between generations of scholars and, more generally, tacit knowledge of how to practise research quality in the humanities. Despite their centrality, seminars have not been a primary data source in many studies on quality notions, and could therefore contribute fresh perspectives on quality from within.

In line with the seminar, supervision practices also tend to be based on tacit, institutionally inherited knowledge and understandings of quality notions. Supervision of PhDs and postdocs tends to be a prioritized activity where norms of research quality are constantly rehearsed and transmitted to new generations of practitioners, previously studied through ethnographic approaches (Gerholm and Gerholm 1992; Ehn and Löfgren 2004), though not explicitly as a source of field-type quality notions. These are quality-maintaining and -enhancing arenas and should be at least somewhat distinguished from the quality-controlling institutions of PhD examination and other formal arrangements to ensure that the final product of the PhD training has passed the quality bar (Odén 1991).

Only recently has supervision of PhD students become more formalized and, since the 1990s, it has increasingly been the subject of both compulsory and voluntary regulation in humanities faculties and departments. Most universities demand that new supervisors take designated supervision courses and that they document and report their supervision in special forms in rich detail. There is also a growing instruction literature on supervision and on how to be a PhD student and ‘get the degree’ ( Phillips and Pugh 2005 ; Burman 2018 ). However, there is still a very small literature on the quality dimensions of supervision, and the same goes for the seminar. These tend to be protected spheres, not (yet) discovered as reform objects for explicit quality enhancement measures. Instead, the literature that does exist tends to emphasize the normative content of the quality culture—for historians, say, objectivity, completeness, truthfulness, context, nuance, representativeness, literary efficacy, etc—and disregard the institution and its rituals, which tend to be left to local custom ( Bergenheim and Ågren 2008 ).

As graduate training has become normalized and, especially in the Nordic countries, costly with salaried PhDs, it has also become a subject of interest to the government and others. External bodies want more PhDs, hired for particular purposes, and they want them delivered at a certain pace, making passage time a critical issue. This has been widely debated, and descriptions of the quite rapid transition of PhD training from an exclusively obscure academic enterprise into a major concern for industrial innovation and regional development can be found in many places (see e.g. McCulloch 2022). Some of these changes apply in the humanities as well: formalized and frequent reporting, external monitoring of progress, pluralist supervisory teams rather than individual senior professors. However, some of these demands weigh less heavily on the humanities, which probably means that in humanities fields the supervision situation likely still reflects internal professional norms of research quality to a very high degree.

Similar to academic seminars, supervision relies on tacit and institutionally inherited knowledge to uphold and transmit research quality notions from within, repeatedly reinforcing them for new generations of scholars. Compared with the more formal aspects of PhD examination, supervision is focused on the continuous development and transmission of research quality norms within PhD training. PhD supervision provides a rich, yet underexplored, domain for studying quality notions from within. It offers insights into how quality is practised, transmitted and adapted over time within the specific quality culture in which it is carried out.

In most disciplinary journals, there is a book review section, with implicit clues or explicit statements on what research quality in the particular field is or ought to be. Various forms of peer reviewing of books have been noted as a quality-ensuring practice, particularly important within the humanities, especially those disciplines that have a strong tradition of monograph publication (Hammarfelt, Hammar and Francke 2021). In previous literature on book reviews and quality, more general aspects are addressed, such as the practices of (double) blind peer review or the editorial function of scholars working as ‘gatekeepers’ (Hamann and Beljean 2017; Foss Hansen 2022). Book reviews, however, tend not to assess quality on any other level than that of the published book. They very rarely relate the quality of the book to the performance of the research environment or the institution. It is the work of the author or authors (or editor/s) that is up for scrutiny. Book reviews tend to spend little energy on quantitative measurements. What book reviews do, though, is show the repertoire of virtues and skills (and their opposites, vices and sloppiness) of the profession or disciplinary practice of which the journal is part and is also committed to sustain. Book reviews therefore tend to serve as the continuation into print of the professional virtues and skills that originate in the seminar room or in the supervision or mentoring of young scholars by senior members of the academic community, linking epistemic virtues and the character of the author (ten Hagen 2022; Paul 2022b). The cultural-section book reviews in the daily press, an institution that developed significantly during the twentieth century, also contribute to setting and upholding standards of the humanities to an extent that is unique and pertains to no other research field. This situation has ramifications for the outreach and ‘impact’ dimensions of the humanities, as made evident through studies on public media reviews (Östling 2020; for the wider public role of the academic popular book, see Mandler 2019 and Östling, Jansson and Svensson Stringberg 2022, chapter 3).

Book reviews provide an entry point for understanding quality notions from within, as they make both implicit and explicit statements about what constitutes quality and how such notions are communicated within a field. With the tradition of publishing results in monographs being particularly strong in the humanities, they offer a form of quality assurance through the evaluation of published work. Additionally, book reviews focus on the quality notions of particular books or individual authors, rather than the larger research environment they might belong to. This makes it possible to highlight professional skills and virtues of the field that are essential for the practice of field-type research quality notions.

Some scholars write their memoirs, usually at the end of a long career. This may be a way of ‘balancing the books’ or manifesting an oeuvre that may have been significant but not so easy for the wider public to notice, let alone understand or appreciate. As these statements tend to come from scholars at the end of a long career, memoirs tend to represent particular understandings of quality notions. Memoirs, after all, are written by people who believe that they have something important to say and who, usually, have had overall success in the system. The academic memoir literature thus mostly consists of works by those who stayed in academia for longer parts of their careers, which is why the genre provides unique insights into the emergence of field-type quality notions.

Statements on the past, written in the author’s present, can be used to understand quality and evaluation from a longer perspective, also considering how the practices of evaluation might relate to personal experiences in general. This is how we understand the academic memoir as a way to grasp a lifelong quality trajectory. Memoirs give insight into how the scholar remembers situations where research quality notions were negotiated. As opposed to an interview, where specific questions are posed and responded to, the content of the memoir is primarily created by the authoring scholar.

Additionally, memoirs often give thorough descriptions of the emotional aspects of evaluative situations, an aspect typically overlooked in studies on research evaluation. This interconnection is visible, for example, in the title of the historian Rolf Torstendahl’s (2011) memoirs, Done, Thought, Felt, and studies on academic work have pointed to the central role of emotions (Ehn and Löfgren 2004, 2007; Bloch 2012).

Making use of this material, which usually was not consciously composed as a way to discuss or defend certain quality notions within a field, we learn more about how quality has been practised from within and how humanities scholars have understood changes in quality cultures on a biographical level.

Similar to the academic memoir is the tradition of the Festschrift, or ‘essays in honour of’, ‘essays presented to’. At the beginning of the twentieth century, the term ‘Festschrift’ was used particularly for honouring scholars in the humanities and the physical sciences (Horowitz 1991). Typically compiled to celebrate an academic’s career, often in conjunction with a milestone birthday or retirement, a Festschrift includes contributions from colleagues and former doctoral students who have worked closely with the scholar. For a scholar with wide social networks, contributions may also come from non-academics, for example in government, business, culture or other relevant sectors.

The Festschrift serves to acknowledge the impact and quality of an individual’s work within a field, usually a contribution that has been outstanding in producing discoveries or enabling networks. It highlights the virtues of the scholar honoured, and the tradition of the Festschrift has previously been described as a ‘recognition of a set of insights which are deemed of sufficient protean quality as to merit recollection or celebration’ (Horowitz 1991: 236). With the Festschrift, quality notions from within the field(s) to which the researcher has been contributing are presented and further communicated. This means that the Festschrift serves as a useful entry point for understanding the entanglement of an academic biography and the scholarly networks. An important aspect is also the fact that far from everyone gets a Festschrift—rather, it is a reward to those who have made outstanding contributions or have held certain positions.

Related to the celebration of a birthday or other special occasion, there are also collected volumes published when a successful academic has passed away: a Festschrift in memoriam, paying tribute to a scholar on the same principle. There is, of course, a significant difference between writing for the living and consecrating someone’s life trajectory in hindsight. The in-memoriam Festschrift relates to the evaluation that takes place in an obituary, which has previously been studied by Hamann (2016). Here, the academic life biography consolidates into a linear trajectory, very similar to the CV as an evaluative procedure. The obituary is of course written by somebody else, aiming to describe the ‘messy’ life of a scholar as a legitimate research career. This also means that the authors of obituaries are, like the authors of the Festschrift, following a set of informal rules in order to describe a biography that follows a life trajectory of quality. This is how obituaries can work as an entry point to study quality notions from within.

Academic memoirs, obituaries and Festschriften offer valuable insights into quality notions from within the humanities through personal and professionally moulded reflections. These provide a rich source of qualitative data showcasing how scholars have perceived and narrated changes in norms and practices throughout their careers and how quality notions have been negotiated and institutionalized over time. If used with due regard to their limitations, academic memoirs, obituaries and Festschriften can provide ample evidence of how field-type quality notions have developed over time, in particular how they have been perceived to impact academic life trajectories, practices and communities.

By identifying and analysing a number of empirical entry points where it is possible to discern field-type quality notions of the humanities, we have drawn a map, or at least provided some pointers, for future studies on the emergence of research quality notions within the humanities. Following our entry points, future research on this topic will enable an understanding of quality based on rich, context-based empirical materials.

Myths and anecdotes on quality notions in the humanities are commonplace and often idealized or derogatory, which is why it will be helpful to compare these descriptions with sound and solid evidence of how humanities scholars have argued and practised quality notions ‘from within’. We have provided some examples of how this work could be carried out with the help of observations, mostly from Sweden. Through our compilation of possible entry points, we have illustrated what sort of empirical material is lacking in the discussions on research evaluation and suggested how this material could contribute to broadening our understanding of field-type research quality notions and research quality more generally. This also brings insights into the marginalization of the humanities in research policy and evaluation discourse by pointing out some of the missing empirical material and processes in these discussions.

These are challenging empirical questions which require in-depth research. We need to dig deeper into the seminar and the supervision as 'sites of quality', or 'proof-spots', to paraphrase Thomas Gieryn's (2018) concept of 'truth-spots'. Notably, our account has not been geared to outlining a methodological agenda, that is, an account of how to explore each identified entry point empirically. Each entry point will probably require its own procedure or even call for a mixed methods approach. For example, because present-day quality regimes are recent enough for senior scholars to recall those that preceded them, interview accounts provide a window not only into contemporary field-type quality notions but also into their progressions of change (e.g. Lucas 2006; Salö 2017). It will thus be necessary to collect qualitative evidence by asking academics about their experiences of practices such as seminars and supervision. This should be supplemented by gathering data about output, but also about hiring procedures, promotions, finances and other areas pertaining to the quality of the wider intellectual environment of which the seminar is part. This may in some instances require field studies, ethnographies or other forms of deeper inquiry into humanities institutions. These may resemble the laboratory life tradition of STS studies of the experimental sciences (e.g. Latour and Woolgar 1979; Knorr Cetina 1981, 1999; Traweek 1988; Rabinow 1996), but they would not focus primarily on the epistemological questions of the nature and formation of scientific knowledge. Instead, fieldwork into the quality cultures of the humanities should be driven by the need to understand how humanities sites of practice function as spaces for establishing and negotiating standards of quality. This may hinge on profound epistemological and ontological questions, but the main purpose would be to enrich our understanding of quality from within.

To summarize our entry points: academic seminars emphasize the transmission of implicit, culturally embedded quality norms within a disciplinary community. Supervision practices, while increasingly formalized, continue to uphold and instill these norms through close mentorship and academic traditions. Book reviews offer both explicit and implicit descriptions of field-type quality notions, and particularly reflect the publishing patterns of many humanities disciplines, with their traditions of publishing findings in monographs rather than articles. Book reviews also offer insights into how research quality notions are transmitted to public debates, for example through reviews of research in journals intended for audiences beyond academia. Similarly, public debates offer an entry point to how humanities scholars have defended and promoted discipline-specific criteria of quality.

Another entry point suggested in this article is recruitment procedures, which can offer detailed insights into how an evaluative group has reasoned around quality notions. Field-type internal evaluations, in turn, offer an entry point for understanding how faculty members as well as PhD students reason about quality notions in their institutional environment; these can also serve as data for understanding the historical development of research quality notions at the institutional level.

Reflections on research quality notions deriving from what we have labelled academic life histories are accessible through interviews, but also through memoirs, obituaries and Festschriften. By analysing these personal narratives, it is possible to trace quality notions and their impact on academic life histories over longer time periods.

By emphasizing the value of broadening the scope of empirical material, the entry points identified also carry implications for the study of research quality across various fields. For instance, PhD training and academic memoirs would serve as highly relevant, albeit currently overlooked, entry points to field-type research quality notions and could also be used for comparing practices across fields. They also invite exploration of additional entry points in certain fields, such as the natural, technical or social sciences, and how and why these have changed over time.

In summary, we have identified and critically examined several inroads to the study of field-type quality notions. We have outlined how an empirically grounded research agenda around this object can be fruitfully pursued, offering perspectives rooted within the humanities. We believe that these entry points—individually or jointly implemented—can provide much-needed insight into quality from within .

This work was supported by Vinnova, the Swedish innovation agency, through the project Making Universities Matter (2015–23), Grant Number 2019–03679, and by the Research Council of Norway, Grant Number 256223 (the Centre for Research Quality and Policy Impact Studies, R-QUEST) (2018–25).

Conflict of interest statement. None declared.

However, it should be noted that in recent years (the most recent data covering the period 2019 to 2021), this general perception of both low R&D intensity and a low level of performance has been contrasted by the fact that Swedish humanities research has scored high in bibliometric surveys of publication impact and is now at the very top among all research fields in Sweden. This is a remarkable contrast to past performance, and the reasons have not yet been thoroughly analysed, although they may be the result of responsiveness in the humanities to incentives towards international journal publication, in combination with a general weakening of impact figures in most other Swedish research fields (Swedish Research Council (VR) 2023: 78–80).

Participating researchers came from several universities and institutes. A report from the project was Humanisterna och framtidssamhället, edited by Ekström and Sörlin (2011). The gist of this project was a critique of the still widespread 'crisis' trope in the understanding of the humanities. On its roots, see Plumb (1964) and more recent studies, e.g. Östh Gustafsson (2022); Reitter and Wellmon (2021).

Examples of similar debates close in time are those in Historisk tidskrift (history) in 2008, in Kulturella perspektiv (ethnology) and in interdisciplinary humanities journals such as Glänta (2005). Additional debates have previously been described by Tunlid (2008, 2022).

Baldwin M. (2018) 'Scientific Autonomy, Public Accountability, and the Rise of "Peer Review" in the Cold War United States', Isis, 109: 538–58.


Benner M. (2011) 'In Search of Excellence? An International Perspective on Governance of University Research', in Göransson B., Brundenius C. (eds) Universities in Transition: The Changing Role and Challenges for Academic Institutions, pp. 11–24. Ottawa: Springer.


Benner M., Holmqvist M., eds (2023) Universities under Neoliberalism: Ideologies, Discourses and Management Practices. New York: Routledge.

Benneworth P., Gulbrandsen M., Hazelkorn E. (2016) The Impact and Future of Arts and Humanities Research. Palgrave Macmillan.

Bergenheim Å., Ågren K., eds (2008) Forskarhandledares robusta råd [Robust Advice from Research Supervisors]. Lund: Studentlitteratur.

Björck H. (2013) Om kollegialitet [On Collegiality]. Stockholm: Sveriges Universitetslärarförbund.

Bloch C. (2012) Passion and Paranoia: Emotions and the Culture of Emotion in Academia. Abingdon, UK: Taylor & Francis Group.

Bod R., Kursell J., Maat J., Weststeijn T. (2016) 'A New Field: History of Humanities', History of Humanities, 1: 1–8.

Budtz Pedersen D., Følsgaard Grønvad J., Hvidtfeldt R. (2020) 'Methods for Mapping the Impact of Social Sciences and Humanities: A Literature Review', Research Evaluation, 29: 4–21.

Burman Å. (2018) The Doctoral Student Handbook: Master Effectiveness, Reduce Stress: Finish on Time. Stockholm: Finish on Time Publications.

Cronqvist M., Maurits A. (2016) Det goda seminariet: Forskarseminariet som lärandemiljö och kollegialt rum [The Good Seminar: The Research Seminar as a Learning Environment and Collegial Space]. Göteborg: Makadam.

Dahler-Larsen P. (2019) Quality: From Plato to Performance. Palgrave Macmillan.

Ehn B., Löfgren O. (2004) Hur blir man klok på universitetet? [How Does One Become Wise at the University?]. Lund: Studentlitteratur.

Ehn B., Löfgren O. (2007) 'Emotions in Academia', in Wulff H. (ed.) The Emotions: A Cultural Reader. Oxford, UK: Berg Publishers.

Ekström A., Östh Gustafsson H. (2022) 'Introduction: Politics of Knowledge and the Modern History of the Humanities', in Ekström A., Östh Gustafsson H. (eds) The Humanities and the Modern Politics of Knowledge: The Impact and Organization of the Humanities in Sweden, 1850-2020, pp. 7–35. Amsterdam: Amsterdam University Press.

Ekström A., Sörlin S. (2011) Humanisterna och framtidssamhället [The Humanists and the Future Society]. Stockholm: Institutet för framtidsstudier.

Ekström A., Sörlin S. (2012) Alltings mått: Humanistisk kunskap i framtidens samhälle [Man is the Measure of All Things: Humanities Knowledge in the Future Society]. Stockholm: Norstedts.

Ekström A., Sörlin S. (2022) 'The Integrative Humanities and the Third Research Policy Regime', in Benner M., Marklund G., Schwaag Serger S. (eds) Smart Policies for Societies in Transition, pp. 189–212. Cheltenham/Northampton: Edward Elgar Publishing.

Elvebakk B., Enebakk V. (2006) 'Kunnskapsløst kunnskapsløft', in Hagen E. B., Johansen A. (eds) Hva skal vi med vitenskap? 13 innlegg fra striden om tellekantene [What Do We Need Science For? 13 Contributions to the Dispute over Publication Counting]. Oslo: Universitetsforlaget.

Engwall L., Edlund P., Wedlin L. (2023) 'Who Is to Blame? Evaluations in Academia Spreading through Relationships among Multiple Actor Types', Social Science Information, 61: 439–56.

Forsberg E., Geschwind L., Levander S., Wermke W. (2022) Peer Review in an Era of Evaluation: Understanding the Practice of Gatekeeping in Academia. Cham: Palgrave Macmillan.

Forskningsberedningen (1997) Röster om humaniora [Voices on the Humanities]. Stockholm: Utbildningsdepartementet.

Foss Hansen H. (2022) 'The Many Faces of Peer Review', in Forsberg E., Geschwind L., Levander S., Wermke W. (eds) Peer Review in an Era of Evaluation: Understanding the Practice of Gatekeeping in Academia, pp. 107–26. Cham: Palgrave Macmillan.

Franssen T., Wouters P. (2019) 'Science and Its Significant Other: Representing the Humanities in Bibliometric Scholarship', Journal of the Association for Information Science and Technology, 70: 1124–37.

Ganuza N., Salö L. (2023) 'Boundary-Work and Social Closure in Academic Recruitment: Insights from the Transdisciplinary Subject Area Swedish as a Second Language', Research Evaluation, 32: 515–25.

Gerholm L., Gerholm T. (1992) Doktorshatten: En studie av forskarutbildningen inom sex discipliner vid Stockholms universitet [The Doctoral Hat: A Study of Doctoral Education in Six Disciplines at Stockholm University]. Stockholm: Carlsson.

Geschwind L., Larsson K. (2008) Om humanistisk forskning: Nutida villkor och framtida förutsättningar [On Humanities Research: Current Conditions and Future Prospects], 81. Stockholm: SISTER.

Gieryn T. F. (2018) Truth-Spots: How Places Make People Believe. Chicago: The University of Chicago Press.

Glänta (2005) 'Humaniora', 2005: 1–2.

Guillory J. (2005) 'Valuing the Humanities, Evaluating Scholarship', Profession, 2005: 28–38.

Gunvik-Grönbladh I. (2014) Att bli bemött och att bemöta: En studie om meritering i tillsättning av lektorat vid Uppsala universitet [To Be Recognised and to Recognise: A Study on Merit in the Appointment of Lectureships at Uppsala University]. Uppsala: Uppsala University.

Hamann J. (2016) '"Let Us Salute One of Our Kind": How Academic Obituaries Consecrate Research Biographies', Poetics, 56: 1–14.

Hamann J. (2019) 'The Making of Professors: Assessment and Recognition in Academic Recruitment', Social Studies of Science, 49: 919–41.

Hamann J., Beljean S. (2017) 'Academic Evaluation in Higher Education', in Shin J., Teixeira P. (eds) Encyclopedia of International Higher Education Systems and Institutions. Dordrecht: Springer.

Hamann J., Kaltenbrunner W. (2022) 'Biographical Representation, from Narrative to List: The Evolution of Curricula Vitae in the Humanities, 1950 to 2010', Research Evaluation, 31: 438–51.

Hammar I., Östling J. (2021) 'Introduction: The Circulation of Knowledge and the History of Humanities', History of Humanities, 6: 595–602.

Hammarfelt B., de Rijcke S. (2015) 'Accountability in Context: Effects of Research Evaluation Systems on Publication Practices, Disciplinary Norms, and Individual Working Routines in the Faculty of Arts at Uppsala University', Research Evaluation, 24: 63–77.

Hammarfelt B. (2014) 'Using Altmetrics for Assessing Research Impact in the Humanities', Scientometrics, 101: 1419–30.

Hammarfelt B. (2016) 'Beyond Coverage: Toward a Bibliometrics for the Humanities', in Ochsner M., Hug S. E., Daniel H. (eds) Research Assessment in the Humanities: Towards Criteria and Procedures, pp. 115–31. Cham: Springer International Publishing.

Hammarfelt B. (2017) 'Recognition and Reward in the Academy: Valuing Publication Oeuvres in Biomedicine, Economics and History', Aslib Journal of Information Management, 69: 607–23.

Hammarfelt B., Hammar I., Francke H. (2021) 'Ensuring Quality and Status: Peer Review Practices in Kriterium, a Portal for Quality-Marked Monographs and Edited Volumes in Swedish SSH', Frontiers in Research Metrics and Analytics, 6: 740297.

Hammarfelt B., Rushforth A. D., de Rijcke S. (2020) 'Temporality in Academic Evaluation: "Trajectoral Thinking" in the Assessment of Biomedical Researchers', Valuation Studies, 7: 33–63.

Holm P., Jarrick A., Scott D. (2015) Humanities World Report 2015. Basingstoke: Palgrave Macmillan.

Horowitz I. L. (1991) Communicating Ideas: The Politics of Scholarly Publishing, 2nd edn. Abingdon: Routledge.

Hug S., Ochsner M. (2014) 'A Framework to Explore and Develop Criteria for Assessing Research Quality in the Humanities', International Journal of Education Law and Policy, 10: 55–68.

Hug S., Ochsner M., Daniel H. D. (2013) 'Criteria for Assessing Research Quality in the Humanities: A Delphi Study among Scholars of English Literature, German Literature and Art History', Research Evaluation, 22: 369–83.

Ingold T. (2000) The Perception of the Environment: Essays on Livelihood, Dwelling and Skill. Routledge.

Jonsson I., Edlund L. E., Holm E., Janlert L. E., Lundgren B., Puranen P., Sörlin S., Sedlacek P. (1999) Utredning om humanistisk forskning och utbildning vid Umeå universitet [Report on Humanities Research and Education at Umeå University]. Umeå: Umeå universitet.

Jonsson I., Ahlsén E., Björkstrand G., Karlsson F. (2004) Rapport om resultat av en utvärdering av forskning och forskarutbildning inom området för humaniora och teologi vid Lunds universitet [Report on the Results of an Evaluation of Research and Postgraduate Education in the Field of Humanities and Theology at Lund University]. Lund: Lunds universitet.

Karlsohn T. (2016) 'The Academic Seminar as Emotional Community', Nordic Journal of Studies in Educational Policy, 2016: 33724.

Keeney C. B., Wells B. H. (1964) Report of the Commission on the Humanities. New York: The American Council of Learned Societies.

Knorr Cetina K. (1981) The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science. Oxford: Pergamon Press.

Knorr Cetina K. (1999) Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press.

Krämer F. (2018) 'Shifting Demarcations: An Introduction', History of Humanities, 3: 5–14.

Lamont M. (2009) How Professors Think: Inside the Curious World of Academic Judgment. Cambridge, MA: Harvard University Press.

Langfeldt L., Nedeva M., Sörlin S., Thomas D. A. (2020) 'Co-Existing Notions of Research Quality: A Framework to Study Context-Specific Understandings of Good Research', Minerva, 58: 115–37.

Larsson S. (2009) 'An Emerging Economy of Publications and Citations', Nordisk Pedagogik, 29: 34–52.

Latour B., Woolgar S. (1979) Laboratory Life: The Construction of Scientific Facts. Beverly Hills, CA: Sage.

Lucas L. (2006) The Research Game in Academic Life. Berkshire: Open University Press.

Mandler P. (2019) 'Good Reading for the Million: The "Paperback Revolution" and the Co-Production of Academic Knowledge in Mid Twentieth-Century Britain and America', Past & Present, 244: 235–69.

McCulloch A. (2022) 'In Defence of History: The Changing Context for Supervision' <https://supervisingphds.wordpress.com/2022/01/11/in-defence-of-history-the-changing-context-for-supervision/> accessed 23 Aug 2022.

Müller K. (2024) 'Responsive Research Quality Articulations of the Humanities', in Mattsson P., Perez Vico E., Salö L. (eds) Making Universities Matter: Collaboration, Engagement, Impact, pp. 165–84. Springer.

Nästesjö J. (2021) 'Navigating Uncertainty: Early Career Academics and Practices of Appraisal Devices', Minerva, 59: 237–59.

Nilsson R. (2009) God vetenskap: Hur forskares vetenskapsuppfattningar uttryckta i sakkunnigutlåtanden förändras i tre skilda discipliner [Good Science: How Researchers' Conceptions of Science, as Expressed in Expert Assessments, Change in Three Different Disciplines]. Gothenburg: Gothenburg Studies in the History of Science and Ideas.

Nordin J. (2008) 'Historisk tidskrift i nutid och framtid: Några reflektioner över läsarsynpunkter, bibliometri och open access' [Historisk tidskrift Now and in the Future: Some Reflections on Reader Feedback, Bibliometrics and Open Access], Historisk tidskrift, 128: 601–20.

Nylander E., Aman R., Hallqvist A., Malmquist A., Sandberg F. (2013) 'Managing by Measuring: Academic Knowledge Production under the Ranks', Confero: Essays on Education Philosophy and Politics, 1: 5–18.

Ochsner M., Galleron I., Ionescu A. (2017a) List of Projects on SSH Scholars' Notions of Research Quality in Participating Countries <http://enressh.eu/wp-content/uploads/2017/09/Report_Quality_Projects.pdf> accessed 5 Apr 2023.

Ochsner M., Hug S., Galleron I. (2017b) 'The Future of Research Assessment in the Humanities: Bottom-Up Assessment Procedures', Palgrave Communications, 3: 1–12.

Ochsner M., Hug S., Daniel H. D., eds (2016) Research Assessment in the Humanities: Towards Criteria and Procedures. Springer International Publishing.

Odén B. (1991) Forskarutbildningens förändringar 1890-1975: Historia, statskunskap, kulturgeografi, ekonomisk historia [Changes in Postgraduate Studies, 1890-1975: History, Political Science, Human Geography, Economic History]. Lund: Lund University Press.

Östh Gustafsson H. (2022) 'The Humanities in Crisis: Comparative Perspectives on a Recurring Motif', in Paul H. (ed.) Writing the History of the Humanities: Questions, Themes and Approaches, pp. 65–83. London: Bloomsbury Academic.

Östling J. (2020) 'En kunskapsarena och dess aktörer: Under strecket och kunskapscirkulation i 1960-talets offentlighet' [An Arena of Knowledge and Its Actors: Under strecket and the Circulation of Knowledge in the 1960s Public Sphere], Historisk tidskrift, 140: 95–124.

Östling J., Jansson A., Svensson Stringberg R. (2022) Humanister i offentligheten: Kunskapens aktörer och arenor under efterkrigstiden [Humanities Scholars in Public: The Actors and Arenas of Knowledge in the Postwar Era]. Göteborg: Makadam.

Paul H. (2022a) Writing the History of the Humanities: Questions, Themes, and Approaches. London: Bloomsbury Academic.

Paul H. (2022b) Historians' Virtues: From Antiquity to the Twenty-First Century. Cambridge: Cambridge University Press.

Perez Vico E., Sörlin S., Hanell L., Salö L. (2024) 'Valorizing the Humanities: Impact Spaces, Acting Spaces, and Meandering Knowledge Flows', in Mattsson P., Perez Vico E., Salö L. (eds) Making Universities Matter: Collaboration, Engagement, Impact, pp. 211–32. Springer.

Phillips E., Pugh D. (2005) How to Get a PhD: A Handbook for Students and Their Supervisors, 4th edn. Maidenhead: Open University Press.

Plumb J. H., ed. (1964) Crisis in the Humanities. London: Penguin.

Princeton Humanities Council, History of the Seminars <https://humanities.princeton.edu/humanities-council-programs/public-lectures/gauss-seminars-in-criticism/history-of-the-gauss-seminars/> accessed 23 Aug 2022.

Rabinow P. (1996) Making PCR: A Story of Biotechnology. Chicago: University of Chicago Press.

Reitter P., Wellmon C. (2021) Permanent Crisis: The Humanities in a Disenchanted Age. Chicago: University of Chicago Press.

Riis U., Hartman T., Levander S. (2011) Darr på ribban: En uppföljning av 1999 års befordringsreform vid Uppsala universitet [A Follow-Up of the 1999 Promotion Reform at Uppsala University]. Uppsala: Acta Universitatis Upsaliensis.

Salö L. (2017) The Sociolinguistics of Academic Publishing: Language and the Practices of Homo Academicus. New York: Palgrave Macmillan.

Salö L., Sörlin S., Benner M. (2021) 'Humanvetenskapernas kunskapspolitik: En inledning' [The Knowledge Politics of the Human Sciences: An Introduction], in Salö L. (ed.) Humanvetenskapernas verkningar, pp. 9–34. Stockholm: Dialogos Förlag.

Small H. (2013) The Value of the Humanities. Oxford: Oxford University Press.

Sörlin S. (2007) 'Funding Diversity: Performance-Based Funding Regimes as Drivers of Differentiation in Higher Education Systems', Higher Education Policy, 20: 413–40.

Sörlin S. (2018) 'Humanities of Transformation: From Crisis and Critique towards the Emerging Integrative Humanities', Research Evaluation, 27: 287–97.

Sörlin S., Johansson S., Karlsson F., Montgomery H., Nordenstam T., Skafte Jensen M. (2001) Den humanistiska cirkelns kvadratur: Om humanioras möjligheter och framtid: Rapport från Humanistiska fakultetsnämndens nordiska bedömargrupp [Squaring the Circle of the Humanities: On the Possibilities and Future of the Humanities]. Gothenburg University: Faculty of the Humanities.

Swedish Research Council (VR) (2023) The Swedish Research Barometer 2023: Swedish Research in International Comparison.

ten Hagen S. (2022) 'Evaluating Knowledge, Evaluating Character: Book Reviewing by American Historians and Physicists (1900–1940)', History of Humanities, 7: 251–77.

Torstendahl R. (2011) Gjort, tänkt, känt [Done, Thought, Felt]. Stockholm: Carlsson.

Traweek S. (1988) Beamtimes and Lifetimes: The World of High Energy Physicists. Cambridge, MA: Harvard University Press.

Tunlid A. (2008) Humanioras kris: Om självförståelse, samhällsrelevans och forskningspolitik [The Crisis of the Humanities: On Self-Understanding, Societal Relevance and Research Policy]. Lund: Mer forskning för pengarna.

Tunlid A. (2022) '"Humanities 2000": Legitimizing Discourses of the Humanities in Public Debate and Research Policy at the Turn of the Century', in Ekström A., Östh Gustafsson H. (eds) The Humanities and the Modern Politics of Knowledge: The Impact and Organization of the Humanities in Sweden, 1850-2020, pp. 253–74. Amsterdam: Amsterdam University Press.

Whitley R. (2007) 'Changing Governance of the Public Sciences: The Consequences of Establishing Research Evaluation Systems for Knowledge Production in Different Countries and Scientific Fields', in Whitley R., Gläser J. (eds) The Changing Governance of the Sciences, pp. 3–27. New York: Springer.

Widmalm S. (2013) 'Innovation and Control: Performative Research Policy in Sweden', in Rider S., Hasselberg Y., Waluszewski A. (eds) Transformations in Research, Higher Education and the Academic Market: The Breakdown of Scientific Thought, pp. 39–52. New York: Springer.


Research quality – not all research is good research, but how do you tell?


“Research says…”, but does it? Research quality is an important factor in deciding whether to use a study, but how do you know how good a particular piece of research really is?

Science proves nothing 


Lots of people like quoting research. Usually the quote is vague: something like “There is a study that proves x”. There are a number of problems with using research in this way. The first and most obvious issue is: what research? A vague reference to some study could refer to a 4th-grade children’s school project, someone’s ideas written on the back of an envelope, or a major international study conducted by subject matter experts and professional researchers. There is usually no way of telling what the research quality is until we know which study it actually is, how the research was conducted and by whom.

For example, there is a big difference in research quality between a study that interviews 5 people in the same office and a study that observes the actual behaviour of 2,000 employees over time, rather than relying on what they say their behaviour is.

The other issue here is the idea that research is there to ‘prove’ things. It isn’t. It is trying to get as close to the truth about a particular topic as possible, but that is very different to ‘proving something to be true’. To say something has been ‘proved’ means that there is no doubt left, that this is the truth.

Science doesn’t work like that [i]. Life is complex. There are rarely simple causes of things, and finding real causal relationships is notoriously hard. Take ‘smoking causes lung cancer’. Firstly, not everyone who smokes will get lung cancer. There are other factors which, when combined with smoking, make the chances of developing cancer more or less likely: excessive drinking, lack of exercise, living in a polluted environment, having a diet of fast food and fizzy drinks, and so on. Some people have a greater genetic susceptibility to the carcinogens contained in smoke than others. Even then, all the smoke does is help to create the conditions within the lungs that start changes in the person’s physiology that can then result in cancer. Additionally, there are many types of cancer.

The best we can say is that smoking significantly increases the chance of developing cancer compared to people who don’t smoke. But it’s complex and we don’t have all the answers.

There are rarely simple causes to things – life is complex

On an organisational note, saying something like ‘it has been proved that transformational leadership is better than transactional leadership’ has the same problem. There are so many variables involved that it is impossible to ‘prove’. For example, a poor transformational leader could be worse than a good transactional leader, or transformational leadership may not work in certain situations or with certain people (which appears to be the case). Notice I say, ‘appears to be the case’, not ‘has been proved’. There is always an element of doubt and, as research progresses and gets better, we start to see flaws in our research.

The problem with life, organisations, people etc. is that the number of variables or factors involved in the relationship between most things is so complex it is really hard to unravel.


Does that mean science is useless if it doesn’t prove things? No, not at all. Think about all the life-saving drugs that have been developed, for example. Does it mean the drugs work all the time and in every case? No, because there are so many variables, which is why most medicines come with a leaflet to tell you to stop taking them if you get certain reactions.

So, what researchers do when they are testing something is, firstly, try to work out what factors may be involved (usually from previous research). They then form a hypothesis: something like ‘x is related to y in z direction or way’. For example: higher levels of work engagement result in higher levels of productivity. Crucially, they do not test that statement directly. Instead, the researcher turns it around and tests the null hypothesis: for example, engagement (however that is measured) does not result in higher productivity (however that is measured). If we then find that the null hypothesis cannot be sustained, we reject it and accept the hypothesis.

Good research

The reason for this is that it prevents things like confirmation bias, where people start looking for the answer they want or expect. This way, researchers are trying to break their hypothesis. If they can’t break it, the hypothesis is accepted. And this is important: it is only accepted for now. It is always possible (and this happens all the time) that someone else comes along and either finds a flaw in your findings or research method, or discovers a relationship or intervening factor you hadn’t. Science and research are dynamic and constantly changing.
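To make that logic concrete, here is a minimal Python sketch using invented numbers. The engagement example is hypothetical, and scipy’s two-sample t-test simply stands in for whatever analysis a real study would use:

    import numpy as np
    from scipy import stats

    # Invented productivity scores for two hypothetical groups of employees
    rng = np.random.default_rng(42)
    high_engagement = rng.normal(loc=78, scale=8, size=50)
    low_engagement = rng.normal(loc=72, scale=8, size=50)

    # The two-sample t-test assumes the null hypothesis: no difference between groups
    t_stat, p_value = stats.ttest_ind(high_engagement, low_engagement)

    # A small p-value means the observed difference would be unlikely if the
    # null hypothesis were true, so we reject the null. We never 'prove' the
    # hypothesis; we only fail to break it.
    if p_value < 0.05:
        print(f"Reject the null hypothesis (p = {p_value:.4f})")
    else:
        print(f"Cannot reject the null hypothesis (p = {p_value:.4f})")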

The problem is that just looking at findings doesn’t tell you how good a study is or what its quality is. For example, there is a big difference between someone publishing a single case study based on one particular situation, say a factory, one office or even one person, and a study taking in thousands of people across the world and using robust and valid research and analysis methods.


In evidence-based practice circles, the accuracy and reliability of the research (research quality) is an important factor when deciding whether to include a study in the evidence base for a decision, especially for clinical or engineering decisions where people’s lives are on the line.

Research quality

One way that evidence-based practitioners judge the quality of research is to use the GRADE system or framework [i]. GRADE stands for Grading of Recommendations, Assessment, Development and Evaluations, and is used extensively in medical settings.

GRADE is now also being used in engineering and organisational evidence-based practice to judge research, and it is the basis of many systematic reviews.

When choosing studies to include in a decision, GRADE assigns one of four levels of evidence, or four levels of certainty, to the quality of the evidence/study:

  • Very low – The real effect is probably very different from the reported findings.
  • Low – The true effect is quite likely to be different from the findings of this study.
  • Moderate – The true effect is probably close to the findings of this study.
  • High – A high level of confidence that the findings represent the true effect/reality.

There are five overall factors which are used to help a practitioner work out the level of evidence of a particular study (each is discussed in turn below, and a short illustrative sketch follows the list):

  • Risk of bias
  • Imprecision
  • Inconsistency
  • Indirectness
  • Publication bias

A serious concern on any of these factors downgrades the research quality of the study.
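GRADE certainty ratings come from structured expert judgement rather than an algorithm, but the bookkeeping they imply is easy to illustrate. The Python sketch below is purely illustrative and is not an official GRADE tool: a study starts at a baseline certainty and is downgraded one level for each domain flagged as a serious concern.

    # Illustrative GRADE-style bookkeeping (hypothetical, not an official tool).
    LEVELS = ["very low", "low", "moderate", "high"]
    DOMAINS = {"risk of bias", "imprecision", "inconsistency",
               "indirectness", "publication bias"}

    def grade_certainty(baseline, serious_concerns):
        # Downgrade one level per flagged domain; never below "very low".
        level = LEVELS.index(baseline)
        for concern in serious_concerns:
            if concern not in DOMAINS:
                raise ValueError("Unknown GRADE domain: " + concern)
            level = max(level - 1, 0)
        return LEVELS[level]

    # A randomised trial starts at "high"; two serious concerns take it to "low".
    print(grade_certainty("high", ["imprecision", "risk of bias"]))

Real GRADE assessments are richer than this: a domain can trigger a one- or two-level downgrade depending on severity, and observational evidence can even be upgraded, for example when an effect is very large.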

Risk of bias

Tool for Assessing Risk of Bias 

The Cochrane Collaboration’s Tool for Assessing Risk of Bias usually uses a 3-point grading system for judging bias, which is an important factor in working out the quality of research:

  • Low risk of bias
  • High risk of bias
  • Unclear risk of bias

The Cochrane Collaboration produced a tool that has become the standard bias risk assessment tool – the Cochrane Collaboration’s Tool for Assessing Risk of Bias – which looks specifically at a range of different biases that can affect a study’s findings. Applying it involves asking whether the study:

  • Uses quality scales. Quality scales are often inherently biased and based on opinions.
  • Reports on its own internal validity, meaning the researchers extensively look for and report any potential biases the method or study might have. In other words, are the methods used in the study likely to lead to bias, or less so?
  • Has actively looked for sources of bias or influence in its results. Double-blind trials, where neither the participants (subjects) nor the researchers know which subject is getting which treatment or is in which group, are considered to be at the least risk of bias.
  • Uses methods the assessor/evaluator understands well enough to make a good judgement about.
  • Presents or represents its data in a way that introduces a risk of bias.
  • Is being put to a use, or applied to a group, that introduces a risk of bias not associated with the study itself.

Imprecision

How precise are the data and methods used for the study?

Inconsistency

Does the study report inconsistencies openly, or does it try to mask them? Are there inherent inconsistencies that haven’t been reported?

Indirectness

How close is the situation or population of the study to the one the findings are being applied to? Does the study speak directly to the organisation or population in question, or is it being used in a more indirect way?

Publication bias

Does the publisher have a stake in the findings? For example, is this a study by a consultancy attempting to show how good its own methods are?

All of these potential biases downgrade the quality of the research study.
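To show how such judgements might be recorded, here is a small, purely illustrative Python structure (not the Cochrane tool itself) that holds per-domain judgements on the three-point scale above and summarises them worst-case, which is one common convention:

    from dataclasses import dataclass, field

    @dataclass
    class RiskOfBiasAssessment:
        # Hypothetical record: domain name -> "low" / "high" / "unclear"
        study: str
        domains: dict = field(default_factory=dict)

        def overall(self):
            # Worst-case summary: any "high" domain taints the whole study;
            # any "unclear" domain leaves the overall judgement unclear.
            ratings = set(self.domains.values())
            if "high" in ratings:
                return "high"
            if "unclear" in ratings:
                return "unclear"
            return "low"

    assessment = RiskOfBiasAssessment(
        study="Hypothetical engagement trial",
        domains={
            "random sequence generation": "low",
            "blinding of participants and personnel": "high",  # not double-blinded
            "selective reporting": "unclear",
        },
    )
    print(assessment.overall())  # prints "high"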

When assessing a study, we are looking for good research that is applicable to the situation to which the findings are being put. It needs to be valid (the methods used actually give good data and are measuring what we think they are measuring) and reliable (you keep getting the same results). Being able to spot bias in all its forms in research studies is essential if you are using research to inform organisational decisions.

Our research briefings, research quality and the review panel

At the end of each of our research briefings we include an assessment/review of the study being reported. We use the following simplified panel to grade the quality of the study under consideration:

  • Research Quality – 3/5 A good literature review and overview of the subject. Based on a meta-analysis, rather than primary research.
  • Confidence – 4/5 Consistent with the current research thinking and developments.
  • Usefulness – 4/5 Particularly useful to HR practitioners.
  • Comments – The current focus on differentiated HR architecture (structures and systems) to support talent management is showing a lot of promise in improving organisational performance generally. 

And we always fully cite which studies we have used, so that you can check them and do further reading.

[i] Guyatt G. H., Oxman A. D., Kunz R., Vist G. E., Falck-Ytter Y., Schünemann H. J. (2008) ‘What is “quality of evidence” and why is it important to clinicians?’, BMJ, 336(7651): 995–8.

Guyatt G. H., Oxman A. D., Vist G. E., Kunz R., Falck-Ytter Y., Alonso-Coello P., et al. (2008) ‘GRADE: an emerging consensus on rating quality of evidence and strength of recommendations’, BMJ, 336(7650): 924–6.

Guyatt G., Oxman A. D., Akl E. A., Kunz R., Vist G., Brozek J., et al. (2011) ‘GRADE guidelines: 1. Introduction – GRADE evidence profiles and summary of findings tables’, Journal of Clinical Epidemiology, 64(4): 383–94.

Guyatt G. H., Oxman A. D., Kunz R., Atkins D., Brozek J., Vist G., et al. (2011) ‘GRADE guidelines: 2. Framing the question and deciding on important outcomes’, Journal of Clinical Epidemiology, 64(4): 395–400.

Balshem H., Helfand M., Schünemann H. J., Oxman A. D., Kunz R., Brozek J., et al. (2011) ‘GRADE guidelines: 3. Rating the quality of evidence’, Journal of Clinical Epidemiology, 64(4): 401–6.

BMJ Best Practice series: What is GRADE? Accessed at https://bestpractice.bmj.com/info/toolkit/learn-ebm/what-is-grade/ on 3 November 2019.



10 Research Question Examples to Guide your Research Project

Published on October 30, 2022 by Shona McCombes. Revised on October 19, 2023.

The research question is one of the most important parts of your research paper, thesis or dissertation. It’s important to spend some time assessing and refining your question before you get started.

The exact form of your question will depend on a few things, such as the length of your project, the type of research you’re conducting, the topic, and the research problem. However, all research questions should be focused, specific, and relevant to a timely social or scholarly issue.

Once you’ve read our guide on how to write a research question, you can use these examples to craft your own.

[Table: ten example research questions, each pairing a weak first draft with an improved revision. The accompanying explanations note what makes the revisions stronger: they are more focused and specific; they avoid broad ‘why’ questions with too many possible answers and broad normative questions that academic research cannot settle; they replace subjective terms such as ‘better’ with clearly defined criteria and narrow the focus to a specific population; they go beyond simple yes/no questions or easily searchable facts towards questions requiring in-depth investigation and an original argument; they integrate two related problems into one focused question; they take a specific, original angle with relevance to current social concerns; and they keep the scope practically feasible, for example by narrowing a cross-country comparison of drunk driving laws to one or two countries.]

Note that the design of your research question can depend on what method you are pursuing. Here are a few options for qualitative, quantitative, and statistical research questions.

[Table: example research questions for qualitative, quantitative and statistical research designs.]




Microbot Medical (MBOT) Gets Quality Certification for its System

Microbot Medical Inc. (MBOT) recently announced the receipt of ISO 13485:2016 certification for its quality management system. This certification underscores the company's commitment to excellence in the development and manufacturing of its LIBERTY Endovascular Robotic Surgical System, marking an essential step toward commercialization and regulatory compliance in both the European Union and the United States.

With this certification, Microbot Medical is well-positioned to advance toward CE mark approval and streamline its transition to the updated FDA regulations, paving the way for future growth and commercialization in key markets.

Microbot Medical is a clinical-stage medical device company focused on developing innovative micro-robotic technologies to enhance patient outcomes and improve access to minimally invasive procedures. The Investigational LIBERTY Endovascular Robotic Surgical System is designed to revolutionize endovascular procedures by eliminating the need for large, costly equipment, reducing radiation exposure and minimizing physician strain. With its potential for remote operation, the LIBERTY system aims to democratize endovascular interventions, making them more accessible and efficient.

Significance of the Certification

The ISO 13485 certification validates the strength of the company's quality management system. Achieving ISO 13485:2016 certification is a critical milestone for Microbot Medical, as it signifies that the company has implemented a rigorous quality management system tailored to the medical device industry. This certification not only validates the company's dedication to maintaining high standards in product development and manufacturing but also aligns Microbot Medical with the stringent regulatory requirements of the European Union's Medical Device Regulation and the FDA's revised Quality Management System Regulation.

Industry Prospects

Per a report by Future Market Insights, the robotic-assisted endovascular systems market was worth $94 million in 2023 and is anticipated to reach $214.7 million by 2033, a CAGR of 8.6%.
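As a quick arithmetic check, those two figures are consistent with the stated rate: growing from $94 million to $214.7 million over the ten years from 2023 to 2033 implies

\[ \mathrm{CAGR} = \left(\frac{214.7}{94.0}\right)^{1/10} - 1 \approx 2.284^{0.1} - 1 \approx 0.086, \]

i.e. roughly 8.6% per year.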

The robust growth will be primarily driven by the increasing adoption of advanced systems that enhance efficiency and precision during procedures, along with the rising demand for innovative, minimally invasive surgeries that reduce patient trauma and recovery time.

Recent LIBERTY Endovascular Robotic Surgical System Developments

This month, Microbot Medical and Emory University agreed to explore potential collaboration on autonomous robotics in endovascular procedures. Emory will lead the feasibility study of integrating the LIBERTY Endovascular Robotic Surgical System with an imaging system to develop an autonomous robotic system. The project aims to combine CT guidance, artificial intelligence, and medical robotics to enhance procedural standardization, efficiency and patient access.

In July, MBOT secured Institutional Review Board approval and signed a clinical trial agreement with Memorial Sloan Kettering Cancer Center (MSKCC) in New York City for its LIBERTY Endovascular Robotic Surgical System. MSKCC will conduct the clinical trial under an Investigational Device Exemption to support future FDA marketing submissions and commercialization. The trial also includes Brigham and Women's Hospital and Baptist Hospital of Miami, which are already enrolled and conducting clinical cases.

In the same month, Microbot Medical announced that the Baptist Hospital of Miami, including Miami Cardiac & Vascular Institute and Miami Cancer Institute, successfully conducted its first clinical procedure using the LIBERTY Endovascular Robotic Surgical System. This follows the recent inclusion of Baptist Hospital as a clinical trial site and marks the second site to perform such a procedure, after Brigham and Women's Hospital. This trial is part of the Investigational Device Exemption for LIBERTY, with results anticipated to support future FDA submissions and commercialization.

Price Performance

Shares of Microbot Medical have decreased 37.8% so far this year against a 7.2% rise in the industry. The S&P 500 has witnessed a 16.4% rise in the same time frame.


Zacks Rank & Key Picks

Currently, Microbot Medical carries a Zacks Rank #3 (Hold).

Some top-ranked stocks in the broader medical space are Universal Health Services (UHS), Quest Diagnostics (DGX) and ABM Industries (ABM). While Universal Health Services sports a Zacks Rank #1 (Strong Buy), Quest Diagnostics and ABM Industries carry a Zacks Rank #2 (Buy) each. You can see the complete list of today's Zacks #1 Rank stocks here.

Universal Health Services has an estimated long-term growth rate of 19%. UHS' earnings surpassed estimates in each of the trailing four quarters, with the average surprise being 14.58%.

Universal Health Services has gained 41.1% compared with the industry's 34.8% rise so far this year.

Quest Diagnostics has an estimated long-term growth rate of 6.20%. DGX's earnings surpassed estimates in each of the trailing four quarters, with the average surprise being 3.31%.

Quest Diagnostics shares have gained 3.7% so far this year compared with the industry's 10.2% rise.

ABM Industries' earnings surpassed estimates in each of the trailing four quarters, delivering an average surprise of 7.34%.

ABM's shares have risen 24.1% so far this year compared with the industry's 11.9% increase.


V. I. Dvorkin. Metrology and Quality Assurance of Quantitative Chemical Analysis, Moscow: Khimiya, 2001, 263 p. (1000 copies)

  • Published: June 2003
  • Volume 58, pages 601–603 (2003)


V. I. Dvorkin. Metrology and Quality Assurance of Quantitative Chemical Analysis, Moscow: Khimiya, 2001, 263 p. (1000 copies). Journal of Analytical Chemistry 58, 601–603 (2003). https://doi.org/10.1023/A:1024132723085


The database contains project information dating back to 1998, including abstracts and links to final reports.


User-friendly tools are needed for undergraduates to learn about component sizing, powertrain integration, and control strategies for student competitions involving hybrid vehicles. A TK Solver tool was developed at the University of Idaho for this purpose. The model simulates each of the dynamic events in the Formula Hybrid Society of Automotive Engineers (FHSAE) competition, predicting average speed, acceleration, and fuel consumption for different track segments. Model inputs included manufacturer's data along with bench tests of electrical and IC engine components and roll-down data. This vehicle performance model was used to design the 2014 vehicle's hybrid architecture, determine the energy allocation, and select the batteries. Model predictions have been validated in full vehicle tests under simulated race conditions. The TK Solver tool has proven effective in making decisions about sizing gasoline and electric power components, establishing an optimal coupling connection between the electric motor and the gasoline engine, selecting and configuring the battery pack, tuning the gasoline engine, and making recommendations for energy management under different driving conditions. The resulting vehicle is being readied to compete in the 2014 FHSAE competition.
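The abstract does not reproduce the model's equations, but the kind of calculation such a tool performs is easy to sketch. The Python fragment below is a minimal longitudinal-dynamics illustration with entirely invented parameters; the actual NIATT model was built in TK Solver and is far more detailed:

    # Minimal longitudinal-dynamics sketch of the kind of calculation a
    # vehicle performance model performs. All parameters are invented.
    MASS = 320.0   # vehicle + driver, kg
    POWER = 30e3   # combined electric + IC power at the wheels, W
    MU = 1.0       # tyre friction limit
    C_RR = 0.015   # rolling resistance coefficient
    CDA = 0.9      # drag coefficient x frontal area, m^2
    RHO = 1.2      # air density, kg/m^3
    G = 9.81

    def time_to_speed(v_target, dt=0.01):
        # Integrate F = m*a until the car reaches v_target (m/s).
        v, t = 0.0, 0.0
        while v < v_target:
            f_drive = min(POWER / max(v, 0.1), MU * MASS * G)  # power- or traction-limited
            f_drag = 0.5 * RHO * CDA * v * v                   # aerodynamic drag
            f_roll = C_RR * MASS * G                           # rolling resistance
            v += (f_drive - f_drag - f_roll) / MASS * dt
            t += dt
        return t

    print(round(time_to_speed(26.8), 1), "s to reach 60 mph")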



The University of Idaho and NIATT can boast of high-quality professors and researchers with a wide variety of research interests. Our faculty engage in research to solve challenging, practical, and relevant transportation problems that have regional and national significance. Faculty integrate their research into their course work.

Clicking on a faculty member's name will take you to a page where you can learn more about them and their research interests.





Periodontics graduate wins 2024 John F. Prichard Graduate Research Competition

  • Contributor
  • Tuesday, August 13, 2024

Dr. Kaleb Esplin

We are thrilled to share that Kaleb C. Esplin, DDS, MS, a 2024 graduate from the periodontics residency program, was awarded 1st place in the Southwest Society of Periodontists’ 2024 John F. Prichard Graduate Research Competition for his research titled: “Peri-implantitis induction and resolution around zirconia versus titanium implants.” This achievement is a testament to his dedication and the quality of his research, which was conducted under the mentorship of Georgios A. Kotsakis, DDS.

The Prichard competition, held during the society’s 2024 Winter Meeting, showcased the exceptional research being conducted by residents across the Southwest region. Nine judges from residency programs and clinical backgrounds rigorously evaluated the abstracts, of which four were then selected for oral presentations on January 26, 2024. The following day, Esplin was announced as the winner, bringing the Prichard award back to the UT Health Science Center San Antonio School of Dentistry.

This achievement marks the eighth time in the last nine years that our program has claimed this prestigious award. Such consistent success underscores the strength of our  periodontics residency program and the extraordinary efforts of our residents and faculty members. We are immensely proud of Esplin and all our residents who continue to push the boundaries of dental research.

Please join us in congratulating Dr. Esplin and celebrating the continued excellence of our periodontics residency program research.



How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data: Not applicable.

Abstract

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in " research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...) " [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as " a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ] . Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods , including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it : “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig.  1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaption and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Fig. 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which by nature can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like.

If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice.

In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet), do they actively disagree with them, or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them.

Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig. 2.

Fig. 2: Possible combination of data collection methods

Icon attributions: “Book” by Serhii Smirnov; “Interview” by Adrien Coquet, FR; “Magnifying Glass” by anggun, ID; “Business communication” by Vectors Market; all from the Noun Project.

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

Fig. 3: From data collection to data analysis

Icon attributions: as in Fig. 2, plus “Speech to text” by Trevor Dsouza; “Field Notes” by Mike O’Brien, US; “Voice Record” by ProSymbols, US; “Inspection” by Made, AU; and “Cloud” by Graphic Tigers; all from the Noun Project.
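
As a concrete (and deliberately minimal) illustration of what coding buys the researcher, the following Python sketch tags invented text segments with codes and then extracts all segments sharing a code across data sources. The segments and codes are made-up examples, not data from any study; real coding is an interpretive act performed in dedicated software, not a dictionary lookup.

    from collections import defaultdict

    # Toy illustration of coding: each raw segment is tagged with one or
    # more short descriptors ("codes"), which makes the material sortable.
    # Segments and codes below are invented examples.
    segments = [
        ("SOP",         "Tele-neurology consults must start within 10 minutes.", ["tele-neurology"]),
        ("observation", "Neurologist joined the consult by video after triage.", ["tele-neurology", "workflow"]),
        ("interview",   "The video cart is often in use on another ward.",       ["tele-neurology", "equipment"]),
    ]

    by_code = defaultdict(list)
    for source, text, codes in segments:
        for code in codes:
            by_code[code].append((source, text))

    # Pull every segment describing a tele-neurology consultation,
    # across all data sources (SOPs, observations, interviews).
    for source, text in by_code["tele-neurology"]:
        print(f"[{source}] {text}")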

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

Fig. 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry.

In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved.

In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on which topics dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].
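
The stopping rule described above can be caricatured in a few lines of code. The Python sketch below is entirely hypothetical: the code pool and the two codes drawn per simulated interview are invented stand-ins for real fieldwork, where saturation is a substantive judgement by the research team rather than a set comparison. It simply runs rounds of "interviews" until a round adds no new codes.

    import random

    # Toy simulation of the iterative collect-analyse cycle: sampling
    # continues in rounds until a round of simulated interviews surfaces
    # no code that is not already known, i.e. saturation.
    random.seed(1)

    CODE_POOL = ["transport barrier", "staffing", "communication",
                 "equipment", "trust", "cost"]

    def collect_round(n_interviews=5):
        """Simulate coding one round of interviews; each yields two codes."""
        codes = set()
        for _ in range(n_interviews):
            codes.update(random.sample(CODE_POOL, 2))
        return codes

    known, rounds = set(), 0
    while True:
        rounds += 1
        new = collect_round()
        if new <= known:          # no relevant new information: saturation
            break
        known.update(new)

    print(f"saturation after {rounds} rounds; {len(known)} distinct codes")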

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of a too large sample size as well as the possibility (or probability) of selecting “ quiet, uncooperative or inarticulate individuals ” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess to which extent the coding approach overlaps between the two co-coders. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants and collected and analysed the data. Experiences even show that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
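
For readers who do encounter such scores, the most common one is Cohen's kappa, which corrects raw agreement for chance: kappa = (po − pe) / (1 − pe), where po is the observed agreement and pe the agreement expected by chance. The following self-contained Python computation uses invented codings (one code per segment) purely to show what goes into the number:

    from collections import Counter

    # Worked toy example of Cohen's kappa for two co-coders who each
    # assigned one code per segment; the codings below are invented.
    coder_a = ["delay", "delay", "staffing", "delay", "staffing", "equipment"]
    coder_b = ["delay", "staffing", "staffing", "delay", "staffing", "delay"]

    n = len(coder_a)
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # observed agreement

    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # chance agreement: summed product of each code's marginal frequencies
    pe = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

    kappa = (po - pe) / (1 - pe)
    print(f"observed {po:.2f}, chance {pe:.2f}, kappa {kappa:.2f}")

As the paragraph above notes, such a value says little about the quality of a qualitative analysis by itself; if reported, it should come with an explanation of what the agreement or disagreement meant for the coding process.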

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points (Table 1)

Research problems suited to qualitative methods:

• Assessing complex multi-component interventions or systems (of change)

• What works for whom when, how and why?

• Focussing on intervention improvement

Most common data collection methods:

• Document study

• Observations (participant or non-participant)

• Interviews (especially semi-structured)

• Focus groups

Data analysis:

• Transcription of audio-recordings and field notes into transcripts and protocols

• Coding of protocols

• Using qualitative data management software

Mixed methods designs:

• Convergent parallel design: qualitative and quantitative strands in parallel

• Explanatory sequential design: quantitative followed by qualitative

• Exploratory sequential design: qualitative followed by quantitative

Quality criteria to look for:

• Checklists

• Reflexivity

• Sampling strategies

• Piloting

• Co-coding

• Member checking

• Stakeholder involvement

Criteria of limited use for assessing qualitative research:

• Protocol adherence

• Sample size

• Randomization

• Interrater reliability, variability and other “objectivity checks”

• Not being quantitative research


Abbreviations

EVT – Endovascular treatment
RCT – Randomised controlled trial
SOP – Standard operating procedure
SRQR – Standards for Reporting Qualitative Research

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding

No external funding.

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Do you need to turn on kitchen exhaust fans when cooking?


By Nelli Saarinen

ABC Lifestyle

Topic: Health

Exhaust fans capture and filter air pollutants emitted while cooking. (Adobe Stock)

If the sound of a kitchen exhaust fan annoys you at the end of a long day, you're not alone.

On social media, videos describing the "absolute sensory nightmare" of kitchen fans rack up thousands of views and comments.

But if your kitchen is not equipped with a range hood or exhaust fan, unpleasant odours and smoke could be damaging your wellbeing: research shows kitchen fans serve an important purpose for both your home and your health.

Do you really need an exhaust fan while cooking?

It's a good idea to use your kitchen's exhaust fan if you have one.

"Cooking results in emissions of all kinds of products which are not necessarily good for our health, so using the exhaust removes them from the air," says Lidia Morawska, director of the International Laboratory for Air Quality and Health at Queensland University of Technology.

Exhaust fans capture and filter steam, odours, smoke, grease and other potential pollutants emitted while cooking.

Professor Morawska says high-temperature frying in particular has been linked to higher emissions.

Standing over the stove and breathing in pollutants could have serious health effects over time.

"We are talking about respiratory system, cardiovascular system, systemic problems," she says.

"Basically, every system in our body is affected."

Even if the exhaust fan doesn't remove all the dangerous particles lingering in the air, any working kitchen fan will significantly reduce the exposure to them, she explains.

Cleaning the filter regularly ensures the kitchen fan keeps working properly. (Adobe Stock)

What can you use instead of a kitchen exhaust fan?

For those without a kitchen rangehood, opening a window or doors is the best alternative, Professor Morawska says.

"If there's no other option, then increasing ventilation by whatever means are available for the space," she says.

Portable air purifiers are another option, but they can be costly and only provide a partial solution, Professor Morawska says.

"The size of the air purifier has to be relevant to the space. If that's the case, yes, it works."

But air purifiers only remove the particles in the air, not the gaseous products also released while cooking, she explains.

The electric vs gas question

Something else to consider is whether your home has an electric or gas stove.

"Electric stoves are much safer from the health point of view than gas stoves," Professor Morawska says.

Electric stoves themselves don't emit any harmful pollutants (the particles come from the cooking itself), whereas gas stoves also emit gas combustion products, she explains.

Gas cooktops are the most common type in Australia, according to a report released by Asthma Australia in 2023.

Their use has been linked to asthma flare-ups and the development of asthma in childhood, the organisation says.

Do kitchen extractor fans need to vent outside?

Kitchen rangehoods come in many different styles. The most common options include freestanding wall canopies, undermount models that are built into the kitchen cabinets, fixed rangehoods and retractable rangehoods that can be pulled out when needed.

Additionally, there are downdraft vents, which are hidden behind the stovetop and rise up while cooking, but US research group Consumer Reports found them to be the least efficient.

Testing by Australian consumer group Choice found wall canopies and fixed rangehoods to be the most efficient, as they cover a larger area over the cooktop.

Another difference has to do with where the air goes after it is sucked into the filter.

Ducted rangehoods suck in the air and vent it outside, whereas recirculation hoods put air back into the room after it has been through the filter. Choice says its testing shows ducted rangehoods are the preferred option.

Rangehood filters need some maintenance to keep them working well.

Aluminium mesh filters can be cleaned at home according to manufacturer instructions, whereas other types of filters will need replacing, Choice says on its website.

Recirculation rangehoods have carbon filters which also need regular replacing; how often depends on the type of cooking and how frequently the rangehood is used.

Every three to six months is the most common manufacturer recommendation, according to Choice.
