

Learning Objective

Differentiate between qualitative and quantitative approaches.

Hong is a physical therapist who teaches injury assessment classes at the University of Utah. With the recent change to online for the remainder of the semester, Hong is interested in the impact on students’ skills acquisition for injury assessment. He wants to utilize both quantitative and qualitative approaches—he plans to compare previous student test scores to current student test scores. He also plans to interview current students about their experiences practicing injury assessment skills virtually. What specific study design methods will Hong use?

Making sense of the evidence

When conducting a literature search and reviewing research articles, it is important to have a general understanding of the types of research and data you can expect from different types of studies.

In this article, we review two broad categories of study methods, quantitative and qualitative, and discuss some of their subtypes, or designs, and the type of data that they generate.

Quantitative vs. qualitative approaches

Quantitative: objective and measurable. Data are gathered in organized, objective ways so that findings can be generalized to other persons or populations.

Qualitative: subjective and structured. Used when inquiry centers around life experiences or meaning; explores the complexity, depth, and richness of a particular situation.

Quantitative research is measurable. It is often associated with a more traditional scientific method of gathering data in an organized, objective manner so that findings can be generalized to other persons or populations. Quantitative designs are based on probabilities or likelihood: they use p-values, power analyses, and other statistical methods to ensure the rigor and reproducibility of results across populations. Quantitative designs can be experimental, quasi-experimental, descriptive, or correlational.

Qualitative research is usually more subjective, although like quantitative research, it also uses a systematic approach. Qualitative research is generally preferred when the clinical question centers around life experiences or meaning. It explores the complexity, depth, and richness of a particular situation from the perspective of the informants, that is, the person or persons providing the information. This may be the patient, the patient’s caregivers, the patient’s family members, and so on. The information may also come from the investigator’s or researcher’s observations. At the heart of qualitative research is the belief that reality is based on perceptions and can be different for each person, often changing over time.

Study design differences

  • Experimental – cause and effect (if A, then B)
  • Quasi-experimental – also examines cause; used when not all variables can be controlled
  • Descriptive – examines characteristics of a particular situation or group
  • Correlational – examines relationships between two or more variables
  • Phenomenology – examines the lived experience within a particular condition or situation
  • Ethnography – examines the culture of a group of people
  • Grounded theory – uses a research problem to discover and develop a theory

Quantitative design methods

Quantitative designs typically fall into four categories: experimental, quasi-experimental, descriptive, or correlational. Let’s talk about these different types. But before we begin, we need to briefly review the difference between independent and dependent variables.

The independent variable is the variable that is being manipulated, or the one that varies. It is sometimes called the ‘predictor’ or ‘treatment’ variable.

The dependent variable is the outcome (or response) variable. Changes in the dependent variables are presumed to be caused or influenced by the independent variable.
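To make the distinction concrete, here is a minimal Python sketch using invented data: the treatment received is the independent variable, and the measured recovery time is the dependent variable.

```python
# Hypothetical data: "treatment" is the independent (manipulated) variable,
# "recovery_days" is the dependent (outcome) variable.
subjects = [
    {"treatment": "exercise_program", "recovery_days": 21},
    {"treatment": "standard_care", "recovery_days": 30},
    {"treatment": "exercise_program", "recovery_days": 18},
    {"treatment": "standard_care", "recovery_days": 27},
]

def mean_outcome(records, group):
    """Average the dependent variable within one level of the independent variable."""
    values = [r["recovery_days"] for r in records if r["treatment"] == group]
    return sum(values) / len(values)

print(mean_outcome(subjects, "exercise_program"))  # 19.5
print(mean_outcome(subjects, "standard_care"))     # 28.5
```

Comparing the outcome across levels of the independent variable is the basic move behind most of the quantitative designs described below.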

Experimental

In experimental designs, there are often treatment groups and control groups. This study design looks for cause and effect (if A, then B), so it requires having control over at least one of the independent, or treatment, variables. An experimental design administers the treatment to some of the subjects (the ‘experimental group’) and not to others (the ‘control group’). Subjects are randomly assigned, meaning that each would have an equal chance of being assigned to the control group or the experimental group. This is the strongest design for testing cause and effect relationships because randomization reduces bias. In fact, most researchers believe that a randomized controlled trial is the only kind of research study from which we can infer cause (if A, then B). The difficulty with a randomized controlled trial is that the results may not be generalizable in all circumstances with all patient populations, so as with any research study, you need to consider the application of the findings to your patients in your setting.
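Random assignment can be sketched in a few lines of Python. Everything here is invented for illustration, and the fixed seed exists only so the example is reproducible:

```python
import random

def randomize(subject_ids, seed=42):
    """Shuffle subjects and split the list in half, giving every subject an
    equal chance of landing in either group while keeping group sizes balanced."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

groups = randomize(range(1, 21))
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```

Real trials use more careful schemes (e.g., blocked or stratified randomization), but the core idea is the same: assignment is left to chance, not to the researcher.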

Quasi-experimental

Quasi-experimental studies also seek to identify a cause and effect (causal) relationship, although they are less powerful than experimental designs. This is because they lack one or more characteristics of a true experiment: for instance, they may not include random assignment, or they may not have a control group. As is often the case in the ‘real world,’ clinical care variables frequently cannot be controlled due to ethical, practical, or fiscal concerns, so the quasi-experimental approach is used when a randomized controlled trial is not possible. For example, if it were found that a new treatment stopped disease progression, it would no longer be ethical to withhold it from others by establishing a control group.

Descriptive

Descriptive studies give us an accurate account of the characteristics of a particular situation or group. They are often used to determine how often something occurs, the likelihood of something occurring, or to provide a way to categorize information. For example, let’s say we wanted to look at the visiting policy in the ICU and describe how implementing an open-visiting policy affected nurse satisfaction. We could use a research tool, such as a Likert scale (5 = very satisfied and 1 = very dissatisfied), to help us gain an understanding of how satisfied nurses are as a group with this policy.
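The Likert-scale example above can be summarized with simple descriptive statistics. A minimal Python sketch, using invented responses:

```python
from collections import Counter
from statistics import mean

# Invented Likert responses (5 = very satisfied ... 1 = very dissatisfied)
# from nurses surveyed after the open-visiting policy change.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

print(mean(responses))     # 3.9 -- average satisfaction for the group
print(Counter(responses))  # how often each rating occurs
```

A descriptive study would stop at summaries like these; it characterizes the group rather than testing a causal claim.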

Correlational

Correlational research involves the study of the relationship between two or more variables. The primary purpose is to explain the nature of the relationship, not to determine the cause and effect. For example, if you wanted to examine whether first-time moms who have an elective induction are more likely to have a cesarean birth than first-time moms who go into labor naturally, the independent variables would be ‘elective induction’ and ‘go into labor naturally’ (because they are the variables that ‘vary’) and the outcome variable is ‘cesarean section.’ Even if you find a strong relationship between elective inductions and an increased likelihood of cesarean birth, you cannot state that elective inductions ‘cause’ cesarean births because we have no control over the variables. We can only report an increased likelihood.   
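As a rough illustration of reporting likelihoods rather than causes, the counts below are invented; the sketch computes each group's cesarean rate and the ratio between them:

```python
# Invented counts for illustration only: 100 first-time moms per group.
induced = {"cesarean": 30, "vaginal": 70}      # elective induction
spontaneous = {"cesarean": 15, "vaginal": 85}  # natural labor

def cesarean_rate(group):
    """Proportion of births in the group that were cesarean."""
    return group["cesarean"] / (group["cesarean"] + group["vaginal"])

risk_ratio = cesarean_rate(induced) / cesarean_rate(spontaneous)
# A ratio of 2.0 means induction is associated with twice the likelihood of
# cesarean birth in this sample; it does not show that induction causes it.
print(cesarean_rate(induced), cesarean_rate(spontaneous), risk_ratio)
```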

Qualitative design methods

Qualitative methods delve deeply into experiences, social processes, and subcultures. Qualitative study generally falls under three types of designs: phenomenology, ethnography and grounded theory.

Phenomenology

In this approach, we want to understand and describe the lived experience or meaning of persons with a particular condition or situation. For example, phenomenological questions might ask “What is it like for an adolescent to have a younger sibling with a terminal illness?” or “What is the lived experience of caring for an older house-bound dependent parent?”

Ethnography

Ethnographic studies focus on the culture of a group of people. The assumption behind ethnographies is that groups of individuals evolve into a kind of ‘culture’ that guides the way members of that culture or group view the world. This kind of study relies on participant observation, in which the researcher becomes an active participant in the culture in order to understand its experiences. For example, nursing could be considered a professional culture, and a hospital unit can be viewed as a subculture. One example specific to nursing culture is a 2006 study by Deitrick and colleagues, who used ethnographic methods to examine problems related to answering patient call lights on one medical-surgical inpatient unit. The single nursing unit was the ‘culture’ under study.

Grounded theory

Grounded theory research begins with a general research problem, selects persons most likely to clarify the initial understanding of the question, and uses a variety of techniques (interviewing, observation, document review to name a few) to discover and develop a theory. For example, one nurse researcher used a grounded theory approach to explain how African American women from different socioeconomic backgrounds make decisions about mammography screening. Because African American women historically have fewer mammograms (and therefore lower survival rates for later stage detection), understanding their decision-making process may help the provider support more effective health promotion efforts. 

Being able to identify the differences between qualitative and quantitative research and becoming familiar with the subtypes of each can make a literature search a little less daunting.


This article originally appeared July 2, 2020. It was updated to reflect current practice on March 21, 2021.

Barbara Wilson

Mary Jean (Gigi) Austria, Tallie Casucci




Qualitative vs Quantitative Research Methods & Data Analysis

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


What is the difference between quantitative and qualitative?

The main difference between quantitative and qualitative research is the type of data they collect and analyze.

Quantitative research collects numerical data and analyzes it using statistical methods. The aim is to produce objective, empirical data that can be measured and expressed in numerical terms. Quantitative research is often used to test hypotheses, identify patterns, and make predictions.

Qualitative research , on the other hand, collects non-numerical data such as words, images, and sounds. The focus is on exploring subjective experiences, opinions, and attitudes, often through observation and interviews.

Qualitative research aims to produce rich and detailed descriptions of the phenomenon being studied, and to uncover new insights and meanings.

Quantitative data is information about quantities, and therefore numbers; qualitative data is descriptive and concerns phenomena that can be observed but not measured, such as language.

What Is Qualitative Research?

Qualitative research is the process of collecting, analyzing, and interpreting non-numerical data, such as language. Qualitative research can be used to understand how an individual subjectively perceives and gives meaning to their social reality.

Qualitative data is non-numerical data, such as text, video, photographs, or audio recordings. This type of data can be collected using diary accounts or in-depth interviews and analyzed using grounded theory or thematic analysis.

Qualitative research is multimethod in focus, involving an interpretive, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Denzin and Lincoln (1994, p. 2)

Interest in qualitative data came about as the result of the dissatisfaction of some psychologists (e.g., Carl Rogers) with the strictly scientific approach of other psychologists, such as the behaviorists (e.g., Skinner ).

Since psychologists study people, the traditional approach to science is not seen as an appropriate way of carrying out research since it fails to capture the totality of human experience and the essence of being human.  Exploring participants’ experiences is known as a phenomenological approach (re: Humanism ).

Qualitative research is primarily concerned with meaning, subjectivity, and lived experience. The goal is to understand the quality and texture of people’s experiences, how they make sense of them, and the implications for their lives.

Qualitative research aims to understand the social reality of individuals, groups, and cultures as nearly as possible as participants feel or live it. Thus, people and groups are studied in their natural setting.

Typical qualitative research questions ask what an experience feels like, how people talk about something, how they make sense of an experience, and how events unfold for people.

Research following a qualitative approach is exploratory and seeks to explain ‘how’ and ‘why’ a particular phenomenon, or behavior, operates as it does in a particular context. It can be used to generate hypotheses and theories from the data.

Qualitative Methods

There are different types of qualitative research methods, including diary accounts, in-depth interviews, documents, focus groups, case study research, and ethnography.

The results of qualitative methods provide a deep understanding of how people perceive their social realities and in consequence, how they act within the social world.

The researcher has several methods for collecting empirical materials, ranging from the interview to direct observation, to the analysis of artifacts, documents, and cultural records, to the use of visual materials or personal experience. Denzin and Lincoln (1994, p. 14)

Here are some examples of qualitative data:

Interview transcripts : Verbatim records of what participants said during an interview or focus group. They allow researchers to identify common themes and patterns, and draw conclusions based on the data. Interview transcripts can also be useful in providing direct quotes and examples to support research findings.

Observations : The researcher typically takes detailed notes on what they observe, including any contextual information, nonverbal cues, or other relevant details. The resulting observational data can be analyzed to gain insights into social phenomena, such as human behavior, social interactions, and cultural practices.

Unstructured interviews : generate qualitative data through the use of open questions.  This allows the respondent to talk in some depth, choosing their own words.  This helps the researcher develop a real sense of a person’s understanding of a situation.

Diaries or journals : Written accounts of personal experiences or reflections.

Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data. Visual data can be used to understand behaviors, environments, and social interactions.

Qualitative Data Analysis

Qualitative research is endlessly creative and interpretive. The researcher does not just leave the field with mountains of empirical data and then easily write up his or her findings.

Qualitative interpretations are constructed, and various techniques can be used to make sense of the data, such as content analysis, grounded theory (Glaser & Strauss, 1967), thematic analysis (Braun & Clarke, 2006), or discourse analysis.

For example, thematic analysis is a qualitative approach that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded.
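A small, mechanical slice of this process can be sketched in code: tallying researcher-assigned codes across interview excerpts. The excerpts and code names below are invented, and grouping codes into themes remains the researcher's interpretive judgment:

```python
from collections import Counter

# Invented coded excerpts: each interview excerpt carries one or more
# researcher-assigned codes (the code names here are hypothetical).
coded_excerpts = [
    {"text": "I never know who to ask for help.", "codes": ["isolation", "support"]},
    {"text": "The online sessions felt rushed.", "codes": ["time_pressure"]},
    {"text": "My family kept me going.", "codes": ["support"]},
    {"text": "I felt completely on my own.", "codes": ["isolation"]},
]

# Tallying codes is mechanical; deciding what the codes mean is interpretive work.
code_counts = Counter(code for e in coded_excerpts for code in e["codes"])
print(code_counts.most_common())  # [('isolation', 2), ('support', 2), ('time_pressure', 1)]
```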


Key Features

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the field, in natural surroundings. The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • The qualitative researcher is an integral part of the data; without the active participation of the researcher, no data exists.
  • The study’s design evolves during the research and can be adjusted or changed as it progresses. For the qualitative researcher, there is no single reality. It is subjective and exists only in reference to the observer.
  • The theory is data-driven and emerges as part of the research process, evolving from the data as they are collected.

Limitations of Qualitative Research

  • Because of the time and costs involved, qualitative designs do not generally draw samples from large-scale data sets.
  • The problem of adequate validity or reliability is a major criticism. Because of the subjective nature of qualitative data and its origin in single contexts, it is difficult to apply conventional standards of reliability and validity. For example, because of the central role played by the researcher in the generation of data, it is not possible to replicate qualitative studies.
  • Also, contexts, situations, events, conditions, and interactions cannot be replicated to any extent, nor can generalizations be made to a wider context than the one studied with confidence.
  • The time required for data collection, analysis, and interpretation is lengthy. Analysis of qualitative data is difficult, and expert knowledge of an area is necessary to interpret qualitative data. Great care must be taken in interpretation, for example, when looking for symptoms of mental illness.

Advantages of Qualitative Research

  • Because of close researcher involvement, the researcher gains an insider’s view of the field. This allows the researcher to find issues that are often missed (such as subtleties and complexities) by the scientific, more positivistic inquiries.
  • Qualitative descriptions can be important in suggesting possible relationships, causes, effects, and dynamic processes.
  • Qualitative analysis allows for ambiguities/contradictions in the data, which reflect social reality (Denscombe, 2010).
  • Qualitative research uses a descriptive, narrative style; this research might be of particular benefit to the practitioner as she or he could turn to qualitative reports to examine forms of knowledge that might otherwise be unavailable, thereby gaining new insight.

What Is Quantitative Research?

Quantitative research involves the process of objectively collecting and analyzing numerical data to describe, predict, or control variables of interest.

The goals of quantitative research are to test causal relationships between variables , make predictions, and generalize results to wider populations.

Quantitative researchers aim to establish general laws of behavior and phenomenon across different settings/contexts. Research is used to test a theory and ultimately support or reject it.

Quantitative Methods

Experiments typically yield quantitative data, as they are concerned with measuring things. However, other research methods, such as controlled observations and questionnaires, can produce both quantitative and qualitative information.

For example, a rating scale or closed questions on a questionnaire would generate quantitative data as these produce either numerical data or data that can be put into categories (e.g., “yes,” “no” answers).

Experimental methods limit the possible ways in which research participants can react to and express appropriate social behavior.

Findings are, therefore, likely to be context-bound and simply a reflection of the assumptions that the researcher brings to the investigation.

There are numerous examples of quantitative data in psychological research, including mental health research. Here are a few examples:

One example is the Experience in Close Relationships Scale (ECR), a self-report questionnaire widely used to assess adult attachment styles. The ECR provides quantitative data that can be used to assess attachment styles and predict relationship outcomes.

Neuroimaging data : Neuroimaging techniques, such as MRI and fMRI, provide quantitative data on brain structure and function. This data can be analyzed to identify brain regions involved in specific mental processes or disorders.

Another example is the Beck Depression Inventory (BDI), a self-report questionnaire widely used to assess the severity of depressive symptoms in individuals. The BDI consists of 21 questions, each scored on a scale of 0 to 3, with higher scores indicating more severe depressive symptoms.
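Scoring a fixed-item scale like the BDI is straightforward arithmetic. A sketch, assuming only what is stated above (21 items, each scored 0 to 3); the response set is invented:

```python
def bdi_total(item_scores):
    """Sum the 21 item scores (each 0-3) into a total severity score (0-63)."""
    if len(item_scores) != 21:
        raise ValueError("the BDI has 21 items")
    if any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("each item is scored 0 to 3")
    return sum(item_scores)

# Hypothetical response set: eighteen items scored 1, three scored 2.
print(bdi_total([1] * 18 + [2] * 3))  # 24
```

Interpreting the total against clinical severity ranges is a separate step that depends on the published scoring guidance.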

Quantitative Data Analysis

Statistics help us turn quantitative data into useful information to help with decision-making. We can use statistics to summarize our data, describing patterns, relationships, and connections. Statistics can be descriptive or inferential.

Descriptive statistics help us to summarize our data. In contrast, inferential statistics are used to identify statistically significant differences between groups of data (such as intervention and control groups in a randomized control study).
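Both kinds of statistics can be sketched with Python's standard library. The group data here are invented; the code computes descriptive summaries for each group, then a pooled two-sample t statistic as a simple inferential comparison:

```python
from math import sqrt
from statistics import mean, stdev

# Invented outcome scores for two groups of five subjects each.
intervention = [8, 9, 10, 11, 12]
control = [5, 6, 7, 8, 9]

# Descriptive statistics summarize each group on its own.
print(mean(intervention), round(stdev(intervention), 2))  # 10 1.58
print(mean(control), round(stdev(control), 2))            # 7 1.58

def pooled_t(a, b):
    """Pooled two-sample t statistic; compare against a t-table with
    len(a) + len(b) - 2 degrees of freedom to judge significance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

print(pooled_t(intervention, control))  # ~3.0
```

In practice a statistics package would also report the p-value; the point here is only the division of labor between describing groups and comparing them.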

  • Quantitative researchers try to control extraneous variables by conducting their studies in the lab.
  • The research aims for objectivity (i.e., without bias) and is separated from the data.
  • The design of the study is determined before it begins.
  • For the quantitative researcher, the reality is objective, exists separately from the researcher, and can be seen by anyone.
  • Research is used to test a theory and ultimately support or reject it.

Limitations of Quantitative Research

  • Context: Quantitative experiments do not take place in natural settings. In addition, they do not allow participants to explain their choices or the meaning of the questions they may have for those participants (Carr, 1994).
  • Researcher expertise: Poor knowledge of the application of statistical analysis may negatively affect analysis and subsequent interpretation (Black, 1999).
  • Variability of data quantity: Large sample sizes are needed for more accurate analysis. Small-scale quantitative studies may be less reliable because of the low quantity of data (Denscombe, 2010). This also affects the ability to generalize study findings to wider populations.
  • Confirmation bias: The researcher might miss observing phenomena because of a focus on testing an existing theory or hypothesis rather than on generating new hypotheses.

Advantages of Quantitative Research

  • Scientific objectivity: Quantitative data can be interpreted with statistical analysis, and since statistics are based on the principles of mathematics, the quantitative approach is viewed as scientifically objective and rational (Carr, 1994; Denscombe, 2010).
  • Useful for testing and validating already constructed theories.
  • Rapid analysis: Sophisticated software removes much of the need for prolonged data analysis, especially with large volumes of data involved (Antonius, 2003).
  • Replication: Quantitative data is based on measured values and can be checked by others because numerical data is less open to ambiguities of interpretation.
  • Hypotheses can also be tested because of statistical analysis (Antonius, 2003).

Antonius, R. (2003). Interpreting quantitative data with SPSS . Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics . Sage.

Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology . Qualitative Research in Psychology , 3, 77–101.

Carr, L. T. (1994). The strengths and weaknesses of quantitative and qualitative research : what method for nursing? Journal of advanced nursing, 20(4) , 716-721.

Denscombe, M. (2010). The Good Research Guide: for small-scale social research. McGraw Hill.

Denzin, N., & Lincoln. Y. (1994). Handbook of Qualitative Research. Thousand Oaks, CA, US: Sage Publications Inc.

Glaser, B. G., Strauss, A. L., & Strutzel, E. (1968). The discovery of grounded theory; strategies for qualitative research. Nursing research, 17(4) , 364.

Minichiello, V. (1990). In-Depth Interviewing: Researching People. Longman Cheshire.

Punch, K. (1998). Introduction to Social Research: Quantitative and Qualitative Approaches. London: Sage

Further Information

  • Mixed methods research
  • Designing qualitative research
  • Methods of data collection and analysis
  • Introduction to quantitative and qualitative research
  • Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
  • Qualitative research in health care: Analysing qualitative data
  • Qualitative data analysis: the framework approach
  • Using the framework method for the analysis of
  • Qualitative data in multi-disciplinary health research
  • Content Analysis
  • Grounded Theory
  • Thematic Analysis


Library Research Guides - University of Wisconsin Ebling Library


Nursing Resources : Types of Research within Qualitative and Quantitative

Aspects of Quantitative (Empirical) Research

♦   Statement of purpose—what was studied and why.

♦   Description of the methodology (experimental group, control group, variables, test conditions, test subjects, etc.).

♦   Results (usually numeric in form, presented in tables or graphs, often with statistical analysis).

♦   Conclusions drawn from the results.

♦   Footnotes, a bibliography, author credentials.

Hint: the abstract (summary) of an article is the first place to check for most of the above features.  The abstract appears both in the database you search and at the top of the actual article.

Types of Quantitative Research

There are four (4) main types of quantitative designs: descriptive, correlational, quasi-experimental, and experimental.

samples.jbpub.com/9780763780586/80586_CH03_Keele.pdf

Types of Qualitative Research

 

Case study : Attempts to shed light on a phenomenon by studying in depth a single case example of the phenomenon. The case can be an individual person, an event, a group, or an institution.

Grounded theory : Seeks to understand the social and psychological processes that characterize an event or situation.

Phenomenology : Describes the structures of experience as they present themselves to consciousness, without recourse to theory, deduction, or assumptions from other disciplines.

Ethnography : Focuses on the sociology of meaning through close field observation of sociocultural phenomena. Typically, the ethnographer focuses on a community.

Historical research : Systematic collection and objective evaluation of data related to past occurrences in order to test hypotheses concerning causes, effects, or trends of these events that may help to explain present events and anticipate future events. (Gay, 1996)

http://wilderdom.com/OEcourses/PROFLIT/Class6Qualitative1.htm

  • Last Updated: Mar 19, 2024 10:39 AM
  • URL: https://researchguides.library.wisc.edu/nursing

Logo for VCU Pressbooks


Part 3: Using quantitative methods

13. Experimental design

Chapter outline.

  • What is an experiment and when should you use one? (8 minute read)
  • True experimental designs (7 minute read)
  • Quasi-experimental designs (8 minute read)
  • Non-experimental designs (5 minute read)
  • Critical and ethical considerations (5 minute read)

Content warning: examples in this chapter contain references to non-consensual research in Western history, including experiments conducted during the Holocaust and on African Americans (section 13.6).

13.1 What is an experiment and when should you use one?

Learning objectives.

Learners will be able to…

  • Identify the characteristics of a basic experiment
  • Describe causality in experimental design
  • Discuss the relationship between dependent and independent variables in experiments
  • Explain the links between experiments and generalizability of results
  • Describe advantages and disadvantages of experimental designs

The basics of experiments

The first experiment I can remember using was for my fourth grade science fair. I wondered if latex- or oil-based paint would hold up to sunlight better. So, I went to the hardware store and got a few small cans of paint and two sets of wooden paint sticks. I painted one with oil-based paint and the other with latex-based paint of different colors and put them in a sunny spot in the back yard. My hypothesis was that the oil-based paint would fade the most and that more fading would happen the longer I left the paint sticks out. (I know, it’s obvious, but I was only 10.)

I checked in on the paint sticks every few days for a month and wrote down my observations. The first part of my hypothesis ended up being wrong—it was actually the latex-based paint that faded the most. But the second part was right, and the paint faded more and more over time. This is a simple example, of course—experiments get a heck of a lot more complex than this when we’re talking about real research.

Merriam-Webster defines an experiment as “an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.” Each of these three components of the definition will come in handy as we go through the different types of experimental design in this chapter. Most of us probably think of the physical sciences when we think of experiments, and for good reason—these experiments can be pretty flashy! But social science and psychological research follow the same scientific methods, as we’ve discussed in this book.

Experiments can be used in the social sciences just as they can in the physical sciences. It makes sense to use an experiment when you want to determine the cause of a phenomenon with as much accuracy as possible. Some types of experimental designs do this more precisely than others, as we’ll see throughout the chapter. If you’ll remember back to Chapter 11 and the discussion of validity, experiments are the best way to ensure internal validity, or the extent to which a change in your independent variable causes a change in your dependent variable.

Experimental designs for research projects are most appropriate when trying to uncover or test a hypothesis about the cause of a phenomenon, so they are best for explanatory research questions. As we’ll learn throughout this chapter, different circumstances are appropriate for different types of experimental designs. Each type of experimental design has advantages and disadvantages, and some are better at controlling the effect of extraneous variables —those variables and characteristics that have an effect on your dependent variable, but aren’t the primary variable whose influence you’re interested in testing. For example, in a study that tries to determine whether aspirin lowers a person’s risk of a fatal heart attack, a person’s race would likely be an extraneous variable because you primarily want to know the effect of aspirin.

In practice, many types of experimental designs can be logistically challenging and resource-intensive. As practitioners, the likelihood that we will be involved in some of the types of experimental designs discussed in this chapter is fairly low. However, it’s important to learn about these methods, even if we might not ever use them, so that we can be thoughtful consumers of research that uses experimental designs.

While we might not use all of these types of experimental designs, many of us will engage in evidence-based practice during our time as social workers. A lot of research developing evidence-based practice, which has a strong emphasis on generalizability, will use experimental designs. You’ve undoubtedly seen one or two in your literature search so far.

The logic of experimental design

How do we know that one phenomenon causes another? The complexity of the social world in which we practice and conduct research means that causes of social problems are rarely cut and dry. Uncovering explanations for social problems is key to helping clients address them, and experimental research designs are one road to finding answers.

As you read about in Chapter 8 (and as we’ll discuss again in Chapter 15 ), just because two phenomena are related in some way doesn’t mean that one causes the other. Ice cream sales increase in the summer, and so does the rate of violent crime; does that mean that eating ice cream is going to make me murder someone? Obviously not, because ice cream is great. The reality of that relationship is far more complex—it could be that hot weather makes people more irritable and, at times, violent, while also making people want ice cream. More likely, though, there are other social factors not accounted for in the way we just described this relationship.

Experimental designs can help clear up at least some of this fog by allowing researchers to isolate the effect of interventions on dependent variables by controlling extraneous variables. In true experimental design (discussed in the next section) and some quasi-experimental designs, researchers accomplish this with a control group and an experimental group. (The experimental group is sometimes called the “treatment group,” but we will call it the experimental group in this chapter.) The control group does not receive the intervention you are testing (they may receive no intervention or what is known as “treatment as usual”), while the experimental group does. (You will hopefully remember our earlier discussion of control variables in Chapter 8—conceptually, the use of the word “control” here is the same.)


In a well-designed experiment, your control group should look almost identical to your experimental group in terms of demographics and other relevant factors. What if we want to know the effect of CBT on social anxiety, but we have learned in prior research that men tend to have a more difficult time overcoming social anxiety? We would want our control and experimental groups to have a similar gender mix because it would limit the effect of gender on our results, since ostensibly, both groups’ results would be affected by gender in the same way. If your control group has 5 women, 6 men, and 4 non-binary people, then your experimental group should be made up of roughly the same gender balance to help control for the influence of gender on the outcome of your intervention. (In reality, the groups should be similar along other dimensions, as well, and your group will likely be much larger.) The researcher will use the same outcome measures for both groups and compare them, and assuming the experiment was designed correctly, get a pretty good answer about whether the intervention had an effect on social anxiety.

You will also hear people talk about comparison groups, which are similar to control groups. The primary difference between the two is that a control group is populated using random assignment, but a comparison group is not. Random assignment entails using a random process to decide which participants are put into the control or experimental group (which participants receive an intervention and which do not). By randomly assigning participants to a group, you can reduce the effect of extraneous variables on your research because there won’t be a systematic difference between the groups.
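Random assignment is simple enough to sketch in code. The snippet below is a minimal illustration, not from the text: the pool of 500 participant IDs and the fixed seed are invented. Shuffling before splitting means any extraneous characteristic (age, gender, symptom severity) is spread across the two groups by chance alone.

```python
import random

def randomly_assign(participants, seed=42):
    """Split participants into control and experimental groups at random.

    Shuffling the whole pool before splitting distributes extraneous
    characteristics across groups by chance rather than by choice.
    """
    pool = list(participants)
    rng = random.Random(seed)  # fixed seed only so this illustration is reproducible
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (control, experimental)

# Hypothetical pool of 500 participant IDs.
control, experimental = randomly_assign(range(1, 501))
print(len(control), len(experimental))  # → 250 250
```

Contrast this with random sampling, which concerns how the 500 participants were drawn from the wider population in the first place; assignment only decides which condition each sampled person lands in.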

Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other related fields. Random sampling also helps a great deal with generalizability, whereas random assignment increases internal validity.

We have already learned about internal validity in Chapter 11. The use of an experimental design will bolster internal validity since it works to isolate causal relationships. As we will see in the coming sections, some types of experimental design do this more effectively than others. It’s also worth considering that true experiments, which most effectively show causality, are often difficult and expensive to implement. Although other experimental designs aren’t perfect, they still produce useful, valid evidence and may be more feasible to carry out.

Key Takeaways

  • Experimental designs are useful for establishing causality, but some types of experimental design do this better than others.
  • Experiments help researchers isolate the effect of the independent variable on the dependent variable by controlling for the effect of extraneous variables.
  • Experiments use a control/comparison group and an experimental group to test the effects of interventions. These groups should be as similar to each other as possible in terms of demographics and other relevant factors.
  • True experiments have control groups with randomly assigned participants, while other types of experiments have comparison groups to which participants are not randomly assigned.
Exercises

  • Think about the research project you’ve been designing so far. How might you use a basic experiment to answer your question? If your question isn’t explanatory, try to formulate a new explanatory question and consider the usefulness of an experiment.
  • Why is establishing a simple relationship between two variables not indicative of one causing the other?

13.3 True experimental design

Learning objectives.

Learners will be able to…

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

True experimental design, often considered the “gold standard” in research, is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity and its ability to establish causality through treatment manipulation, while controlling for the effects of extraneous variables. Sometimes the treatment level is no treatment, while other times it is simply a different treatment than the one we are trying to evaluate. For example, we might have a control group made up of people who will not receive any treatment for a particular condition. Or, a control group could consist of people who consent to treatment with DBT when we are testing the effectiveness of CBT.

As we discussed in the previous section, a true experiment has a control group with participants randomly assigned, and an experimental group. This is the most basic element of a true experiment. The next decision a researcher must make is when they need to gather data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle measurement another way? Below, we’ll discuss the three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often pretty difficult, since as I mentioned earlier, true experiments can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, is hard (and unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The folks in the experimental group will receive CBT, while the folks in the control group will receive more unstructured, basic talk therapy. These designs, as we talked about above, are best suited for explanatory research questions.

Before we get started, take a look at the table below. When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the experiment. Table 13.1 starts us off by laying out what each of the abbreviations mean.

Table 13.1 Experimental research design notations

R               Randomly assigned group (control/comparison or experimental)
O               Observation/measurement taken of dependent variable
X               Intervention or treatment
Xe              Experimental or new intervention
Xi              Typical intervention/treatment as usual
A, B, C, etc.   Denotes different groups (control/comparison and experimental)
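The notation in Table 13.1 can be strung together to reproduce the design diagrams in text form. The small helper below is illustrative only, not from the chapter; it simply composes one row of a diagram per group, with observations and interventions listed in time order.

```python
def diagram(group, *steps):
    """One row of an experimental-design diagram, e.g. 'RA  O1  Xe  O2'.

    group: label for a randomly assigned group (RA, RB, ...)
    steps: observations (O1, O2, ...) and interventions (Xe, Xi) in time order.
    """
    return "  ".join([group, *steps])

# Pretest/post-test control group design: both groups are measured before
# and after, but only group A receives the experimental intervention.
rows = [
    diagram("RA", "O1", "Xe", "O2"),  # experimental group
    diagram("RB", "O1", "O2"),        # control group
]
print("\n".join(rows))
```

Reading left to right across a row gives the time order for that group, which is exactly what the diagrams in this chapter are meant to convey.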

Pretest and post-test control group design

In pretest and post-test control group design, participants are given a pretest of some kind to measure their baseline state before their participation in an intervention. In our social anxiety experiment, we would have participants in both the experimental and control groups complete some measure of social anxiety—most likely an established scale and/or a structured interview—before they start their treatment. As part of the experiment, we would have a defined time period during which the treatment would take place (let’s say 12 weeks, just for illustration). At the end of 12 weeks, we would give both groups the same measure as a post-test.

[Figure: diagram of the pretest and post-test control group design]

In the diagram, RA (random assignment group A) is the experimental group and RB is the control group. O1 denotes the pretest, Xe denotes the experimental intervention, and O2 denotes the post-test. Let’s look at this diagram another way, using the example of CBT for social anxiety that we’ve been talking about.

[Figure: the pretest and post-test design using the CBT for social anxiety example]

In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way, with Xi denoting treatment as usual (Figure 13.3).

[Figure 13.3: pretest and post-test design with treatment as usual (Xi) for the control group]

Hopefully, these diagrams provide you a visualization of how this type of experiment establishes time order, a key component of a causal relationship. Did the change occur after the intervention? Assuming there is a change in the scores between the pretest and post-test, we would be able to say that yes, the change did occur after the intervention. Causality can’t exist if the change happened before the intervention—this would mean that something else led to the change, not our intervention.
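To make the comparison concrete, here is a hedged sketch of how the O1 and O2 measurements might be analyzed. All scores are invented, and a real study would use an established scale and a significance test rather than a raw difference of means; the point is only that each group's average pretest-to-post-test change is computed, and the gap between those changes estimates the intervention's effect.

```python
# Hypothetical social-anxiety scores (higher = more anxious); all values invented.
# Each pair is (pretest O1, post-test O2) for one participant.
experimental = [(30, 18), (28, 17), (35, 22), (31, 20)]  # received CBT
control      = [(29, 27), (33, 30), (30, 29), (32, 31)]  # no intervention

def mean_change(pairs):
    """Average post-test minus pretest score for one group."""
    return sum(post - pre for pre, post in pairs) / len(pairs)

# A negative value means anxiety dropped more in the experimental group.
effect = mean_change(experimental) - mean_change(control)
print(round(effect, 2))  # → -10.0
```

Because both groups were measured at the same two points in time, any change that both groups share (like the mild drop in the control group) is subtracted out, which is exactly what the control group is for.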

Post-test only control group design

Post-test only control group design involves only giving participants a post-test, just like it sounds (Figure 13.4).

[Figure 13.4: post-test only control group design]

But why would you use this design instead of a pretest/post-test design? One reason could be the testing effect that can happen when research participants take a pretest. In research, the testing effect refers to “measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself” (Engel & Schutt, 2017, p. 444). [1] (When we say “measurement error,” all we mean is the accuracy of the way we measure the dependent variable.) Figure 13.4 is a visualization of this type of experiment. The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome.

Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the post-test, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is.

However, without a baseline measurement, establishing causality is more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. You must balance this consideration with the benefits of this type of design.

Solomon four group design

One way we can possibly measure how much the testing effect might change the results of the experiment is with the Solomon four group design. Basically, as part of this experiment, you have two control groups and two experimental groups. The first pair of groups receives both a pretest and a post-test. The other pair of groups receives only a post-test (Figure 13.5). This design helps address the problem of establishing time order in post-test only control group designs.

[Figure 13.5: Solomon four group design]

For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our post-test measures, and groups C and D would take only our post-test measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.
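Here is a minimal sketch of the Solomon four group comparison, using invented post-test scores. Groups A and B are the pretested pair and C and D the unpretested pair, so the contrast between the pairs estimates the testing effect, while the contrast between treated groups (A, C) and untreated groups (B, D) estimates the treatment effect.

```python
# Hypothetical post-test anxiety scores for the four groups; all values invented.
post = {
    "A": [18, 17, 20, 19],  # pretest + CBT
    "B": [27, 28, 29, 26],  # pretest + control condition
    "C": [22, 21, 23, 22],  # no pretest + CBT
    "D": [30, 29, 31, 30],  # no pretest + control condition
}

def mean(xs):
    return sum(xs) / len(xs)

# If pretesting itself changes scores, the pretested pair (A, B) will differ
# systematically from the unpretested pair (C, D) on the post-test.
testing_effect = mean(post["A"] + post["B"]) - mean(post["C"] + post["D"])

# The treatment contrast pools the CBT groups against the control groups.
treatment_effect = mean(post["A"] + post["C"]) - mean(post["B"] + post["D"])

print(round(testing_effect, 2), round(treatment_effect, 2))  # → -3.0 -8.5
```

In this made-up example both contrasts are nonzero: the intervention appears to lower anxiety, and pretested participants also score somewhat lower, which is the kind of testing effect this design is built to expose.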

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.

Key Takeaways

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest/post-test research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Post-test only research design involves only one point of measurement—post-intervention. It is a useful design to minimize the effect of testing effects on our results.
  • Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a post-test, while the other receives only a post-test. This can help uncover the influence of testing effects.
Exercises

  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a student researcher?
  • What hypothesis(es) would you test using this true experiment?

13.4 Quasi-experimental designs

Learning objectives.

Learners will be able to…

  • Describe a quasi-experimental design in social work research
  • Understand the different types of quasi-experimental designs
  • Determine what kinds of research questions quasi-experimental designs are suited for
  • Discuss advantages and disadvantages of quasi-experimental designs

Quasi-experimental designs are a lot more common in social work research than true experimental designs. Although quasi-experiments don’t do as good a job of giving us robust proof of causality, they still allow us to establish time order, which is a key element of causality. The prefix quasi means “resembling,” so quasi-experimental research is research that resembles experimental research, but is not true experimental research. Nonetheless, given proper research design, quasi-experiments can still provide extremely rigorous and useful results.

There are a few key differences between true experimental and quasi-experimental research. The primary difference between quasi-experimental research and true experimental research is that quasi-experimental research does not involve random assignment to control and experimental groups. Instead, we talk about comparison groups in quasi-experimental research instead. As a result, these types of experiments don’t control the effect of extraneous variables as well as a true experiment.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. We’re able to eliminate some threats to internal validity, but we can’t do this as effectively as we can with a true experiment. Realistically, our CBT-social anxiety project is likely to be a quasi-experiment, based on the resources and participant pool we’re likely to have available.

It’s important to note that not all quasi-experimental designs have a comparison group.  There are many different kinds of quasi-experiments, but we will discuss the three main types below: nonequivalent comparison group designs, time series designs, and ex post facto comparison group designs.

Nonequivalent comparison group design

You will notice that this type of design looks extremely similar to the pretest/post-test design that we discussed in section 13.3. But instead of random assignment to control and experimental groups, researchers use other methods to construct their comparison and experimental groups. A diagram of this design will also look very similar to pretest/post-test design, but you’ll notice we’ve removed the “R” from our groups, since they are not randomly assigned (Figure 13.6).

[Figure 13.6: nonequivalent comparison group design]

Researchers using this design select a comparison group that’s as close as possible based on relevant factors to their experimental group. Engel and Schutt (2017) [2] identify two different selection methods:

  • Individual matching : Researchers take the time to match individual cases in the experimental group to similar cases in the comparison group. It can be difficult, however, to match participants on all the variables you want to control for.
  • Aggregate matching : Instead of trying to match individual participants to each other, researchers try to match the population profile of the comparison and experimental groups. For example, researchers would try to match the groups on average age, gender balance, or median income. This is a less resource-intensive matching method, but researchers have to ensure that participants aren’t choosing which group (comparison or experimental) they are a part of.
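Aggregate matching amounts to a balance check on group-level profiles. The sketch below compares mean age and gender composition for two hypothetical groups; the client records and the two-year age tolerance are invented for illustration, and a real check would cover more variables.

```python
# Hypothetical client records; a real balance check would cover more variables.
experimental = [{"age": 25, "gender": "woman"}, {"age": 40, "gender": "man"},
                {"age": 33, "gender": "nonbinary"}, {"age": 29, "gender": "woman"}]
comparison   = [{"age": 27, "gender": "woman"}, {"age": 38, "gender": "man"},
                {"age": 35, "gender": "nonbinary"}, {"age": 30, "gender": "woman"}]

def profile(group):
    """Aggregate profile for a group: mean age and gender counts."""
    ages = [p["age"] for p in group]
    genders = {}
    for p in group:
        genders[p["gender"]] = genders.get(p["gender"], 0) + 1
    return sum(ages) / len(ages), genders

exp_age, exp_genders = profile(experimental)
comp_age, comp_genders = profile(comparison)

# Groups are "matched" here if mean ages fall within an (arbitrary) two-year
# tolerance and the gender composition is identical.
print(abs(exp_age - comp_age) <= 2.0, exp_genders == comp_genders)  # → True True
```

Individual matching would instead pair each experimental participant with the single most similar comparison case, which is more precise but much more labor-intensive.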

As we’ve already talked about, this kind of design provides weaker evidence that the intervention itself leads to a change in outcome. Nonetheless, we are still able to establish time order using this method, and can thereby show an association between the intervention and the outcome. Like true experimental designs, this type of quasi-experimental design is useful for explanatory research questions.

What might this look like in a practice setting? Let’s say you’re working at an agency that provides CBT and other types of interventions, and you have identified a group of clients who are seeking help for social anxiety, as in our earlier example. Once you’ve obtained consent from your clients, you can create a comparison group using one of the matching methods we just discussed. If the group is small, you might match using individual matching, but if it’s larger, you’ll probably sort people by demographics to try to get similar population profiles. (You can do aggregate matching more easily when your agency has some kind of electronic records or database, but it’s still possible to do manually.)

Time series design

Another type of quasi-experimental design is a time series design. Unlike other types of experimental design, time series designs do not have a comparison group. A time series is a set of measurements taken at intervals over a period of time (Figure 13.7). Proper time series design should include at least three pre- and post-intervention measurement points. While there are a few types of time series designs, we’re going to focus on the most common: interrupted time series design.

[Figure 13.7: interrupted time series design]

But why use this method? Here’s an example. Let’s think about elementary student behavior throughout the school year. As anyone with children or who is a teacher knows, kids get very excited and animated around holidays, days off, or even just on a Friday afternoon. This fact might mean that around those times of year, there are more reports of disruptive behavior in classrooms. What if we took our one and only measurement in mid-December? It’s possible we’d see a higher-than-average rate of disruptive behavior reports, which could bias our results if our next measurement is around a time of year students are in a different, less excitable frame of mind. When we take multiple measurements throughout the first half of the school year, we can establish a more accurate baseline for the rate of these reports by looking at the trend over time.

We may want to test the effect of extended recess times in elementary school on reports of disruptive behavior in classrooms. When students come back after the winter break, the school extends recess by 10 minutes each day (the intervention), and the researchers start tracking the monthly reports of disruptive behavior again. These reports could be subject to the same fluctuations as the pre-intervention reports, and so we once again take multiple measurements over time to try to control for those fluctuations.

This method improves the extent to which we can establish causality because we are accounting for a major extraneous variable in the equation—the passage of time. On its own, it does not allow us to account for other extraneous variables, but it does establish time order and association between the intervention and the trend in reports of disruptive behavior. Finding a stable condition before the treatment that changes after the treatment is evidence for causality between treatment and outcome.
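The interrupted time series logic can be sketched with invented monthly counts of disruptive-behavior reports. Averaging several measurements on each side of the intervention smooths out the seasonal spikes that a single pre/post pair would mistake for an effect.

```python
# Hypothetical monthly counts of disruptive-behavior reports; values invented.
pre_intervention  = [14, 15, 13, 16, 14, 15]   # six months before longer recess
post_intervention = [11, 10, 12, 9, 10, 11]    # six months after longer recess

def mean(xs):
    return sum(xs) / len(xs)

# Multiple measurements on each side of the intervention smooth out the
# holiday and Friday-afternoon spikes a single measurement could catch.
baseline = mean(pre_intervention)
follow_up = mean(post_intervention)
print(round(baseline - follow_up, 2))  # → 4.0 fewer reports per month on average
```

A fuller analysis would also compare the trend (slope) before and after the interruption, not just the level, but the idea is the same: a stable pattern that shifts only after the intervention is evidence for causality.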

Ex post facto comparison group design

Ex post facto (Latin for “after the fact”) designs are extremely similar to nonequivalent comparison group designs. There are still comparison and experimental groups, pretest and post-test measurements, and an intervention. But in ex post facto designs, participants are assigned to the comparison and experimental groups once the intervention has already happened. This type of design often occurs when interventions are already up and running at an agency and the agency wants to assess effectiveness based on people who have already completed treatment.

In most clinical agency environments, social workers conduct both initial and exit assessments, so there are usually some kind of pretest and post-test measures available. We also typically collect demographic information about our clients, which could allow us to try to use some kind of matching to construct comparison and experimental groups.

In terms of internal validity and establishing causality, ex post facto designs are a bit of a mixed bag. The ability to establish causality depends partially on the ability to construct comparison and experimental groups that are demographically similar so we can control for these extraneous variables .

Quasi-experimental designs are common in social work intervention research because, when designed correctly, they balance the intense resource needs of true experiments with the realities of research in practice. They still offer researchers tools to gather robust evidence about whether interventions are having positive effects for clients.

Key Takeaways

  • Quasi-experimental designs are similar to true experiments, but do not require random assignment to experimental and control groups.
  • In quasi-experimental projects, the group not receiving the treatment is called the comparison group, not the control group.
  • Nonequivalent comparison group design is nearly identical to pretest/post-test experimental design, but participants are not randomly assigned to the experimental and control groups. As a result, this design provides slightly less robust evidence for causality.
  • Nonequivalent groups can be constructed by individual matching or aggregate matching .
  • Time series design does not have a control or experimental group, and instead compares the condition of participants before and after the intervention by measuring relevant factors at multiple points in time. This allows researchers to mitigate the error introduced by the passage of time.
  • Ex post facto comparison group designs are also similar to true experiments, but experimental and comparison groups are constructed after the intervention is over. This makes it more difficult to control for the effect of extraneous variables, but still provides useful evidence for causality because it maintains the time order of the experiment.
Exercises

  • Think back to the experiment you considered for your research project in Section 13.3. Now that you know more about quasi-experimental designs, do you still think it’s a true experiment? Why or why not?
  • What should you consider when deciding whether an experimental or quasi-experimental design would be more feasible or fit your research question better?

13.5 Non-experimental designs

Learning objectives.

Learners will be able to…

  • Describe non-experimental designs in social work research
  • Discuss how non-experimental research differs from true and quasi-experimental research
  • Demonstrate an understanding of the different types of non-experimental designs
  • Determine what kinds of research questions non-experimental designs are suited for
  • Discuss advantages and disadvantages of non-experimental designs

The previous sections have laid out the basics of some rigorous approaches to establish that an intervention is responsible for changes we observe in research participants. This type of evidence is extremely important to build an evidence base for social work interventions, but it’s not the only type of evidence to consider. We will discuss qualitative methods, which provide us with rich, contextual information, in Part 4 of this text. The designs we’ll talk about in this section are sometimes used in qualitative research, but in keeping with our discussion of experimental design so far, we’re going to stay in the quantitative research realm for now. Non-experimental research is also often a stepping stone toward more rigorous experimental designs in the future, as it can help test the feasibility of your research.

In general, non-experimental designs do not strongly support causality and don’t address threats to internal validity. However, that’s not really what they’re intended for. Non-experimental designs are useful for a few different types of research, including explanatory questions in program evaluation. Certain types of non-experimental design are also helpful for researchers when they are trying to develop a new assessment or scale. Other times, researchers or agency staff did not get a chance to gather any assessment information before an intervention began, so a pretest/post-test design is not possible.


A significant benefit of these types of designs is that they’re pretty easy to execute in a practice or agency setting. They don’t require a comparison or control group, and as Engel and Schutt (2017) [3] point out, they “flow from a typical practice model of assessment, intervention, and evaluating the impact of the intervention” (p. 177). Thus, these designs are fairly intuitive for social workers, even when they aren’t expert researchers. Below, we will go into some detail about the different types of non-experimental design.

One group pretest/post-test design

Also known as a before-after one-group design, this type of research design does not have a comparison group and everyone who participates in the research receives the intervention (Figure 13.8). This is a common type of design in program evaluation in the practice world. Controlling for extraneous variables is difficult or impossible in this design, but given that it is still possible to establish some measure of time order, it does provide weak support for causality.


Imagine, for example, a researcher who is interested in the effectiveness of an anti-drug education program on elementary school students' attitudes toward illegal drugs. The researcher could assess students' attitudes about illegal drugs (O1), implement the anti-drug program (X), and then immediately after the program ends, the researcher could once again measure students' attitudes toward illegal drugs (O2). You can see how this would be relatively simple to do in practice, and have probably been involved in this type of research design yourself, even if informally. But hopefully, you can also see that this design would not provide us with much evidence for causality because we have no way of controlling for the effect of extraneous variables. A lot of things could have affected any change in students' attitudes—maybe girls already had different attitudes about illegal drugs than children of other genders, and when we look at the class's results as a whole, we couldn't account for that influence using this design.
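
To make the O1 → X → O2 logic concrete, here is a minimal sketch of how a researcher might summarize change in this one-group design. The attitude scores, scale, and sample size below are all hypothetical:

```python
import math
from statistics import mean, stdev

# Hypothetical attitude scores (higher = stronger anti-drug attitudes)
# for the same ten students before (O1) and after (O2) the program (X).
pretest = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
posttest = [4, 5, 3, 5, 4, 5, 3, 3, 5, 4]

# One-group pretest/post-test analysis looks at within-person change.
diffs = [post - pre for pre, post in zip(pretest, posttest)]
mean_change = mean(diffs)

# Paired t statistic: mean difference / standard error of the differences.
t_stat = mean_change / (stdev(diffs) / math.sqrt(len(diffs)))

print(f"mean change: {mean_change:.2f}")       # mean change: 0.80
print(f"paired t statistic: {t_stat:.2f}")     # paired t statistic: 6.00
```

A paired t statistic this large would suggest attitudes shifted after the program, but as noted above, without a comparison group we cannot attribute that shift to the program rather than to extraneous influences.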

All of that doesn’t mean these results aren’t useful, however. If we find that children’s attitudes didn’t change at all after the drug education program, then we need to think seriously about how to make it more effective or whether we should be using it at all. (This immediate, practical application of our results highlights a key difference between program evaluation and research, which we will discuss in Chapter 23 .)

After-only design

As the name suggests, this type of non-experimental design involves measurement only after an intervention. There is no comparison or control group, and everyone receives the intervention. I have seen this design repeatedly in my time as a program evaluation consultant for nonprofit organizations, because often these organizations realize too late that they would like to or need to have some sort of measure of what effect their programs are having.

Because there is no pretest and no comparison group, this design is not useful for supporting causality since we can’t establish the time order and we can’t control for extraneous variables. However, that doesn’t mean it’s not useful at all! Sometimes, agencies need to gather information about how their programs are functioning. A classic example of this design is satisfaction surveys—realistically, these can only be administered after a program or intervention. Questions regarding satisfaction, ease of use or engagement, or other questions that don’t involve comparisons are best suited for this type of design.

Static-group design

A final type of non-experimental research is the static-group design. In this type of research, there are both comparison and experimental groups, which are not randomly assigned. There is no pretest, only a post-test, and the comparison group has to be constructed by the researcher. Sometimes, researchers will use matching techniques to construct the groups, but often, the groups are constructed by convenience of who is being served at the agency.
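
As a rough sketch of how a static-group comparison might be summarized, the snippet below compares post-test means for two non-randomly formed groups. All scores and group sizes are invented for illustration:

```python
import math
from statistics import mean, stdev

# Hypothetical post-test scores in a static-group design: no pretest,
# and groups formed by convenience rather than random assignment.
experimental = [78, 85, 82, 90, 74, 88, 81, 79]
comparison = [72, 70, 80, 68, 75, 77, 71, 74]

# Welch-style t statistic for the difference in group means.
n1, n2 = len(experimental), len(comparison)
m1, m2 = mean(experimental), mean(comparison)
se = math.sqrt(stdev(experimental) ** 2 / n1 + stdev(comparison) ** 2 / n2)
t_stat = (m1 - m2) / se

print(f"group means: {m1:.1f} vs {m2:.1f}, t = {t_stat:.2f}")
```

Even a large difference here could reflect pre-existing differences between the groups rather than the intervention, since there is no pretest and no random assignment.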

Non-experimental research designs are easy to execute in practice, but we must be cautious about drawing causal conclusions from the results. A positive result may still suggest that we should continue using a particular intervention (and no result or a negative result should make us reconsider whether we should use that intervention at all). You have likely seen non-experimental research in your daily life or at your agency, and knowing the basics of how to structure such a project will help you ensure you are providing clients with the best care possible.

  • Non-experimental designs are useful for describing phenomena, but cannot demonstrate causality.
  • After-only designs are often used in agency and practice settings because practitioners are often not able to set up pre-test/post-test designs.
  • Non-experimental designs are useful for explanatory questions in program evaluation and are helpful for researchers when they are trying to develop a new assessment or scale.
  • Non-experimental designs are well-suited to qualitative methods.
  • If you were to use a non-experimental design for your research project, which would you choose? Why?
  • Have you conducted non-experimental research in your practice or professional life? Which type of non-experimental design was it?

13.6 Critical, ethical, and cultural considerations

  • Describe critiques of experimental design
  • Identify ethical issues in the design and execution of experiments
  • Identify cultural considerations in experimental design

As I said at the outset, experiments, and especially true experiments, have long been seen as the gold standard to gather scientific evidence. When it comes to research in the biomedical field and other physical sciences, true experiments are subject to far less nuance than experiments in the social world. This doesn’t mean they are easier—just subject to different forces. However, as a society, we have placed the most value on quantitative evidence obtained through empirical observation and especially experimentation.

Major critiques of experimental designs tend to focus on true experiments, especially randomized controlled trials (RCTs), but many of these critiques can be applied to quasi-experimental designs, too. Some researchers, even in the biomedical sciences, question the view that RCTs are inherently superior to other types of quantitative research designs. RCTs are far less flexible and have much more stringent requirements than other types of research. One seemingly small issue, like incorrect information about a research participant, can derail an entire RCT. RCTs also cost a great deal of money to implement and don’t reflect “real world” conditions. The cost of true experimental research or RCTs also means that some communities are unlikely to ever have access to these research methods. It is then easy for people to dismiss their research findings because their methods are seen as “not rigorous.”

Obviously, controlling outside influences is important for researchers to draw strong conclusions, but what if those outside influences are actually important for how an intervention works? Are we missing really important information by focusing solely on control in our research? Is a treatment going to work the same for white women as it does for indigenous women? With the myriad effects of our societal structures, you should be very careful about ever assuming this will be the case. This doesn't mean that cultural differences will negate the effect of an intervention; instead, it means that you should remember to practice cultural humility when implementing all interventions, even when we "know" they work.

How we build evidence through experimental research reveals a lot about our values and biases, and historically, much experimental research has been conducted on white people, and especially white men. [4] This makes sense when we consider the extent to which the sciences and academia have historically been dominated by white patriarchy. This is especially important for marginalized groups that have long been ignored in research literature, meaning they have also been ignored in the development of interventions and treatments that are accepted as "effective." There are examples of marginalized groups being experimented on without their consent, like the Tuskegee Experiment or Nazi experiments on Jewish people during World War II. We cannot ignore the collective consciousness that situations like these can create about experimental research among marginalized groups.

None of this is to say that experimental research is inherently bad or that you shouldn't use it. Quite the opposite—use it when you can, because there are a lot of benefits, as we learned throughout this chapter. As a social work researcher, you are uniquely positioned to conduct experimental research while applying social work values and ethics to the process, and to be a leader for others to conduct research in the same framework. Experimental research can conflict with our professional ethics, especially respect for persons and beneficence, if we do not engage in it with our eyes wide open. We also have the benefit of a great deal of practice knowledge that researchers in other fields have not had the opportunity to gain. As with all your research, always be sure you are fully exploring the limitations of the research.

  • While true experimental research gathers strong evidence, it can also be inflexible, expensive, and overly simplistic in its treatment of the important social forces that affect the results.
  • Marginalized communities’ past experiences with experimental research can affect how they respond to research participation.
  • Social work researchers should use both their values and ethics, and their practice experiences, to inform research and push other researchers to do the same.
  • Think back to the true experiment you sketched out in the exercises for Section 13.3. Are there cultural or historical considerations you hadn’t thought of with your participant group? What are they? Does this change the type of experiment you would want to do?
  • How can you as a social work researcher encourage researchers in other fields to consider social work ethics and values in their experimental research?

Media Attributions

  • Being kinder to yourself © Evgenia Makarova is licensed under a CC BY-NC-ND (Attribution NonCommercial NoDerivatives) license
  • Original by author is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Original by author. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Original by author. is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • therapist © Zackary Drucker is licensed under a CC BY-NC-ND (Attribution NonCommercial NoDerivatives) license
  • nonexper-pretest-posttest is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Engel, R. & Schutt, R. (2016). The practice of research in social work. Thousand Oaks, CA: SAGE Publications, Inc. ↵
  • Sullivan, G. M. (2011). Getting off the “gold standard”: Randomized controlled trials and education research. Journal of Graduate Medical Education ,  3 (3), 285-289. ↵

experiment: an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law

explanatory research: explains why particular phenomena work in the way that they do; answers "why" questions

extraneous variables: variables and characteristics that have an effect on your outcome, but aren't the primary variable whose influence you're interested in testing

control group: the group of participants in our study who do not receive the intervention we are researching, in experiments with random assignment

experimental group: in experimental design, the group of participants in our study who do receive the intervention we are researching

comparison group: the group of participants in our study who do not receive the intervention we are researching, in experiments without random assignment

random assignment: using a random process to decide which participants are tested in which conditions

generalizability: the ability to apply research findings beyond the study sample to some broader population

internal validity: the ability to say that one variable "causes" something to happen to another variable; very important to assess when thinking about studies that examine causation, such as experimental or quasi-experimental designs

causality: the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief

true experiment: an experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed

pretest/post-test control group design: a type of experimental design in which participants are randomly assigned to control and experimental groups, one group receives an intervention, and both groups receive pre- and post-test assessments

pretest: a measure of a participant's condition before they receive an intervention or treatment

post-test: a measure of a participant's condition after an intervention or, if they are part of the control/comparison group, at the end of an experiment

time order: a demonstration that a change occurred after an intervention; an important criterion for establishing causality

post-test-only control group design: an experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment

testing effects: the measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself

quasi-experimental design: a subtype of experimental design that is similar to a true experiment, but does not have randomly assigned control and treatment groups

individual matching: in nonequivalent comparison group designs, the process by which researchers match individual cases in the experimental group to similar cases in the comparison group

aggregate matching: in nonequivalent comparison group designs, the process in which researchers match the population profile of the comparison and experimental groups

time series: a set of measurements taken at intervals over a period of time

qualitative research: research that involves the use of data that represents human expression through words, pictures, movies, performance, and other artifacts

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Quantitative vs. Qualitative Research in Psychology


In psychology and other social sciences, researchers are faced with an unresolved question: Can we measure concepts like love or racism the same way we can measure temperature or the weight of a star? Social phenomena⁠—things that happen because of and through human behavior⁠—are especially difficult to grasp with typical scientific models.

At a Glance

Psychologists rely on qualitative and quantitative research to better understand human thought and behavior.

  • Qualitative research involves collecting and evaluating non-numerical data in order to understand concepts or subjective opinions.
  • Quantitative research involves collecting and evaluating numerical data. 

This article discusses what qualitative and quantitative research are, how they are different, and how they are used in psychology research.

Qualitative Research vs. Quantitative Research

In order to understand qualitative and quantitative psychology research, it can be helpful to look at the methods that are used and when each type is most appropriate.

Psychologists rely on a few methods to measure behavior, attitudes, and feelings. These include:

  • Self-reports , like surveys or questionnaires
  • Observation (often used in experiments or fieldwork)
  • Implicit attitude tests that measure timing in responding to prompts

Most of these are quantitative methods. The result is a number that can be used to assess differences between groups.

However, most of these methods are static, inflexible (you can't change a question because a participant doesn't understand it), and provide a "what" answer rather than a "why" answer.

Sometimes, researchers are more interested in the "why" and the "how." That's where qualitative methods come in.

Qualitative research is about speaking to people directly and hearing their words. It is grounded in the philosophy that the social world is ultimately unmeasurable, that no measure is truly ever "objective," and that how humans make meaning is just as important as how much they score on a standardized test.

Qualitative research:

  • Used to develop theories
  • Takes a broad, complex approach
  • Answers "why" and "how" questions
  • Explores patterns and themes

Quantitative research:

  • Used to test theories
  • Takes a narrow, specific approach
  • Answers "what" questions
  • Explores statistical relationships

Quantitative Research Methods

Quantitative methods have existed ever since people have been able to count things. But it is only with the positivist philosophy of Auguste Comte (which maintains that factual knowledge obtained by observation is trustworthy) that it became a "scientific method."

The scientific method follows this general process. A researcher must:

  • Generate a theory or hypothesis (i.e., predict what might happen in an experiment) and determine the variables needed to answer their question
  • Develop instruments to measure the phenomenon (such as a survey, a thermometer, etc.)
  • Develop experiments to manipulate the variables
  • Collect empirical (measured) data
  • Analyze data

Quantitative methods are about measuring phenomena, not explaining them.

Quantitative research often compares two or more groups of people. There are all sorts of variables you could measure, and many kinds of experiments to run using quantitative methods.

These comparisons are generally explained using graphs, pie charts, and other visual representations that give the researcher a sense of how the various data points relate to one another.

Basic Assumptions

Quantitative methods assume:

  • That the world is measurable
  • That humans can observe objectively
  • That we can know things for certain about the world from observation

In some fields, these assumptions hold true. Whether you measure the size of the sun 2000 years ago or now, it will always be the same. But when it comes to human behavior, it is not so simple.

As decades of cultural and social research have shown, people behave differently (and even think differently) based on historical context, cultural context, social context, and even identity-based contexts like gender , social class, or sexual orientation .

Therefore, quantitative methods applied to human behavior (as used in psychology and some areas of sociology) should always be rooted in their particular context. In other words: there are no, or very few, human universals.

Statistical information is the primary form of quantitative data used in human and social quantitative research. Statistics provide lots of information about tendencies across large groups of people, but they can never describe every case or every experience. In other words, there are always outliers.
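
A tiny illustration of this point: summary statistics can describe the group tendency while hiding an outlier's very different experience. The numbers below are invented:

```python
from statistics import mean, median

# Hypothetical weekly therapy-session counts for ten clients. The mean
# summarizes the group tendency, but the outlier pulls it away from
# what most individual clients actually experienced.
sessions = [1, 1, 2, 1, 2, 1, 1, 2, 1, 12]

print(f"mean:   {mean(sessions):.1f}")    # mean:   2.4
print(f"median: {median(sessions):.1f}")  # median: 1.0
print(f"max:    {max(sessions)}")         # max:    12
```

No single number here describes every case: the mean of 2.4 sessions fits none of the ten clients exactly, and it says nothing about the one client who needed 12.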

Correlation and Causation

A basic principle of statistics is that correlation is not causation. Researchers can only claim a cause-and-effect relationship under certain conditions:

  • The study was a true experiment.
  • The independent variable can be manipulated (for example, researchers cannot manipulate gender, but they can change the primer a study subject sees, such as a picture of nature or of a building).
  • The dependent variable can be measured through a ratio or a scale.

So when you read a report that "gender was linked to" something (like a behavior or an attitude), remember that gender is NOT a cause of the behavior or attitude. There is an apparent relationship, but the true cause of the difference is hidden.
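
The hidden-cause problem can be simulated directly. In this sketch, a lurking confounder drives two variables that have no causal link to each other, yet they still correlate strongly (the variable names, coefficients, and noise levels are arbitrary assumptions for illustration):

```python
import random
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(0)
# A hidden confounder (say, outdoor temperature) drives both variables;
# neither variable influences the other.
confounder = [random.uniform(0, 30) for _ in range(200)]
ice_cream = [2.0 * c + random.gauss(0, 5) for c in confounder]   # sales
drownings = [0.5 * c + random.gauss(0, 3) for c in confounder]   # incidents

print(f"r = {pearson_r(ice_cream, drownings):.2f}")
```

The correlation is real, but reading it as "ice cream sales cause drownings" (or the reverse) would mistake the confounder's influence for a causal effect.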

Pitfalls of Quantitative Research

Quantitative methods are one way to approach the measurement and understanding of human and social phenomena. But what's missing from this picture?

As noted above, statistics do not tell us about personal, individual experiences and meanings. While surveys can give a general idea, respondents have to choose between only a few responses. This can make it difficult to understand the subtleties of different experiences.

Quantitative methods can be helpful when making objective comparisons between groups or when looking for relationships between variables. They can be analyzed statistically, which can be helpful when looking for patterns and relationships.

Qualitative Research Methods

Qualitative data are not made out of numbers but rather of descriptions, metaphors, symbols, quotes, analysis, concepts, and characteristics. This approach uses interviews, written texts, art, photos, and other materials to make sense of human experiences and to understand what these experiences mean to people.

While quantitative methods ask "what" and "how much," qualitative methods ask "why" and "how."

Qualitative methods are about describing and analyzing phenomena from a human perspective. There are many different philosophical views on qualitative methods, but in general, they agree that some questions are too complex or impossible to answer with standardized instruments.

These methods also accept that it is impossible to be completely objective in observing phenomena. Researchers have their own thoughts, attitudes, experiences, and beliefs, and these always color how people interpret results.

Qualitative Approaches

There are many different approaches to qualitative research, with their own philosophical bases. Different approaches are best for different kinds of projects. For example:

  • Case studies and narrative studies are best for single individuals. These involve studying every aspect of a person's life in great depth.
  • Phenomenology aims to explain experiences. This type of work aims to describe and explore different events as they are consciously and subjectively experienced.
  • Grounded theory develops models and describes processes. This approach allows researchers to construct a theory based on data that is collected, analyzed, and compared to reach new discoveries.
  • Ethnography describes cultural groups. In this approach, researchers immerse themselves in a community or group in order to observe behavior.

Qualitative researchers must be aware of several different methods and know each thoroughly enough to produce valuable research.

Some researchers specialize in a single method, but others specialize in a topic or content area and use many different methods to explore the topic, providing different information and a variety of points of view.

There is not a single model or method that can be used for every qualitative project. Depending on the research question, the people participating, and the kind of information they want to produce, researchers will choose the appropriate approach.

Interpretation

Qualitative research does not look into causal relationships between variables, but rather into themes, values, interpretations, and meanings. As a rule, then, qualitative research is not generalizable (cannot be applied to people outside the research participants).

However, with proper attention to specific historical and social contexts, the insights gained from qualitative research can still extend to other groups.

Relationship Between Qualitative and Quantitative Research

It might sound like quantitative and qualitative research do not play well together. They have different philosophies, different data, and different outputs. However, this could not be further from the truth.

These two general methods complement each other. By using both, researchers can gain a fuller, more comprehensive understanding of a phenomenon.

For example, a psychologist wanting to develop a new survey instrument about sexuality might first ask a few dozen people questions about their sexual experiences (this is qualitative research). This gives the researcher some information to begin developing questions for their survey (which is a quantitative method).

After the survey, the same or other researchers might want to dig deeper into issues brought up by its data. Follow-up questions like "how does it feel when...?" or "what does this mean to you?" or "how did you experience this?" can only be answered by qualitative research.

By using both quantitative and qualitative data, researchers have a more holistic, well-rounded understanding of a particular topic or phenomenon.

Qualitative and quantitative methods both play an important role in psychology. Where quantitative methods can help answer questions about what is happening in a group and to what degree, qualitative methods can dig deeper into the reasons behind why it is happening. By using both strategies, psychology researchers can learn more about human thought and behavior.


By Anabelle Bernard Fournier Anabelle Bernard Fournier is a researcher of sexual and reproductive health at the University of Victoria as well as a freelance writer on various health topics.


Nursing 360: Types of Research within Qualitative and Quantitative


Components of Quantitative (Empirical) Research Papers

  • Statement of purpose: what was studied and why  
  • Methodology: experimental group, control group, variables, test conditions, test subjects, etc.
  • Results: usually numeric and presented in tables or graphs, often with statistical analysis.
  • Conclusions drawn from the results  
  • References and sometimes footnotes 
  • Author credentials

Hint: The abstract (summary) of a research article is the first place to check for most of the components above.  The abstract appears both in a database record and at the beginning of the article.

Types of Quantitative Research

Four (4) main types of quantitative design: descriptive, correlational, quasi-experimental, experimental

http://samples.jbpub.com/9780763780586/80586_CH03_Keele.pdf

Types of Qualitative Research

 

Case study

Attempts to shed light on a phenomenon by studying in-depth a single case example of the phenomenon.  The case may be an individual, an event, a group, or an institution.

Grounded theory

Attempts to explain the social and psychological processes that characterize an event or situation.

Phenomenology

Describes the structures of experience as they present themselves to consciousness, without recourse to theory, deduction, or assumptions from other disciplines.

Ethnography

Focuses on the sociology of meaning through close field observation of sociocultural phenomena. Typically, the ethnographer focuses on a community.

Historical

Systematic collection and objective evaluation of data related to past occurrences in order to test hypotheses concerning causes, effects, or trends of these events that may help to explain present events and anticipate future events. (Gay, 1996)

http://wilderdom.com/OEcourses/PROFLIT/Class6Qualitative1.htm

  • Last Updated: Aug 13, 2024 1:42 PM
  • URL: https://libguides.sdstate.edu/nursing360

LibGuides Footer; South Dakota State University; Brookings, SD 57007; 1-800-952-3541


Qualitative vs Quantitative Research: Differences, Examples, and Methods

There are two broad kinds of research approaches, qualitative and quantitative, that are used to study and analyze phenomena in fields such as the natural sciences, social sciences, and humanities. Whether you have realized it or not, your research must have followed either or both of these research types. In this article we will discuss what qualitative vs quantitative research is, their applications, pros and cons, and when to use each. Before we get into the details, it is important to understand the differences between qualitative and quantitative research.


Qualitative vs Quantitative Research

Quantitative research deals with quantity; hence, this research type is concerned with numbers and statistics to prove or disprove theories or hypotheses. In contrast, qualitative research is all about quality: characteristics, unquantifiable features, and meanings, used to seek a deeper understanding of behavior and phenomena. These two methodologies serve complementary roles in the research process, each offering unique insights and methods suited to different research questions and objectives.

Qualitative and quantitative research approaches have their own unique characteristics, drawbacks, advantages, and uses. Where quantitative research is mostly employed to validate theories or assumptions with the goal of generalizing facts to the larger population, qualitative research is used to study concepts, thoughts, or experiences in order to uncover the underlying reasons, motivations, and meanings behind human behavior.

What Are the Differences Between Qualitative and Quantitative Research  

Qualitative and quantitative research differ in the methods they employ to conduct research and to collect and analyze data. For example, qualitative research usually relies on interviews, observations, and textual analysis to explore subjective experiences and diverse perspectives, while quantitative data collection methods include surveys, experiments, and statistical analysis to gather and analyze numerical data. The differences between the two research approaches across various aspects are listed in the table below.

     
  Aspect | Qualitative research | Quantitative research
  Objective | Understanding meanings; exploring ideas, behaviors, and contexts; formulating theories | Generating and analyzing numerical data; quantifying variables using logical, statistical, and mathematical techniques to test or prove hypotheses
  Sample size | Limited sample size, typically not representative | Large sample size to draw conclusions about the population
  Data format | Expressed using words; non-numeric, textual, and visual narrative | Expressed using numerical data in the form of graphs or values; statistical, measurable, and numerical
  Data collection | Interviews, focus groups, observations, ethnography, literature review, and surveys | Surveys, experiments, and structured observations
  Analysis | Inductive, thematic, and narrative in nature | Deductive, statistical, and numerical in nature
  Perspective | Subjective | Objective
  Question format | Open-ended questions | Close-ended (yes/no) or multiple-choice questions
  Findings | Descriptive and contextual | Quantifiable and generalizable
  Generalizability | Limited; only context-dependent findings | High; results applicable to a larger population
  Nature of research | Exploratory research method | Conclusive research method
  When to use | To delve deeper into a topic to understand underlying themes, patterns, and concepts | To analyze the cause-and-effect relation between variables to understand a complex phenomenon
  Examples | Case studies, ethnography, and content analysis | Surveys, experiments, and correlation studies


Data Collection Methods  

There are differences between qualitative and quantitative research when it comes to data collection as they deal with different types of data. Qualitative research is concerned with personal or descriptive accounts to understand human behavior within society. Quantitative research deals with numerical or measurable data to delineate relations among variables. Hence, the qualitative data collection methods differ significantly from quantitative data collection methods due to the nature of data being collected and the research objectives. Below is the list of data collection methods for each research approach:    

Qualitative Research Data Collection  

  • Interviews  
  • Focus groups
  • Content analysis
  • Literature review  
  • Observation  
  • Ethnography  

Qualitative research data collection can involve one-on-one or group interviews to capture in-depth perspectives of participants using open-ended questions. These interviews may be structured, semi-structured, or unstructured depending on the nature of the study. Focus groups can be used to explore specific topics and generate rich data through discussions among participants. Another qualitative data collection method is content analysis, which involves systematically analyzing text documents, audio and video files, or visual content to uncover patterns, themes, and meanings; this can be done through coding and categorization of raw data to draw meaningful insights. Data can also be collected through observation studies, where the goal is simply to observe and document behaviors, interactions, and phenomena in natural settings without interference. Lastly, ethnography allows researchers to immerse themselves in the culture or environment under study for a prolonged period to gain a deep understanding of the social phenomena.
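As a rough sketch of the coding and categorization step described above, a handful of coded excerpts can be tallied to see which codes recur. The excerpts and codes below are invented for illustration; in practice the researcher assigns codes manually or with qualitative analysis software.

```python
from collections import Counter

# Hypothetical interview excerpts, each tagged with one or more codes
# that the researcher assigned during a first coding pass.
coded_excerpts = [
    ("I felt lost during the first weeks", ["disorientation"]),
    ("My flatmate corrected my grammar every day", ["peer support", "practice"]),
    ("Classes moved too fast to follow", ["pace", "disorientation"]),
    ("Chatting at the cafe helped a lot", ["practice", "peer support"]),
]

# Tally how often each code appears across all excerpts.
code_counts = Counter(code for _, codes in coded_excerpts for code in codes)
for code, n in code_counts.most_common():
    print(f"{code}: {n}")
```

Recurring codes like these become candidates for the broader themes developed later in the analysis.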

Quantitative Research Data Collection  

  • Surveys/questionnaires
  • Experiments
  • Secondary data analysis
  • Structured observations
  • Case studies
  • Tests and assessments

Quantitative research data collection approaches comprise fundamental methods for generating numerical data that can be analyzed using statistical or mathematical tools. The most common quantitative data collection approach is the use of structured surveys with close-ended questions to collect quantifiable data from a large sample of participants. These can be conducted online, over the phone, or in person.

Performing experiments is another important data collection approach, in which variables are manipulated under controlled conditions to observe their effects on dependent variables. This often involves random assignment of participants to different conditions or groups. Such experimental settings are used to gauge cause-and-effect relationships and understand complex phenomena. At times, instead of acquiring original data, researchers may work with secondary data: datasets curated by others, such as government agencies, research organizations, or academic institutions. This secondary data is then analyzed to identify patterns and relationships among variables. With structured observations, researchers systematically observe and record behaviors or phenomena in controlled settings, holding variables constant where possible to clarify the relationships among them. Finally, in case studies a researcher examines a single entity or a small number of entities (individuals or organizations) in detail to understand complex phenomena within a specific context.

Qualitative vs Quantitative Research Outcomes  

Qualitative research and quantitative research lead to different outcomes, each with its own strengths and limitations. Qualitative research outcomes provide deep, descriptive accounts of human experiences, motivations, and perspectives, allowing researchers to identify themes, narratives, and the context in which behaviors, attitudes, or phenomena occur. Quantitative research outcomes, on the other hand, produce numerical data that is analyzed statistically to establish patterns and relationships objectively, to form generalizations about the larger population, and to make predictions. This numerical data can be presented in the form of graphs, tables, or charts. Both approaches offer valuable perspectives on complex phenomena: qualitative research focuses on depth and interpretation, while quantitative research emphasizes numerical analysis and objectivity.


When to Use Qualitative vs Quantitative Research Approach  

The decision to choose between qualitative and quantitative research depends on various factors, such as the research question, objectives, whether you are taking an inductive or deductive approach, available resources, practical considerations such as time and money, and the nature of the phenomenon under investigation. To simplify, quantitative research can be used if the aim of the research is to prove or test a hypothesis, while qualitative research should be used if the research question is more exploratory and an in-depth understanding of the concepts, behavior, or experiences is needed.     

Qualitative research approach  

The qualitative research approach is used in the following scenarios:

  • To study complex phenomena: When the research requires understanding the depth, complexity, and context of a phenomenon.  
  • Collecting participant perspectives: When the goal is to understand the “why” behind a certain behavior and to capture the subjective experiences and perceptions of participants.
  • Generating hypotheses or theories: When generating hypotheses, theories, or conceptual frameworks based on exploratory research.  

Example: Suppose you have the research question “What obstacles do expatriate students encounter when acquiring a new language in their host country?”

This research question can be addressed using the qualitative research approach by conducting in-depth interviews with 15-25 expatriate university students. Ask open-ended questions such as “What are the major challenges you face while attempting to learn the new language?”, “Do you find it difficult to learn the language as an adult?”, and “Do you feel practicing with a native friend or colleague helps the learning process?”

Based on these answers, a follow-up questionnaire can be planned for clarification. The next step is to transcribe all interviews using transcription software and identify themes and patterns.

Quantitative research approach  

The quantitative research approach is used in the following scenarios:

  • Testing hypotheses or proving theories: When aiming to test hypotheses, establish relationships, or examine cause-and-effect relationships.   
  • Generalizability: When needing findings that can be generalized to broader populations using large, representative samples.  
  • Statistical analysis: When requiring rigorous statistical analysis to quantify relationships, patterns, or trends in data.   

Example: Continuing with the scenario above, you can conduct a survey of 200-300 expatriate university students and ask them specific questions such as: “On a scale of 1-10, how difficult is it to learn a new language?”

Next, statistical analysis can be performed on the responses to draw conclusions such as: on average, expatriate students rated the difficulty of learning a new language 6.5 out of 10.
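A minimal sketch of this kind of summary, assuming ten hypothetical 1-10 difficulty ratings (real data would come from the full 200-300 responses):

```python
import statistics

# Hypothetical difficulty ratings (1-10 scale) from the survey.
ratings = [7, 6, 8, 5, 7, 6, 9, 6, 7, 4]

mean = statistics.mean(ratings)
stdev = statistics.stdev(ratings)
# Standard error of the mean indicates how precise the average is.
sem = stdev / len(ratings) ** 0.5

print(f"mean difficulty: {mean:.1f}, SE: {sem:.2f}")
```

With a genuinely large sample, the standard error shrinks and the mean rating becomes a more defensible generalization.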

Mixed methods approach  

In many cases, researchers may opt for a mixed methods approach, combining qualitative and quantitative methods to leverage the strengths of both. Researchers may use qualitative data to explore phenomena in depth and generate hypotheses, while quantitative data can be used to test these hypotheses and generalize findings to broader populations.

Example: Both qualitative and quantitative research methods can be used in combination to address the above research question. Through open-ended questions you can gain insights about different perspectives and experiences while quantitative research allows you to test that knowledge and prove/disprove your hypothesis.   

How to Analyze Qualitative and Quantitative Data  

When it comes to analyzing qualitative and quantitative data, the focus is on identifying patterns in the data to highlight the relationship between elements. The best research method for any given study should be chosen based on the study aim. A few methods to analyze qualitative and quantitative data are listed below.  

Analyzing qualitative data  

Qualitative data analysis is challenging because the data are not expressed in numbers; they consist mostly of texts, images, or videos. Hence, care must be taken in choosing an analytical approach. Some common approaches to analyzing qualitative data include:

  • Organization: The first step is organizing the data (transcripts or notes) into categories of similar concepts, themes, and patterns to find interrelationships.
  • Coding: Data can be arranged in categories based on themes/concepts using coding.  
  • Theme development: Utilize higher-level organization to group related codes into broader themes.  
  • Interpretation: Explore the meaning behind different emerging themes to understand connections. Use different perspectives like culture, environment, and status to evaluate emerging themes.  
  • Reporting: Present findings with quotes or excerpts to illustrate key themes.   
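The coding, theme-development, and reporting steps above can be sketched in a few lines; the codes, themes, and counts here are invented for illustration.

```python
# Theme development: related codes identified during coding are grouped
# under broader themes (hypothetical example).
themes = {
    "social barriers": ["isolation", "fear of mistakes"],
    "learning strategies": ["immersion", "peer practice", "media use"],
}

# How many coded segments were tagged with each code (hypothetical counts).
code_frequencies = {
    "isolation": 4, "fear of mistakes": 7,
    "immersion": 5, "peer practice": 9, "media use": 3,
}

# Aggregate code frequencies up to the theme level for reporting.
theme_counts = {
    theme: sum(code_frequencies[c] for c in codes)
    for theme, codes in themes.items()
}
print(theme_counts)
```

In a real study these frequencies would be reported alongside illustrative quotes, not as a substitute for interpretation.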

Analyzing quantitative data  

Quantitative data analysis is more direct compared to qualitative data as it primarily deals with numbers. Data can be evaluated using simple math or advanced statistics (descriptive or inferential). Some common approaches to analyze quantitative data include:  

  • Processing raw data: Check missing values, outliers, or inconsistencies in raw data.  
  • Descriptive statistics: Summarize data with means, standard deviations, or standard error using programs such as Excel, SPSS, or R language.  
  • Exploratory data analysis: Usage of visuals to deduce patterns and trends.  
  • Hypothesis testing: Apply statistical tests (e.g., Student’s t-test or ANOVA) to assess significance and test hypotheses.
  • Interpretation: Analyze results considering significance and practical implications.  
  • Validation: Data validation through replication or literature review.  
  • Reporting: Present findings by means of tables, figures, or graphs.   
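The descriptive-statistics and hypothesis-testing steps above can be illustrated with hypothetical scores for two groups; the t statistic here is Welch's two-sample version, computed by hand so the sketch needs only the standard library.

```python
import statistics

# Hypothetical test scores for two groups (e.g., control vs. treatment).
control = [72, 68, 75, 70, 74, 69, 71]
treatment = [78, 82, 76, 80, 79, 83, 77]

# Descriptive statistics summarize each group.
m1, m2 = statistics.mean(control), statistics.mean(treatment)
v1, v2 = statistics.variance(control), statistics.variance(treatment)
n1, n2 = len(control), len(treatment)

# Welch's t statistic compares the two means without assuming equal variances.
t = (m2 - m1) / ((v1 / n1 + v2 / n2) ** 0.5)
print(f"control mean {m1:.1f}, treatment mean {m2:.1f}, t = {t:.2f}")
```

In practice one would also obtain a p-value (e.g., via SPSS, R, or SciPy) and check the test's assumptions before interpreting the result.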


Benefits and limitations of qualitative vs quantitative research  

There are significant differences between qualitative and quantitative research; we have listed the benefits and limitations of both methods below:  

Benefits of qualitative research  

  • Rich insights: As qualitative research often produces information-rich data, it aids in gaining in-depth insights into complex phenomena, allowing researchers to explore nuances and meanings of the topic of study.  
  • Flexibility: One of the most important benefits of qualitative research is flexibility in acquiring and analyzing data that allows researchers to adapt to the context and explore more unconventional aspects.  
  • Contextual understanding: With descriptive and comprehensive data, understanding the context in which behaviors or phenomena occur becomes accessible.   
  • Capturing different perspectives: Qualitative research allows for capturing different participant perspectives with open-ended question formats that further enrich data.   
  • Hypothesis/theory generation: Qualitative research is often the first step in generating theory/hypothesis, which leads to future investigation thereby contributing to the field of research.

Limitations of qualitative research  

  • Subjectivity: It is difficult to achieve objective interpretation in qualitative research, as findings may be influenced by the researchers’ own perspectives. This risk of researcher bias in interpretation affects the reliability and validity of the results.
  • Limited generalizability: Due to the presence of small, non-representative samples, the qualitative data cannot be used to make generalizations to a broader population.  
  • Cost and time intensive: Qualitative data collection can be time-consuming and resource-intensive, therefore, it requires strategic planning and commitment.   
  • Complex analysis: Analyzing qualitative data needs specialized skills and techniques, hence, it’s challenging for researchers without sufficient training or experience.   
  • Potential misinterpretation: There is a risk of sampling bias and misinterpretation in data collection and analysis if researchers lack cultural or contextual understanding.   

Benefits of quantitative research  

  • Objectivity: A key benefit of quantitative research approach, this objectivity reduces researcher bias and subjectivity, enhancing the reliability and validity of findings.   
  • Generalizability: Large, representative samples allow quantitative findings to be generalized to broader populations.
  • Statistical analysis: Quantitative research enables rigorous statistical analysis (increasing power of the analysis), aiding hypothesis testing and finding patterns or relationship among variables.   
  • Efficiency: Quantitative data collection and analysis is usually more efficient compared to the qualitative methods, especially when dealing with large datasets.   
  • Clarity and Precision: The findings are usually clear and precise, making it easier to present them as graphs, tables, and figures to convey them to a larger audience.  

Limitations of quantitative research  

  • Lacks depth and details: Due to its objective nature, quantitative research might lack the depth and richness of qualitative approaches, potentially overlooking important contextual factors or nuances.   
  • Limited exploration: By not considering the subjective experiences of participants in depth, there is limited scope to study complex phenomena in detail.
  • Potential oversimplification: Quantitative research may oversimplify complex phenomena by boiling them down to numbers, which might ignore key nuances.   
  • Inflexibility: Quantitative research deals with predetermined variables and measures, which limits researchers’ ability to explore unexpected findings or adjust the research design as new information becomes available.
  • Ethical consideration: Quantitative research may raise ethical concerns especially regarding privacy, informed consent, and the potential for harm, when dealing with sensitive topics or vulnerable populations.   

Frequently asked questions  

  • What is the difference between qualitative and quantitative research? 

Quantitative methods use numerical data and statistical analysis for objective measurement and hypothesis testing, emphasizing generalizability. Qualitative methods gather non-numerical data to explore subjective experiences and contexts, providing rich, nuanced insights.  

  • What are the types of qualitative research? 

Qualitative research methods include interviews, observations, focus groups, and case studies. They provide rich insights into participants’ perspectives and behaviors within their contexts, enabling exploration of complex phenomena.  

  • What are the types of quantitative research? 

Quantitative research methods include surveys, experiments, observations, correlational studies, and longitudinal research. They gather numerical data for statistical analysis, aiming for objectivity and generalizability.  

  • Can you give me examples for qualitative and quantitative research? 

Qualitative Research Example: 

Research Question: What are the experiences of parents with autistic children in accessing support services?  

Method: Conducting in-depth interviews with parents to explore their perspectives, challenges, and needs.  

Quantitative Research Example: 

Research Question: What is the correlation between sleep duration and academic performance in college students?  

Method: Distributing surveys to a large sample of college students to collect data on their sleep habits and academic performance, then analyzing the data statistically to determine any correlations.  
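The correlation in this example can be sketched with a handful of invented sleep/GPA pairs; Pearson's r is computed directly from its definition (covariance divided by the product of the standard deviations), so nothing beyond plain Python is needed.

```python
# Hypothetical paired observations: hours of sleep and GPA for six students.
sleep = [6.0, 7.5, 5.5, 8.0, 7.0, 6.5]
gpa = [2.9, 3.4, 2.5, 3.6, 3.3, 2.9]

n = len(sleep)
mean_x = sum(sleep) / n
mean_y = sum(gpa) / n

# Pearson's r: sum of cross-deviations over the geometric mean of
# the sums of squared deviations.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(sleep, gpa))
var_x = sum((x - mean_x) ** 2 for x in sleep)
var_y = sum((y - mean_y) ** 2 for y in gpa)
r = cov / (var_x * var_y) ** 0.5

print(f"Pearson r = {r:.2f}")
```

A value of r near +1 indicates a strong positive association, but, as with any correlational design, it says nothing about causation.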



Quantitative Research Designs: Non-Experimental vs. Experimental


While there are many types of quantitative research designs, they generally fall under one of three umbrellas: experimental research, quasi-experimental research, and non-experimental research.

Experimental research designs are what many people think of when they think of research; they typically involve the manipulation of variables and random assignment of participants to conditions. A traditional experiment may compare a control group to an experimental group that receives a treatment (i.e., a variable is manipulated). When done correctly, experimental designs can provide evidence for cause and effect. Because of their ability to determine causation, experimental designs are the gold standard for research in medicine, biology, and so on. However, such designs can also be used in the “soft sciences,” like social science. Experimental research has strict standards for control within the research design and for establishing validity. These designs may also be very resource and labor intensive. Additionally, it can be hard to justify the generalizability of results obtained in a very tightly controlled or artificial experimental setting. However, if done well, experimental research methods can lead to some very convincing and interesting results.
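The random assignment that defines a true experiment can be sketched as a simple shuffle-and-split; the participant IDs and seed below are made up for illustration.

```python
import random

# Hypothetical pool of twenty participants.
participants = [f"P{i:02d}" for i in range(1, 21)]

rng = random.Random(42)  # fixed seed so the assignment is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

# Split the shuffled pool evenly: each participant had an equal chance
# of ending up in either group.
half = len(shuffled) // 2
treatment_group = shuffled[:half]
control_group = shuffled[half:]

print(len(treatment_group), len(control_group))
```

Because assignment is random rather than based on any participant characteristic, systematic differences between the groups are unlikely, which is what licenses causal conclusions.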


Non-experimental research, on the other hand, can be just as interesting, but you cannot draw the same conclusions from it as you can with experimental research. Non-experimental research is usually descriptive or correlational, which means that you are either describing a situation or phenomenon simply as it stands, or you are describing a relationship between two or more variables, all without any interference from the researcher. This means that you do not manipulate any variables (e.g., change the conditions that an experimental group undergoes) or randomly assign participants to a control or treatment group. Without this level of control, you cannot determine any causal effects. While validity is still a concern in non-experimental research, the concerns are more about the validity of the measurements, rather than the validity of the effects.

Finally, a quasi-experimental design is a combination of the two designs described above. In a quasi-experimental design you still manipulate a variable in the experimental group, but there is no random assignment into groups. Quasi-experimental designs are most common when the researcher uses a convenience sample to recruit participants. For example, say you were interested in studying the effect of stress on student test scores at the school where you work. You teach two separate classes, so you decide to use each class as a group: class A becomes the experimental group that experiences the stressor manipulation, and class B becomes the control group. Because you are sampling from two pre-existing groups without any random assignment, this is known as a quasi-experimental design. These designs are very useful when you want to find a causal relationship between variables but cannot randomly assign people to groups for practical or ethical reasons, such as working with a population of clinically depressed people or looking for gender differences (we can’t randomly assign people to be clinically depressed or to be a different gender). While these studies sometimes have higher external validity than a true experimental design, since they involve real-world interventions and groups rather than a laboratory setting, the lack of random assignment severely limits the causal conclusions that can be drawn.

So, how do you choose between these designs? This will depend on your topic, your available resources, and desired goal. For example, do you want to see if a particular intervention relieves feelings of anxiety? The most convincing results for that would come from a true experimental design with random sampling and random assignment to groups. Ultimately, this is a decision that should be made in close collaboration with your advisor. Therefore, I recommend discussing the pros and cons of each type of research, what it might mean for your personal dissertation process, and what is required of each design before making a decision.


Sac State Library

Research Methods Simplified

Quantitative

Quantitative research - concerned with precise measurement; it is replicable, controlled, and used to predict events. It is a formal, objective, systematic process in which numerical data are used to obtain information about the subject under study.

  • Uses data that are numeric
  • Primarily intended to test theories
  • Deductive and outcome-oriented
  • Examples of statistical techniques used for quantitative data analysis: random sampling, regression analysis, factor analysis, correlation, cluster analysis, causal modeling, and standardized tests

For comparative information on qualitative vs. quantitative research, see the University of Arkansas University Library LibGuides.

Related Information

Control group - the group of subjects or elements NOT exposed to the experimental treatment in a study where the sample is randomly selected

Experimental group - the group of subjects receiving the experimental treatment, i.e., the independent variable (the controlled measure or cause) in an experiment.

Independent variable - the variable or measure being manipulated or controlled by the experimenter. Participants are typically assigned to levels of the independent variable by random assignment.

Dependent variable or dependent measure - the factor that the experimenter predicts is affected by the independent variable, i.e., the response, outcome or effect from the participants that the experimenter is measuring.

Four types of Quantitative Research

Descriptive  

1) Descriptive - provides a description and exploration of phenomena in real-life situations. Characteristics of particular individuals, situations, or groups are described.

Comparative

2) Comparative - a systematic investigation of relationships between two or more variables, used to explain the nature of relationships in the world. Correlations may be positive (as one variable increases, so does the other) or negative (one variable increases while the other decreases).

Quasi-experimental

3) Quasi-experimental - a study that resembles an experiment, but random assignment plays no role in determining which participants receive a given level of treatment. These studies generally have less internal validity than true experiments.

Experimental (empirical)

4) Experimental (empirical) method - the scientific method used to test an experimental hypothesis or premise. Consists of a control group (not exposed to the experimental treatment) and an experimental group (exposed to the treatment, i.e., the independent variable).

  • Last Updated: Jul 3, 2024 2:35 PM
  • URL: https://csus.libguides.com/res-meth

Frequently asked questions

What’s the difference between a control group and an experimental group?

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Frequently asked questions: Methodology

Quantitative observations involve measuring or counting something and expressing the result in numerical form, while qualitative observations involve describing something in non-numerical terms, such as its appearance, texture, or color.

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Scope of research is determined at the beginning of your research process , prior to the data collection stage. Sometimes called “scope of study,” your scope delineates what will and will not be covered in your project. It helps you focus your work and your time, ensuring that you’ll be able to achieve your goals and outcomes.

Defining a scope can be very useful in any research project, from a research proposal to a thesis or dissertation . A scope is needed for all types of research: quantitative , qualitative , and mixed methods .

To define your scope of research, consider the following:

  • Budget constraints or any specifics of grant funding
  • Your proposed timeline and duration
  • Specifics about your population of study, your proposed sample size , and the research methodology you’ll pursue
  • Any inclusion and exclusion criteria
  • Any anticipated control , extraneous , or confounding variables that could bias your research if not accounted for properly.

Inclusion and exclusion criteria are predominantly used in non-probability sampling . In purposive sampling and snowball sampling , restrictions apply as to who can be included in the sample .

Inclusion and exclusion criteria are typically presented and discussed in the methodology section of your thesis or dissertation .

The purpose of theory-testing mode is to find evidence in order to disprove, refine, or support a theory. As such, generalisability is not the aim of theory-testing mode.

Due to this, the priority of researchers in theory-testing mode is to eliminate alternative causes for relationships between variables . In other words, they prioritise internal validity over external validity , including ecological validity .

Convergent validity shows how much a measure of one construct aligns with other measures of the same or related constructs .

On the other hand, concurrent validity is about how a measure matches up to some known criterion or gold standard, which can be another measure.

Although both types of validity are established by calculating the association or correlation between a test score and another variable , they represent distinct validation methods.

Validity tells you how accurately a method measures what it was designed to measure. There are 4 main types of validity :

  • Construct validity : Does the test measure the construct it was designed to measure?
  • Face validity : Does the test appear to be suitable for its objectives ?
  • Content validity : Does the test cover all relevant parts of the construct it aims to measure?
  • Criterion validity : Do the results accurately measure the concrete outcome they are designed to measure?

Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.

Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:

  • Concurrent validity is a validation strategy where the scores of a test and the criterion are obtained at the same time
  • Predictive validity is a validation strategy where the criterion variables are measured after the scores of the test

Attrition refers to participants leaving a study. It always happens to some extent – for example, in randomised control trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .
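To make differential attrition concrete, dropout rates can be computed per group and compared; the enrolment and completion counts below are invented.

```python
# Hypothetical enrolment and completion counts for the two arms of a trial.
enrolled = {"intervention": 120, "control": 118}
completed = {"intervention": 84, "control": 106}

# Attrition rate per group: share of enrolled participants who dropped out.
attrition = {g: (enrolled[g] - completed[g]) / enrolled[g] for g in enrolled}

# A large gap between the two rates is a sign of differential attrition.
gap = abs(attrition["intervention"] - attrition["control"])
print({g: round(r, 2) for g, r in attrition.items()}, round(gap, 2))
```

Here the intervention arm loses a much larger share of its participants than the control arm, so the completers in the two groups may no longer be comparable.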

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analysing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Construct validity refers to how well a test measures the concept (or construct) it was designed to measure. Assessing construct validity is especially important when you’re researching concepts that can’t be quantified and/or are intangible, like introversion. To ensure construct validity, your test should be based on known indicators of introversion (operationalisation).

On the other hand, content validity assesses how well the test represents all aspects of the construct. If some aspects are missing or irrelevant parts are included, the test has low content validity.

Construct validity has convergent and discriminant subtypes. Together, they help determine whether a test measures the intended construct.

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.
  • Reproducing research entails reanalysing the existing data in the same manner.
  • Replicating (or repeating) the research entails re-conducting the entire study, including the collection of new data.
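The distinction can be sketched in code. In this plain-Python illustration (the analysis, data-collection function, and seeds are invented stand-ins), reproduction reruns the documented analysis on the existing data, while replication collects new data and runs the same analysis on it:

```python
import random
import statistics

def analyse(data):
    """The documented analysis: here, simply the sample mean."""
    return statistics.mean(data)

def collect_data(seed):
    """Stand-in for data collection; the seed only makes the example deterministic."""
    rng = random.Random(seed)
    return [rng.gauss(100, 15) for _ in range(50)]

original = collect_data(seed=1)

# Reproduction: reanalyse the *existing* data in the same manner.
reproduced = analyse(original)

# Replication: repeat the whole study, including collecting *new* data,
# then run the same documented analysis on it.
replicated = analyse(collect_data(seed=2))
```

A transparent method section plays the role of `analyse` and `collect_data` here: it must be detailed enough for another researcher to rerun both steps.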

Snowball sampling is a non-probability sampling method. Unlike probability sampling (which involves some form of random selection), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method, where there is not an equal chance for every member of the population to be included in the sample.

This means that you cannot use inferential statistics and make generalisations – often the goal of quantitative research. As such, a snowball sample is not representative of the target population, and is usually a better fit for qualitative research.

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones. 

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extra-marital affairs)

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup (probability sampling). In quota sampling you select a predetermined number or proportion of units, in a non-random manner (non-probability sampling).
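The contrast can be sketched in plain Python. The population below is made up for the example (1,000 units, 60% in subgroup A and 40% in subgroup B), as are the function names:

```python
import random

# Hypothetical population: 600 units in subgroup A, 400 in subgroup B.
population = ([{"group": "A", "id": i} for i in range(600)]
              + [{"group": "B", "id": i} for i in range(400)])

def stratified_sample(pop, n, rng):
    """Probability sampling: draw a *random* sample from each subgroup,
    proportional to the subgroup's share of the population."""
    strata = {}
    for unit in pop:
        strata.setdefault(unit["group"], []).append(unit)
    sample = []
    for units in strata.values():
        k = round(n * len(units) / len(pop))
        sample.extend(rng.sample(units, k))
    return sample

def quota_sample(pop, n):
    """Non-probability sampling: fill each quota with the first units
    encountered (a stand-in for convenience recruitment)."""
    quotas = {"A": round(n * 0.6), "B": round(n * 0.4)}
    sample = []
    for unit in pop:
        if quotas.get(unit["group"], 0) > 0:
            sample.append(unit)
            quotas[unit["group"]] -= 1
    return sample

strat = stratified_sample(population, 100, random.Random(42))
quota = quota_sample(population, 100)
```

Both samples end up mirroring the 60/40 split, but only the stratified sample gives every unit a known, non-zero chance of selection.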

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves recruiting whoever happens to be available, which means that not everyone has an equal chance of being selected, depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection , using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous, so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous, as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .
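A minimal plain-Python sketch of that difference (the four groups and their sizes are invented for the example):

```python
import random

# Hypothetical population organised into four groups (e.g., schools),
# 20 pupils each.
groups = {g: [f"{g}-{i}" for i in range(20)]
          for g in ["north", "south", "east", "west"]}
rng = random.Random(7)

# Cluster sampling: randomly select entire groups, keep ALL their units.
chosen = rng.sample(sorted(groups), 2)
cluster_sample = [unit for g in chosen for unit in groups[g]]

# Stratified sampling: randomly select SOME units from EVERY group.
stratified_sample = [unit for g in sorted(groups)
                     for unit in rng.sample(groups[g], 5)]
```

Note that every group appears in the stratified sample, whereas the cluster sample contains whole groups but omits the rest entirely.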

When your population is large in size, geographically dispersed, or difficult to contact, it’s necessary to use a sampling method .

This allows you to gather information from a smaller part of the population, i.e. the sample, and make accurate statements by using statistical analysis. A few sampling methods include simple random sampling, convenience sampling, and snowball sampling.

The two main types of social desirability bias are:

  • Self-deceptive enhancement (self-deception): The tendency to see oneself in a favorable light without realizing it.
  • Impression management (other-deception): The tendency to inflate one’s abilities or achievements in order to make a good impression on other people.

Response bias refers to conditions or factors that take place during the process of responding to surveys, affecting the responses. One type of response bias is social desirability bias .

Demand characteristics are aspects of experiments that may give away the research objective to participants. Social desirability bias occurs when participants automatically try to respond in ways that make them seem likeable in a study, even if it means misrepresenting how they truly feel.

Participants may use demand characteristics to infer social norms or experimenter expectancies and act in socially desirable ways, so you should try to control for demand characteristics wherever possible.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
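A small plain-Python illustration of that aggregation step (the participants, conditions, and scores are invented): names are collected, so the data are not anonymous, but the report refers only to groups.

```python
# Hypothetical raw records with identifying information.
raw = [
    {"name": "P1", "condition": "treatment", "score": 7},
    {"name": "P2", "condition": "treatment", "score": 9},
    {"name": "P3", "condition": "control", "score": 4},
    {"name": "P4", "condition": "control", "score": 6},
]

by_condition = {}
for row in raw:
    by_condition.setdefault(row["condition"], []).append(row["score"])

# Aggregate information only: group means, no identifying fields.
report = {cond: sum(scores) / len(scores)
          for cond, scores in by_condition.items()}
```

The published report would contain only `report`-style figures; the `raw` records with names stay securely stored and out of the write-up.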

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • Then, the editor either rejects the manuscript and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

  • In a single-blind study, only the participants are blinded.
  • In a double-blind study, both participants and experimenters are blinded.
  • In a triple-blind study, the assignment is hidden not only from participants and experimenters, but also from the researchers analysing the data.

Blinding is important to reduce bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behaviour in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
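The lottery method can be sketched in a few lines of plain Python (the sample of 20 and the seed are invented for the example — the seed only makes the sketch repeatable):

```python
import random

# Hypothetical sample: each participant already has a unique number.
participants = [f"participant_{n}" for n in range(1, 21)]

rng = random.Random(2024)   # seeded only so the sketch is repeatable
shuffled = participants[:]
rng.shuffle(shuffled)       # the 'lottery': a random ordering

# First half -> control group, second half -> experimental group.
half = len(shuffled) // 2
control, experimental = shuffled[:half], shuffled[half:]
```

Because the split is made on a random ordering, each participant has the same chance of landing in either group, which is the point of random assignment.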

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardisation and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.
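A compact plain-Python cleaning pass over invented survey records shows these steps together — deduplication, dropping missing values, and a unit transformation (the grams-to-kilograms rule is a made-up heuristic for the example):

```python
# Hypothetical raw records, each dirtied in a different way.
raw = [
    {"id": 1, "weight_kg": 70},
    {"id": 1, "weight_kg": 70},       # duplicate entry
    {"id": 2, "weight_kg": None},     # missing value
    {"id": 3, "weight_kg": 165_000},  # grams entered by mistake
    {"id": 4, "weight_kg": 82},
]

seen, clean = set(), []
for row in raw:
    if row["id"] in seen or row["weight_kg"] is None:
        continue                      # drop duplicates and missing values
    seen.add(row["id"])
    w = row["weight_kg"]
    if w > 1000:                      # transformation: grams -> kg
        w = w / 1000
    clean.append({"id": row["id"], "weight_kg": w})
```

In practice you would log each decision (what was dropped and why) so the cleaning itself stays reproducible.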

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyse, detect, modify, or remove ‘dirty’ data to make your dataset ‘clean’. Data cleaning is also called data cleansing or data scrubbing.

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimise or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Observer bias occurs when a researcher’s expectations, opinions, or prejudices influence what they perceive or record in a study. It usually affects studies when observers are aware of the research aims or hypotheses. This type of research bias is also called detection bias or ascertainment bias .

The observer-expectancy effect occurs when researchers influence the results of their own study through interactions with participants.

Researchers’ own beliefs and expectations about the study results may unintentionally influence participants through demand characteristics .

You can use several tactics to minimise observer bias .

  • Use masking (blinding) to hide the purpose of your study from all observers.
  • Triangulate your data with different data collection methods or sources.
  • Use multiple observers and ensure inter-rater reliability.
  • Train your observers to make sure data is consistently recorded between them.
  • Standardise your observation procedures to make sure they are structured and clear.

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviours of your research subjects in real-world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as ‘people watching’ with a purpose.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.
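One way to implement per-respondent randomisation, sketched in plain Python with invented questions: each respondent gets their own shuffled copy, spreading order effects across the sample.

```python
import random

questions = ["Q1 (simple)", "Q2", "Q3", "Q4 (complex)"]  # invented items

def order_for(respondent_id):
    """Give each respondent their own random question order.
    Seeding with the respondent's ID makes each person's order
    deterministic, so it can be reconstructed later."""
    rng = random.Random(respondent_id)
    shuffled = questions[:]
    rng.shuffle(shuffled)
    return shuffled
```

Every respondent sees the same set of questions, only in a different order.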

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment
  • Random assignment of participants to ensure the groups are equivalent

Depending on your study topic, there are various other methods of controlling variables .

A true experiment (aka a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
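As a small illustration of how individual Likert-type items are combined into one scale score (the five items and the reverse-worded item below are invented):

```python
# Responses to a hypothetical 5-item Likert scale
# (1 = strongly disagree, 5 = strongly agree).
responses = {"item1": 4, "item2": 5, "item3": 2, "item4": 4, "item5": 3}

# item3 is reverse-worded, so it is reverse-coded (6 - value on a
# 5-point scale) before the items are summed.
reverse_coded = {"item3"}

score = sum(
    (6 - value) if item in reverse_coded else value
    for item, value in responses.items()
)
```

The individual `responses` values are ordinal; the summed `score` is what is sometimes treated as interval data.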

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess. It should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data are available for analysis; other times your research question may only require a cross-sectional study to answer it.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyse behaviour over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study | Cross-sectional study
Repeated observations | Observations at a single point in time
Observes the same group multiple times | Observes different groups (a ‘cross-section’) in the population
Follows changes in participants over time | Provides a snapshot of society at a given point

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions. The Pearson product-moment correlation coefficient (Pearson’s r) is commonly used to assess a linear relationship between two quantitative variables.
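Pearson’s r can be computed directly from its definition. Here is a short plain-Python sketch with made-up data (hours studied vs exam score):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two quantitative variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]       # invented data
exam_score = [52, 58, 65, 70, 78]     # invented data

r = pearson_r(hours_studied, exam_score)  # close to +1: strong linear relationship
```

An r near +1 or −1 indicates a strong linear relationship; values near 0 indicate little or no linear relationship.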

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.
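The third variable problem can be demonstrated with a quick simulation (plain Python; the variables and coefficients are invented, echoing the classic ice-cream-and-drownings illustration): a confounder drives two otherwise unrelated variables, and they correlate anyway.

```python
import math
import random

def corr(x, y):
    """Pearson correlation between two lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(3)
# Temperature (the confounder) drives BOTH variables below; neither
# causes the other, yet they correlate.
temperature = [rng.gauss(25, 5) for _ in range(500)]
ice_cream_sales = [2 * t + rng.gauss(0, 5) for t in temperature]
drowning_incidents = [0.5 * t + rng.gauss(0, 3) for t in temperature]

spurious_r = corr(ice_cream_sales, drowning_incidents)  # clearly positive
```

A naive reading of `spurious_r` would suggest ice-cream sales cause drownings; the simulation's setup shows the association comes entirely from temperature.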

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups . Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with ‘yes’ or ‘no’ (questions that start with ‘why’ or ‘how’ are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

Social desirability bias is the tendency for interview participants to give responses that will be viewed favourably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias in research can also occur in observations if the participants know they’re being observed. They might alter their behaviour accordingly.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews .

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualise your initial thoughts and hypotheses
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyse your data quickly and efficiently
  • Your research question depends on strong parity between participants, with environmental conditions held constant

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g., understanding the needs of your consumers or user testing your website).
  • You can control and standardise the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods ).

However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

If something is a mediating variable :

  • It’s caused by the independent variable
  • It influences the dependent variable
  • When it’s taken into account, the statistical correlation between the independent and dependent variables is higher than when it isn’t considered

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g., the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g., water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .
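As a toy illustration of how variable types steer the choice of test, the common pairings can be written as a lookup table. The pairings below are simplified defaults, not an exhaustive decision procedure:

```python
# Rough lookup from (explanatory type, response type) to a common default test.
# These pairings are illustrative only; real test selection also depends on
# sample size, distributional assumptions, and study design.
DEFAULT_TEST = {
    ("categorical", "quantitative"): "t test / ANOVA",
    ("quantitative", "quantitative"): "correlation / linear regression",
    ("categorical", "categorical"): "chi-square test",
}

def suggest_test(explanatory: str, response: str) -> str:
    return DEFAULT_TEST.get((explanatory, response), "no simple default")

print(suggest_test("categorical", "quantitative"))  # t test / ANOVA
```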

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

You want to find out how blood sugar levels are affected by drinking diet cola and regular cola, so you conduct an experiment .

  • The type of cola – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of cola.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control, and randomisation.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomisation , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
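Statistical control can be sketched with simulated data: a naive regression picks up a spurious effect, while a regression that also includes the confounder recovers the true (null) effect. Variable names and effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical confounder: temperature drives both ice-cream sales and sunburn.
temperature = rng.normal(size=n)
ice_cream = 0.9 * temperature + rng.normal(scale=0.5, size=n)
sunburn = 1.2 * temperature + rng.normal(scale=0.5, size=n)  # no true effect of ice cream

def slope(predictors, y):
    """Coefficient of the first predictor in a least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = slope([ice_cream], sunburn)                  # spurious positive "effect"
adjusted = slope([ice_cream, temperature], sunburn)  # near zero once controlled

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```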

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalisation .

In statistics, ordinal and nominal variables are both considered categorical variables .

Even though ordinal data can sometimes be numerical, not all mathematical operations can be performed on them.

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

‘Controlling for a variable’ means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

There are 4 main types of extraneous variables :

  • Demand characteristics : Environmental cues that encourage participants to conform to researchers’ expectations
  • Experimenter effects : Unintentional actions by researchers that influence study outcomes
  • Situational variables : Environmental variables that alter participants’ behaviours
  • Participant variables : Any characteristic or aspect of a participant’s background that could affect study results

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

The term ‘explanatory variable’ is sometimes preferred over ‘independent variable’ because, in real-world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so ‘explanatory variables’ is a more appropriate term.

On graphs, the explanatory variable is conventionally placed on the x -axis, while the response variable is placed on the y -axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called ‘independent’ because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation)

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it ‘depends’ on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalisation : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalisation: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
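A minimal sketch of the convergent and discriminant validity checks described above, using simulated scores (the scale names and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical scores: a new anxiety scale and an established one measure the
# same underlying trait; an extraversion scale measures a distinct construct.
trait = rng.normal(size=n)
new_scale = trait + rng.normal(scale=0.5, size=n)
established = trait + rng.normal(scale=0.5, size=n)
extraversion = rng.normal(size=n)

convergent = np.corrcoef(new_scale, established)[0, 1]     # should be high
discriminant = np.corrcoef(new_scale, extraversion)[0, 1]  # should be near zero

print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```

A high correlation with the established measure and a near-zero correlation with the unrelated measure together support the new scale’s construct validity.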

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity, because it covers all of the other types. You need to have face validity , content validity, and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity ; the other three are face validity , content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity: The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Attrition bias can skew your sample so that your final sample differs significantly from your original sample. Your sample is biased because some groups from your population are underrepresented.

With a biased final sample, you may not be able to generalise your findings to the original population that you sampled from, so your external validity is compromised.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalise to other groups of people) and ecological validity (whether you can generalise to other situations and settings).

The external validity of a study is the extent to which you can generalise your findings to different groups of people, situations, and measures.

Attrition bias is a threat to internal validity . In experiments, differential rates of attrition between treatment and control groups can skew results.

This bias can affect the relationship between your independent and dependent variables . It can make variables appear to be correlated when they are not, or vice versa.

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction, and attrition .

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing your population size by your target sample size.
  • Choose every k th member of the population as your sample.
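The three steps above might look like this in Python (the population list, sample size, and random start are illustrative):

```python
import random

def systematic_sample(population, sample_size, seed=None):
    """Select every k-th member after a random start, where k = N // n."""
    k = len(population) // sample_size
    start = random.Random(seed).randrange(k)  # random starting point within the first interval
    return population[start::k][:sample_size]

people = [f"person_{i}" for i in range(300)]
sample = systematic_sample(people, 20, seed=42)
print(len(sample))  # 20 members, 15 apart on the list
```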

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 × 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method .
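A rough sketch of stratified sampling, dividing a population into strata and then randomly sampling within each one (the strata and sizes are invented):

```python
import random
from collections import defaultdict

def stratified_sample(population, strata, per_stratum, seed=None):
    """Randomly sample `per_stratum` members from each subgroup."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for member, stratum in zip(population, strata):
        groups[stratum].append(member)
    return {s: rng.sample(members, per_stratum) for s, members in groups.items()}

people = [f"p{i}" for i in range(90)]
locations = [("urban", "rural", "suburban")[i % 3] for i in range(90)]
sample = stratified_sample(people, locations, per_stratum=5, seed=0)
print({s: len(members) for s, members in sample.items()})  # 5 from each stratum
```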

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

In multistage sampling , you can use probability or non-probability sampling methods.

For a probability sample, you have to use probability sampling at every stage. You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
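Single-stage cluster sampling could be sketched as follows: randomly choose whole clusters, then keep every unit inside them (the school and pupil names are invented):

```python
import random

def single_stage_cluster_sample(clusters, n_clusters, seed=None):
    """Randomly pick whole clusters and keep every unit inside them."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    return {name: clusters[name] for name in chosen}

# 10 hypothetical schools (clusters) of 30 pupils each
schools = {f"school_{i}": [f"s{i}_{j}" for j in range(30)] for i in range(10)}
sample = single_stage_cluster_sample(schools, n_clusters=3, seed=7)
print(sorted(sample))                         # three randomly chosen schools
print(sum(len(v) for v in sample.values()))  # 90 pupils in total
```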

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau randomly selects 3.5 million households per year and uses a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data are then collected from as large a percentage as possible of this random subset.
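With a complete list of the population, simple random sampling is a single call to a random-sampling routine, giving every member an equal chance of selection (the numbers below are illustrative):

```python
import random

population = [f"household_{i}" for i in range(10_000)]
sample = random.Random(0).sample(population, 500)  # each household equally likely
print(len(sample), len(set(sample)))  # 500 distinct households
```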

Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from county to city to neighbourhood) to create a sample that’s less expensive and time-consuming to collect data from.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling , and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
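Crossing the levels of two hypothetical independent variables produces the full set of conditions (the factors and levels below are invented):

```python
from itertools import product

# Hypothetical 2 x 3 factorial design: two IVs, fully crossed
caffeine = ["none", "200mg"]
sleep = ["4h", "6h", "8h"]

conditions = list(product(caffeine, sleep))
print(len(conditions))  # 2 * 3 = 6 conditions
for condition in conditions:
    print(condition)
```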

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has an equal chance of being placed in a control group or an experimental group.
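One common way to implement random assignment with equal group sizes is to shuffle the participants and deal them round-robin into groups. This is a sketch; the participant IDs and group names are invented:

```python
import random

def random_assignment(participants, groups=("control", "treatment"), seed=None):
    """Shuffle participants, then deal them round-robin into groups."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, p in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(p)
    return assignment

assigned = random_assignment([f"p{i}" for i in range(20)], seed=3)
print({g: len(members) for g, members in assigned.items()})  # 10 and 10
```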

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference between this and a true experiment is that the groups are not randomly assigned.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Triangulation can help:

  • Reduce bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labour-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analysing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

Exploratory research explores the main aspects of a new or barely researched question.

Explanatory research explains the causes and effects of an already widely researched question.

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

An observational study could be a good fit for your research if your research question is based on things you observe. If you have ethical, logistical, or practical concerns that make an experimental design challenging, consider an observational study. Remember that in an observational study, it is critical that there be no interference or manipulation of the research subjects. Since it’s not an experiment, there are no control or treatment groups either.

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analysed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analysed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualise your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analysed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
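For example, a two-sample t test on simulated blood-sugar readings, assuming SciPy is available (the group means, spread, and sizes are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical blood-sugar readings (mg/dL) after diet vs regular cola
diet = rng.normal(loc=95, scale=10, size=40)
regular = rng.normal(loc=110, scale=10, size=40)

t, p = stats.ttest_ind(diet, regular)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject the null hypothesis: the group means differ.")
```

The p value is the probability of observing a difference at least this large if the null hypothesis (no difference between colas) were true.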

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organisation to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organise your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Quasi-Experimental Design | Definition, Types & Examples

Published on July 31, 2020 by Lauren Thomas . Revised on January 22, 2024.

Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable.

However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.

Quasi-experimental design is a useful tool in situations where true experiments cannot be used for ethical or practical reasons.

Quasi-experimental design vs. experimental design

Table of contents

  • Differences between quasi-experiments and true experiments
  • Types of quasi-experimental designs
  • When to use quasi-experimental design
  • Advantages and disadvantages
  • Other interesting articles
  • Frequently asked questions about quasi-experimental designs

There are several common differences between true and quasi-experimental designs.

  • Assignment to treatment: in a true experiment, the researcher randomly assigns subjects to control and treatment groups; in a quasi-experiment, some other, non-random method is used to assign subjects to groups.
  • Control over treatment: in a true experiment, the researcher usually designs the treatment; in a quasi-experiment, the researcher often does not, but instead studies pre-existing groups that received different treatments after the fact.
  • Use of control groups: a true experiment requires the use of control groups; in a quasi-experiment, control groups are not required (although they are commonly used).

Example of a true experiment vs a quasi-experiment

Suppose, for example, that you want to test whether a new therapy improves patient outcomes at a mental health clinic compared with the standard course of treatment. However, for ethical reasons, the directors of the mental health clinic may not give you permission to randomly assign their patients to treatments. In this case, you cannot run a true experiment.

Instead, you can use a quasi-experimental design.

You can use these pre-existing groups to study the symptom progression of the patients treated with the new therapy versus those receiving the standard course of treatment.


Many types of quasi-experimental designs exist. Here we explain three of the most common types: nonequivalent groups design, regression discontinuity, and natural experiments.

Nonequivalent groups design

In nonequivalent group design, the researcher chooses existing groups that appear similar, but where only one of the groups experiences the treatment.

In a true experiment with random assignment, the control and treatment groups are considered equivalent in every way other than the treatment. But in a quasi-experiment where the groups are not random, they may differ in other ways—they are nonequivalent groups.

When using this kind of design, researchers try to account for any confounding variables by controlling for them in their analysis or by choosing groups that are as similar as possible.

This is the most common type of quasi-experimental design.
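One simple way to adjust for pre-existing differences between nonequivalent groups is to compare each group's change from its own baseline, a difference-in-differences. A minimal sketch with invented numbers:

```python
# Hypothetical pre/post scores for two pre-existing (nonequivalent) groups;
# only one received the intervention. All numbers are invented.
treated = {"pre": [60, 65, 58, 70], "post": [75, 80, 72, 85]}
control = {"pre": [62, 66, 59, 71], "post": [68, 72, 64, 77]}

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: subtracting each group's own baseline helps
# adjust for pre-existing differences between the nonequivalent groups.
did = (mean(treated["post"]) - mean(treated["pre"])) - (
       mean(control["post"]) - mean(control["pre"]))
print(f"estimated treatment effect = {did:.1f} points")
```

This only controls for stable baseline differences; confounders that change over time differently in each group still threaten internal validity, which is why researchers also match groups or adjust for covariates in the analysis.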

Regression discontinuity

Many potential treatments that researchers wish to study are designed around an essentially arbitrary cutoff, where those above the threshold receive the treatment and those below it do not.

Near this threshold, the differences between the two groups are often so minimal as to be nearly nonexistent. Therefore, researchers can use individuals just below the threshold as a control group and those just above as a treatment group.

For example, suppose entry to a selective school is determined by a cutoff score on an entrance exam. Since the exact cutoff score is arbitrary, the students near the threshold (those who just barely pass the exam and those who fail by a very small margin) tend to be very similar, with the small differences in their scores mostly due to random chance. You can therefore conclude that any outcome differences between these two groups must come from the school they attended.
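A bare-bones version of this comparison, with an invented cutoff and invented scores, restricts attention to a narrow bandwidth around the threshold and compares group means:

```python
# Hypothetical (score, outcome) pairs around an arbitrary cutoff of 70.
# All numbers are invented for illustration.
CUTOFF, BANDWIDTH = 70, 5

students = [(62, 51), (66, 58), (68, 60), (69, 61),   # below the cutoff
            (71, 69), (72, 70), (74, 72), (79, 80)]   # above the cutoff

# Keep only students within the bandwidth, where the two groups
# are most comparable and differences are mostly random chance.
near = [(s, y) for s, y in students if abs(s - CUTOFF) <= BANDWIDTH]
above = [y for s, y in near if s >= CUTOFF]
below = [y for s, y in near if s < CUTOFF]

effect = sum(above) / len(above) - sum(below) / len(below)
print(f"estimated discontinuity at the cutoff = {effect:.2f}")
```

Real regression discontinuity analyses fit regression lines on either side of the cutoff rather than comparing raw means, but the logic is the same: the jump at the threshold estimates the treatment effect.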

Natural experiments

In both laboratory and field experiments, researchers normally control which group the subjects are assigned to. In a natural experiment, an external event or situation (“nature”) results in the random or random-like assignment of subjects to the treatment group.

Even though some natural experiments involve random or as-if random assignment, they are not considered true experiments because they are observational in nature.

Although the researchers have no control over the independent variable, they can exploit this event after the fact to study the effect of the treatment.

In the Oregon Health Study, for example, the state government could not afford to cover everyone it deemed eligible for the program, so it instead allocated spots in the program based on a random lottery.

Although true experiments have higher internal validity, you might choose to use a quasi-experimental design for ethical or practical reasons.

Sometimes it would be unethical to provide or withhold a treatment on a random basis, so a true experiment is not feasible. In this case, a quasi-experiment can allow you to study the same causal relationship without the ethical issues.

The Oregon Health Study is a good example. It would be unethical to randomly provide some people with health insurance but purposely prevent others from receiving it solely for the purposes of research.

However, since the Oregon government faced financial constraints and decided to provide health insurance via lottery, studying this event after the fact is a much more ethical approach to studying the same problem.

True experimental design may be infeasible to implement or simply too expensive, particularly for researchers without access to large funding streams.

At other times, too much work is involved in recruiting and properly designing an experimental intervention for an adequate number of subjects to justify a true experiment.

In either case, quasi-experimental designs allow you to study the question by taking advantage of data that has previously been paid for or collected by others (often the government).

Quasi-experimental designs have various pros and cons compared to other types of studies.

Advantages:

  • Higher external validity than most true experiments, because they often involve real-world interventions instead of artificial laboratory settings.
  • Higher internal validity than other non-experimental types of research, because they allow you to better control for confounding variables than other types of studies do.

Disadvantages:

  • Lower internal validity than true experiments—without randomization, it can be difficult to verify that all confounding variables have been accounted for.
  • The use of retrospective data that has already been collected for other purposes can be inaccurate, incomplete or difficult to access.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.
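A minimal sketch of random assignment (participant IDs below are invented): shuffle the sample, then split it, so every member has an equal chance of landing in either group:

```python
import random

# Hypothetical participant list; shuffling then splitting in half gives
# every participant an equal chance of ending up in either group.
participants = [f"P{i:02d}" for i in range(1, 21)]

rng = random.Random(42)   # fixed seed so the split is reproducible
rng.shuffle(participants)

half = len(participants) // 2
control, treatment = participants[:half], participants[half:]
print("control:  ", control)
print("treatment:", treatment)
```

In a quasi-experiment, this shuffle step is exactly what is missing: group membership comes from some pre-existing, non-random criterion instead.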

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment.

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity, as they can use real-world interventions instead of artificial laboratory settings.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Thomas, L. (2024, January 22). Quasi-Experimental Design | Definition, Types & Examples. Scribbr. Retrieved August 14, 2024, from https://www.scribbr.com/methodology/quasi-experimental-design/

Lauren Thomas

Quantitative and Qualitative Research

What is Quantitative Research?


Quantitative methodology is the dominant research framework in the social sciences. It refers to a set of strategies, techniques and assumptions used to study psychological, social and economic processes through the exploration of numeric patterns. Quantitative research gathers a range of numeric data. Some of the numeric data is intrinsically quantitative (e.g. personal income), while in other cases the numeric structure is imposed (e.g. ‘On a scale from 1 to 10, how depressed did you feel last week?’). The collection of quantitative information allows researchers to conduct simple to extremely sophisticated statistical analyses that aggregate the data (e.g. averages, percentages), show relationships among the data (e.g. ‘Students with lower grade point averages tend to score lower on a depression scale’) or compare across aggregated data (e.g. the USA has a higher gross domestic product than Spain). Quantitative research includes methodologies such as questionnaires, structured observations or experiments and stands in contrast to qualitative research. Qualitative research involves the collection and analysis of narratives and/or open-ended observations through methodologies such as interviews, focus groups or ethnographies.

Coghlan, D., & Brydon-Miller, M. (2014). The SAGE encyclopedia of action research (Vols. 1–2). London: SAGE Publications Ltd. doi:10.4135/9781446294406
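The aggregation and relationship analyses mentioned above (averages, and the GPA–depression association) can be sketched with invented data and a hand-rolled Pearson correlation:

```python
# Hypothetical paired data (invented for illustration): grade point average
# and score on a 0-30 depression scale for ten students.
gpa        = [3.9, 3.5, 3.2, 3.0, 2.8, 2.7, 2.5, 2.3, 2.1, 1.9]
depression = [5,   8,   9,   12,  11,  15,  14,  18,  20,  22]

def mean(xs):
    return sum(xs) / len(xs)

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by both standard deviations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(gpa, depression)
print(f"mean GPA = {mean(gpa):.2f}, r = {r:.2f}")  # strong negative association
```

With these invented numbers, higher depression scores accompany lower GPAs, so r comes out strongly negative, mirroring the example relationship quoted in the paragraph above.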

What is the purpose of quantitative research?

The purpose of quantitative research is to generate knowledge and create understanding about the social world. Quantitative research is used by social scientists, including communication researchers, to observe phenomena or occurrences affecting individuals. Social scientists are concerned with the study of people. Quantitative research is a way to learn about a particular group of people, known as a sample population. Using scientific inquiry, quantitative research relies on data that are observed or measured to examine questions about the sample population.

Allen, M. (2017). The SAGE encyclopedia of communication research methods (Vols. 1–4). Thousand Oaks, CA: SAGE Publications, Inc. doi:10.4135/9781483381411

How do I know if the study is a quantitative design?  What type of quantitative study is it?

Quantitative Research Designs: Descriptive non-experimental, Quasi-experimental or Experimental?

Studies do not always explicitly state what kind of research design is being used, so you will need to learn how to decipher which design type was used.

  • Last Updated: Aug 14, 2024 3:26 PM
  • URL: https://libguides.uta.edu/quantitative_and_qualitative_research


BMJ Glob Health, v.4(Suppl 1); 2019


Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods

1 School of Social Sciences, Bangor University, Wales, UK

Andrew Booth

2 School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Graham Moore

3 School of Social Sciences, Cardiff University, Wales, UK

Kate Flemming

4 Department of Health Sciences, The University of York, York, UK

Özge Tunçalp

5 Department of Reproductive Health and Research including UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland

Elham Shakibazadeh

6 Department of Health Education and Promotion, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran

Associated Data

bmjgh-2018-000893supp001.pdf

bmjgh-2018-000893supp002.pdf

bmjgh-2018-000893supp003.pdf

bmjgh-2018-000893supp005.pdf

bmjgh-2018-000893supp004.pdf

Guideline developers are increasingly dealing with more difficult decisions concerning whether to recommend complex interventions in complex and highly variable health systems. There is greater recognition that both quantitative and qualitative evidence can be combined in a mixed-method synthesis and that this can be helpful in understanding how complexity impacts on interventions in specific contexts. This paper aims to clarify the different purposes, review designs, questions, synthesis methods and opportunities to combine quantitative and qualitative evidence to explore the complexity of complex interventions and health systems. Three case studies of guidelines developed by WHO, which incorporated quantitative and qualitative evidence, are used to illustrate possible uses of mixed-method reviews and evidence. Additional examples of methods that can be used or may have potential for use in a guideline process are outlined. Consideration is given to the opportunities for potential integration of quantitative and qualitative evidence at different stages of the review and guideline process. Encouragement is given to guideline commissioners and developers and review authors to consider including quantitative and qualitative evidence. Recommendations are made concerning the future development of methods to better address questions in systematic reviews and guidelines that adopt a complexity perspective.

Summary box

  • When combined in a mixed-method synthesis, quantitative and qualitative evidence can potentially contribute to understanding how complex interventions work and for whom, and how the complex health systems into which they are implemented respond and adapt.
  • The different purposes and designs for combining quantitative and qualitative evidence in a mixed-method synthesis for a guideline process are described.
  • Questions relevant to gaining an understanding of the complexity of complex interventions and the wider health systems within which they are implemented that can be addressed by mixed-method syntheses are presented.
  • The practical methodological guidance in this paper is intended to help guideline producers and review authors commission and conduct mixed-method syntheses where appropriate.
  • If more mixed-method syntheses are conducted, guideline developers will have greater opportunities to access this evidence to inform decision-making.

Introduction

Recognition has grown that while quantitative methods remain vital, they are usually insufficient to address complex health-system-related research questions. 1 Quantitative methods rely on an ability to anticipate what must be measured in advance. Introducing change into a complex health system gives rise to emergent reactions, which cannot be fully predicted in advance. Emergent reactions can often only be understood by combining quantitative methods with a more flexible qualitative lens. 2 Adopting a more pluralist position opens up a diverse range of research options to the researcher, depending on the research question being investigated. 3–5 As a consequence, where a research study sits within the multitude of methods available is driven by the question being asked, rather than by any particular methodological or philosophical stance. 6

Publication of guidance on designing complex intervention process evaluations and other works advocating mixed-methods approaches to intervention research have stimulated better quality evidence for synthesis. 1 7–13 Methods for synthesising qualitative 14 and mixed-method evidence have been developed or are in development. Mixed-method research and review definitions are outlined in box 1.

Defining mixed-method research and reviews

Pluye and Hong 52 define mixed-methods research as “a research approach in which a researcher integrates (a) qualitative and quantitative research questions, (b) qualitative research methods* and quantitative research designs, (c) techniques for collecting and analyzing qualitative and quantitative evidence, and (d) qualitative findings and quantitative results”. A mixed-method synthesis can integrate quantitative, qualitative and mixed-method evidence or data from primary studies.† Mixed-method primary studies are usually disaggregated into quantitative and qualitative evidence and data for the purposes of synthesis. Thomas and Harden further define three ways in which reviews are mixed. 53

  • The types of studies included and hence the type of findings to be synthesised (ie, qualitative/textual and quantitative/numerical).
  • The types of synthesis method used (eg, statistical meta-analysis and qualitative synthesis).
  • The mode of analysis: theory testing AND theory building.

*A qualitative study is one that uses qualitative methods of data collection and analysis to produce a narrative understanding of the phenomena of interest. Qualitative methods of data collection may include, for example, interviews, focus groups, observations and analysis of documents.

†The Cochrane Qualitative and Implementation Methods group coined the term ‘qualitative evidence synthesis’ to mean that the synthesis could also include qualitative data. For example, qualitative data from case studies, grey literature reports and open-ended questions from surveys. ‘Evidence’ and ‘data’ are used interchangeably in this paper.

This paper is one of a series that aims to explore the implications of complexity for systematic reviews and guideline development, commissioned by WHO. This paper is concerned with the methodological implications of including quantitative and qualitative evidence in mixed-method systematic reviews and guideline development for complex interventions. The guidance was developed through a process of bringing together experts in the field, literature searching and consensus building with end users (guideline developers, clinicians and reviewers). We clarify the different purposes, review designs, questions and synthesis methods that may be applicable to combine quantitative and qualitative evidence to explore the complexity of complex interventions and health systems. Three case studies of WHO guidelines that incorporated quantitative and qualitative evidence are used to illustrate possible uses of mixed-method reviews and mechanisms of integration (table 1, online supplementary files 1–3). Additional examples of methods that can be used or may have potential for use in a guideline process are outlined. Opportunities for potential integration of quantitative and qualitative evidence at different stages of the review and guideline process are presented. Specific considerations when using an evidence to decision framework such as the Developing and Evaluating Communication strategies to support Informed Decisions and practice based on Evidence (DECIDE) framework 15 or the new WHO-INTEGRATE evidence to decision framework 16 at the review design and evidence to decision stage are outlined. See online supplementary file 4 for an example of a health systems DECIDE framework and Rehfuess et al 16 for the new WHO-INTEGRATE framework. Encouragement is given to guideline commissioners and developers and review authors to consider including quantitative and qualitative evidence in guidelines of complex interventions that take a complexity perspective and health systems focus.

Designs and methods and their use or applicability in guidelines and systematic reviews taking a complexity perspective

Case study examples and references | Complexity-related questions of interest in the guideline | Types of synthesis used in the guideline | Mixed-method review design and integration mechanisms | Observations, concerns and considerations
A. Mixed-method review designs used in WHO guideline development
Antenatal Care (ANC) guidelines ( )
What do women in high-income, medium-income and low-income countries want and expect from antenatal care (ANC), based on their own accounts of their beliefs, views, expectations and experiences of pregnancy?
Qualitative synthesis
Framework synthesis
Meta-ethnography

Quantitative and qualitative reviews undertaken separately (segregated), an initial scoping review of qualitative evidence established women’s preferences and outcomes for ANC, which informed design of the quantitative intervention review (contingent)
A second qualitative evidence synthesis was undertaken to look at implementation factors (sequential)
Integration: quantitative and qualitative findings were brought together in a series of DECIDE frameworks Tools included:
Psychological theory
SURE framework conceptual framework for implementing policy options
Conceptual framework for analysing integration of targeted health interventions into health systems to analyse contextual health system factors
An innovative approach to guideline development
No formal cross-study synthesis process and limited testing of theory. The hypothetical nature of meta-ethnography findings may be challenging for guideline panel members to process without additional training
See Flemming for considerations when selecting meta-ethnography
What are the evidence-based practices during ANC that improved outcomes and led to a positive pregnancy experience, and how should these practices be delivered?
Quantitative review of trials
Factors that influence the uptake of routine antenatal services by pregnant women
Views and experiences of maternity care providers
Qualitative synthesis
Framework synthesis
Meta-ethnography
Task shifting guidelines ( )
What are the effects of lay health worker interventions in primary and community healthcare on maternal and child health and the management of infectious diseases?
Quantitative review of trials
Several published quantitative reviews were used (eg, Cochrane review of lay health worker interventions)
Additional new qualitative evidence syntheses were commissioned (segregated)

Integration: quantitative and qualitative review findings on lay health workers were brought together in several DECIDE frameworks. Tools included adapted SURE Framework and post hoc logic model
An innovative approach to guideline development
The post hoc logic model was developed after the guideline was completed
What factors affect the implementation of lay health worker programmes for maternal and child health?
Qualitative evidence synthesis
Framework synthesis
Risk communication guideline ( )
Quantitative review of quantitative evidence (descriptive)
Qualitative using framework synthesis

A knowledge map of studies was produced to identify the method, topic and geographical spread of evidence. Reviews first organised and synthesised evidence by method-specific streams and reported method-specific findings. Then similar findings across method-specific streams were grouped and further developed using all the relevant evidence
Integration: where possible, quantitative and qualitative evidence for the same intervention and question was mapped against core DECIDE domains. Tools included framework using public health emergency model and disaster phases
Very few trials were identified. Quantitative and qualitative evidence was used to construct a high level view of what appeared to work and what happened when similar broad groups of interventions or strategies were implemented in different contexts
Example of a fully integrated mixed-method synthesis.
Without evidence of effect, it was highly challenging to populate a DECIDE framework
B. Mixed-method review designs that can be used in guideline development
Factors influencing children’s optimal fruit and vegetable consumption
Potential to explore theoretical, intervention and implementation complexity issues
New question(s) of interest are developed and tested in a cross-study synthesis
Mixed-methods synthesis
Each review typically has three syntheses:
Statistical meta-analysis
Qualitative thematic synthesis
Cross-study synthesis

Aim is to generate and test theory from diverse body of literature
Integration: used integrative matrix based on programme theory
Can be used in a guideline process as it fits with the current model of conducting method specific reviews separately then bringing the review products together
C. Mixed-method review designs with the potential for use in guideline development
Interventions to promote smoke alarm ownership and function
Intervention effect and/or intervention implementation related questions within a system
Narrative synthesis (specifically Popay’s methodology)
Four stage approach to integrate quantitative (trials) with qualitative evidence
Integration: initial theory and logic model used to integrate evidence of effect with qualitative case summaries. Tools used included tabulation, groupings and clusters, transforming data: constructing a common rubric, vote-counting as a descriptive tool, moderator variables and subgroup analyses, idea webbing/conceptual mapping, creating qualitative case descriptions, visual representation of relationship between study characteristics and results
Few published examples with the exception of Rodgers, who reinterpreted a Cochrane review on the same topic with narrative synthesis methodology.
Methodology is complex. Most subsequent examples have only partially operationalised the methodology
An intervention effect review will still be required to feed into the guideline process
Factors affecting childhood immunisation
What factors explain complexity and causal pathways?
Bayesian synthesis of qualitative and quantitative evidence
Aim is theory-testing by fusing findings from qualitative and quantitative research
Produces a set of weighted factors associated with/predicting the phenomenon under review
Not yet used in a guideline context.
Complex methodology.
Undergoing development and testing for a health context. The end product may not easily ‘fit’ into an evidence to decision framework and an effect review will still be required
Providing effective and preferred care closer to home: a realist review of intermediate care
Developing and testing theories of change underpinning complex policy interventions
What works for whom in what contexts and how?
Realist synthesis
NB. Other theory-informed synthesis methods follow similar processes

Development of a theory from the literature, analysis of quantitative and qualitative evidence against the theory leads to development of context, mechanism and outcome chains that explain how outcomes come about
Integration: programme theory and assembling mixed-method evidence to create Context, Mechanism and Outcome (CMO) configurations
May be useful where there are few trials. The hypothetical nature of findings may be challenging for guideline panel members to process without additional training. The end product may not easily ‘fit’ into an evidence to decision framework and an effect review will still be required
Use of morphine to treat cancer-related pain
Any aspect of complexity could potentially be explored
How does the context of morphine use affect the established effectiveness of morphine?
Critical interpretive synthesis
Aims to generate theory from large and diverse body of literature
Segregated sequential design
Integration: integrative grid
There are few examples and the methodology is complex.
The hypothetical nature of findings may be challenging for guideline panel members to process without additional training.
The end product would need to be designed to feed into an evidence to decision framework and an intervention effect review will still be required
Food sovereignty, food security and health equity
Examples have examined health system complexity
To understand the state of knowledge on relationships between health equity—ie, health inequalities that are socially produced—and food systems, where the concepts of 'food security' and 'food sovereignty' are prominent
Focused on eight pathways to health (in)equity through the food system: (1) Multi-Scalar Environmental, Social Context; (2) Occupational Exposures; (3) Environmental Change; (4) Traditional Livelihoods, Cultural Continuity; (5) Intake of Contaminants; (6) Nutrition; (7) Social Determinants of Health; (8) Political, Economic and Regulatory context
Meta-narrative
Aim is to review research on diffusion of innovation to inform healthcare policy
Which research (or epistemic) traditions have considered this broad topic area?; How has each tradition conceptualised the topic (for example, including assumptions about the nature of reality, preferred study designs and ways of knowing)?; What theoretical approaches and methods did they use?; What are the main empirical findings?; and What insights can be drawn by combining and comparing findings from different traditions?
Integration: analysis leads to production of a set of meta-narratives (‘storylines of research’)
Not yet used in a guideline context. The originators are calling for meta-narrative reviews to be used in a guideline process.
Potential to provide a contextual overview within which to interpret other types of reviews in a guideline process. The meta-narrative review findings may require tailoring to ‘fit’ into an evidence to decision framework and an intervention effect review will still be required
Few published examples and the methodology is complex

Supplementary data

Taking a complexity perspective.

The first paper in this series 17 outlines aspects of complexity associated with complex interventions and health systems that can potentially be explored by different types of evidence, including synthesis of quantitative and qualitative evidence. Petticrew et al 17 distinguish between a complex interventions perspective and a complex systems perspective. A complex interventions perspective defines interventions as having “implicit conceptual boundaries, representing a flexible, but common set of practices, often linked by an explicit or implicit theory about how they work”. A complex systems perspective differs in that “complexity arises from the relationships and interactions between a system’s agents (eg, people, or groups that interact with each other and their environment), and its context. A system perspective conceives the intervention as being part of the system, and emphasises changes and interconnections within the system itself”. Aspects of complexity associated with implementation of complex interventions in health systems that could potentially be addressed with a synthesis of quantitative and qualitative evidence are summarised in table 2 . Another paper in the series outlines criteria used in a new evidence to decision framework for making decisions about complex interventions implemented in complex systems, against which the need for quantitative and qualitative evidence can be mapped. 16 A further paper 18 that explores how context is dealt with in guidelines and reviews taking a complexity perspective also recommends using both quantitative and qualitative evidence to better understand context as a source of complexity. Mixed-method syntheses of quantitative and qualitative evidence can also help with understanding of whether there has been theory failure and/or implementation failure.
The Cochrane Qualitative and Implementation Methods Group provide additional guidance on exploring implementation and theory failure that can be adapted to address aspects of complexity of complex interventions when implemented in health systems. 19

Health-system complexity-related questions that a synthesis of quantitative and qualitative evidence could address (derived from Petticrew et al 17 )

Aspect of complexity of interest
Examples of potential research question(s) that a synthesis of qualitative and quantitative evidence could address
Types of studies or data that could contribute to a review of qualitative and quantitative evidence
What ‘is’ the system? How can it be described?
What are the main influences on the health problem? How are they created and maintained? How do these influences interconnect? Where might one intervene in the system?
Quantitative: previous systematic reviews of the causes of the problem; epidemiological studies (eg, cohort studies examining risk factors of obesity); network analysis studies showing the nature of social and other systems
Qualitative data: theoretical papers; policy documents
Interactions of interventions with context and adaptation Qualitative: (1) eg, qualitative studies; case studies
Quantitative: (2) trials or other effectiveness studies from different contexts; multicentre trials, with stratified reporting of findings; other quantitative studies that provide evidence of moderating effects of context
System adaptivity (how does the system change?)
(How) does the system change when the intervention is introduced? Which aspects of the system are affected? Does this potentiate or dampen its effects?
Quantitative: longitudinal data; possibly historical data; effectiveness studies providing evidence of differential effects across different contexts; system modelling (eg, agent-based modelling)
Qualitative: qualitative studies; case studies
Emergent properties
What are the effects (anticipated and unanticipated) which follow from this system change?
Quantitative: prospective quantitative evaluations; retrospective studies (eg, case–control studies, surveys) may also help identify less common effects; dose–response evaluations of impacts at aggregate level in individual studies or across studies included with systematic reviews (see suggested examples)
Qualitative: qualitative studies
Positive (reinforcing) and negative (balancing) feedback loops
What explains change in the effectiveness of the intervention over time?
Are the effects of an intervention damped/suppressed by other aspects of the system (eg, contextual influences)?
Quantitative: studies of moderators of effectiveness; long-term longitudinal studies
Qualitative: studies of factors that enable or inhibit implementation of interventions
Multiple (health and non-health) outcomes
What changes in processes and outcomes follow the introduction of this system change? At what levels in the system are they experienced?
Quantitative: studies tracking change in the system over time
Qualitative: studies exploring effects of the change in individuals, families, communities (including equity considerations and factors that affect engagement and participation in change)

It may not be apparent which aspects of complexity or which elements of the complex intervention or health system can be explored in a guideline process, or whether combining qualitative and quantitative evidence in a mixed-method synthesis will be useful, until the available evidence is scoped and mapped. 17 20 A more extensive lead-in phase is typically required to scope the available evidence, engage with stakeholders and to refine the review parameters and questions that can then be mapped against potential review designs and methods of synthesis. 20 At the scoping stage, it is also common to decide on a theoretical perspective 21 or undertake further work to refine a theoretical perspective. 22 This is also the stage to begin articulating the programme theory of the complex intervention that may be further developed to refine an understanding of complexity and show how the intervention is implemented in and impacts on the wider health system. 17 23 24 In practice, this process can be lengthy, iterative and fluid with multiple revisions to the review scope, often developing and adapting a logic model 17 as the available evidence becomes known and the potential to incorporate different types of review designs and syntheses of quantitative and qualitative evidence becomes better understood. 25 Further questions, propositions or hypotheses may emerge as the reviews progress and therefore the protocols generally need to be developed iteratively over time rather than a priori.

Following a scoping exercise and definition of key questions, the next step in the guideline development process is to identify existing or commission new systematic reviews to locate and summarise the best available evidence in relation to each question. For example, case study 2, ‘Optimising health worker roles for maternal and newborn health through task shifting’, included quantitative reviews that did and did not take an additional complexity perspective, and qualitative evidence syntheses that were able to explain how specific elements of complexity impacted on intervention outcomes within the wider health system. Further understanding of health system complexity was facilitated through the conduct of additional country-level case studies that contributed to an overall understanding of what worked and what happened when lay health worker interventions were implemented. See table 1 and online supplementary file 2 .

There are a few existing examples, which we draw on in this paper, but integrating quantitative and qualitative evidence in a mixed-method synthesis is relatively uncommon in a guideline process. Box 2 includes a set of key questions that guideline developers and review authors contemplating combining quantitative and qualitative evidence in mixed-methods design might ask. Subsequent sections provide more information and signposting to further reading to help address these key questions.

Key questions that guideline developers and review authors contemplating combining quantitative and qualitative evidence in a mixed-methods design might ask

Compound questions requiring both quantitative and qualitative evidence?

Questions requiring mixed-methods studies?

Separate quantitative and qualitative questions?

Separate quantitative and qualitative research studies?

Related quantitative and qualitative research studies?

Mixed-methods studies?

Quantitative unpublished data and/or qualitative unpublished data, eg, narrative survey data?

Throughout the review?

Following separate reviews?

At the question point?

At the synthesis point?

At the evidence to recommendations stage?

Or a combination?

Narrative synthesis or summary?

Quantitising approach, eg, frequency analysis?

Qualitising approach, eg, thematic synthesis?

Tabulation?

Logic model?

Conceptual model/framework?

Graphical approach?

  • WHICH: Which mixed-method designs, methodologies and methods best fit into a guideline process to inform recommendations?

Complexity-related questions that a synthesis of quantitative and qualitative evidence can potentially address

Petticrew et al 17 define the different aspects of complexity and examples of complexity-related questions that can potentially be explored in guidelines and systematic reviews taking a complexity perspective. Relevant aspects of complexity outlined by Petticrew et al 17 are summarised in table 2 below, together with the corresponding questions that could be addressed in a synthesis combining qualitative and quantitative evidence. Importantly, however, the aspects of complexity and their associated concepts of interest have yet to be fully translated into primary health research or systematic reviews. There are few known examples where selected complexity concepts have been used to analyse or reanalyse a primary intervention study. Most notable is Chandler et al 26, who specifically set out to identify and translate a set of relevant complexity theory concepts for application in health systems research. Chandler then reanalysed a trial process evaluation using selected complexity theory concepts to better understand the complex causal pathway in the health system that explains some aspects of complexity in table 2 .

Rehfeuss et al 16 also recommend upfront consideration of the WHO-INTEGRATE evidence to decision criteria when planning a guideline and formulating questions. The criteria reflect WHO norms and values and take account of a complexity perspective. The framework can be used by guideline development groups as a menu to decide which criteria to prioritise, and which study types and synthesis methods can be used to collect evidence for each criterion. Many of the criteria and their related questions can be addressed using a synthesis of quantitative and qualitative evidence: the balance of benefits and harms, human rights and sociocultural acceptability, health equity, societal implications and feasibility (see table 3 ). Similar aspects in the DECIDE framework 15 could also be addressed using synthesis of qualitative and quantitative evidence.

WHO-INTEGRATE evidence to decision framework criteria, example questions and types of studies to potentially address these questions (derived from Rehfeuss et al 16 )

Domains of the WHO-INTEGRATE EtD framework
Examples of potential research question(s) that a synthesis of qualitative and/or quantitative evidence could address
Types of studies that could contribute to a review of qualitative and quantitative evidence
Balance of benefits and harms
To what extent do patients/beneficiaries value different health outcomes?
Qualitative: studies of views and experiences
Quantitative: Questionnaire surveys
Human rights and sociocultural acceptability
Is the intervention acceptable to patients/beneficiaries as well as to those implementing it?
To what extent do patients/beneficiaries value different non-health outcomes?
How does the intervention affect an individual’s, population group’s or organisation’s autonomy, that is, their ability to make a competent, informed and voluntary decision?
Qualitative: discourse analysis, qualitative studies (ideally longitudinal to examine changes over time)
Quantitative: pro et contra analysis, discrete choice experiments, longitudinal quantitative studies (to examine changes over time), cross-sectional studies
Mixed-method studies; case studies
Health equity, equality and non-discrimination
How affordable is the intervention for individuals, households or communities?
How accessible—in terms of physical as well as informational access—is the intervention across different population groups?
Qualitative: studies of views and experiences
Quantitative: cross-sectional or longitudinal observational studies, discrete choice experiments, health expenditure studies; health system barrier studies, cross-sectional or longitudinal observational studies, discrete choice experiments, ethical analysis, GIS-based studies
Societal implications
What is the social impact of the intervention: are there features of the intervention that increase or reduce stigma and that lead to social consequences? Does the intervention enhance or limit social goals, such as education, social cohesion and the attainment of various human rights beyond health? Does it change social norms at individual or population level?
What is the environmental impact of the intervention? Does it contribute to or limit the achievement of goals to protect the environment and efforts to mitigate or adapt to climate change?
Qualitative: studies of views and experiences
Quantitative: RCTs, quasi-experimental studies, comparative observational studies, longitudinal implementation studies, case studies, power analyses, environmental impact assessments, modelling studies
Feasibility and health system considerations
Are there any that impact on implementation of the intervention?
How might , such as past decisions and strategic considerations, positively or negatively impact the implementation of the intervention?
How does the intervention interact with the existing health system? Is it likely to fit well or not, is it likely to impact on it in positive or negative ways?
How does the intervention interact with the need for and usage of the existing , at national and subnational levels?
How does the intervention interact with the need for and usage of the as well as other relevant infrastructure, at national and subnational levels?
Non-research: policy and regulatory frameworks
Qualitative: studies of views and experiences
Mixed-method: health systems research, situation analysis, case studies
Quantitative: cross-sectional studies

GIS, Geographical Information System; RCT, randomised controlled trial.

Questions as anchors or compasses

Questions can serve as an ‘anchor’ by articulating the specific aspects of complexity to be explored (eg, Is successful implementation of the intervention context dependent?). 27 Anchor questions such as “How does intervention x impact on socioeconomic inequalities in health behaviour/outcome x?” are the kind of health system question that requires a synthesis of both quantitative and qualitative evidence and hence a mixed-method synthesis. Quantitative evidence can quantify the difference in effect, but does not answer the question of how. The ‘how’ question can be partly answered with quantitative and qualitative evidence. For example, quantitative evidence may reveal where socioeconomic status and inequality emerge in the health system (an emergent property) by exploring questions such as “Does patterning emerge during uptake because fewer people from certain groups come into contact with an intervention in the first place?” or “Are people from certain backgrounds more likely to drop out, or to maintain effects beyond an intervention differently?” Qualitative evidence may help understand the reasons behind all of these mechanisms. Alternatively, questions can act as ‘compasses’ where a question sets out a starting point from which to explore further and to potentially ask further questions or develop propositions or hypotheses to explore through a complexity perspective (eg, What factors enhance or hinder implementation?). 27 Other papers in this series provide further guidance on developing questions for qualitative evidence syntheses and guidance on question formulation. 14 28

For anchor and compass questions, additional application of a theory (eg, complexity theory) can help focus evidence synthesis and presentation to explore and explain complexity issues. 17 21 Development of a review specific logic model(s) can help to further refine an initial understanding of any complexity-related issues of interest associated with a specific intervention, and if appropriate the health system or section of the health system within which to contextualise the review question and analyse data. 17 23–25 Specific tools are available to help clarify context and complex interventions. 17 18

Even if a complexity perspective, and certain criteria within evidence to decision frameworks, are deemed relevant and desirable by guideline developers, a complexity perspective can only be pursued if the evidence is available. Careful scoping using knowledge maps or scoping reviews will help inform development of questions that are answerable with available evidence. 20 If evidence of effect is not available, then a different approach to developing questions will be required, leading to a more general narrative understanding of what happened when complex interventions were implemented in a health system (such as in case study 3—risk communication guideline). This does not mean that the original questions, for which no evidence was found when scoping the literature, were unimportant. An important function of creating a knowledge map is also to identify gaps to inform a future research agenda.

Table 2 and online supplementary files 1–3 outline examples of questions in the three case studies, all of which were ‘compass’ questions for the qualitative evidence syntheses.

Types of integration and synthesis designs in mixed-method reviews

The shift towards integration of qualitative and quantitative evidence in primary research has, in recent years, begun to be mirrored within research synthesis. 29–31 The natural extension to undertaking quantitative or qualitative reviews has been the development of methods for integrating qualitative and quantitative evidence within reviews, and within the guideline process using evidence to decision frameworks. Advocating the integration of quantitative and qualitative evidence assumes a complementarity between research methodologies, and a need for both types of evidence to inform policy and practice. Below, we briefly outline the current designs for integrating qualitative and quantitative evidence within a mixed-method review or synthesis.

One of the early approaches to integrating qualitative and quantitative evidence detailed by Sandelowski et al 32 advocated three basic review designs: segregated, integrated and contingent designs, which have been further developed by Heyvaert et al 33 ( box 3 ).

Segregated, integrated and contingent designs 32 33

Segregated design.

A conventional separation of quantitative and qualitative approaches, based on the assumption that they are different entities, can be distinguished from each other and should be treated separately; their findings therefore warrant separate analyses and syntheses. Ultimately, the separate synthesis results can themselves be synthesised.

Integrated design

The methodological differences between qualitative and quantitative studies are minimised as both are viewed as producing findings that can be readily synthesised into one another because they address the same research purposes and questions. Transformation involves either turning qualitative data into quantitative form (quantitising) or quantitative findings into qualitative form (qualitising) to facilitate their integration.

Contingent design

Takes a cyclical approach to synthesis, with the findings from one synthesis informing the focus of the next synthesis, until all the research objectives have been addressed. Studies are not necessarily grouped and categorised as qualitative or quantitative.
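As a loose illustration only (the study names and theme labels below are invented for the example), the ‘quantitising’ transformation described under the integrated design, eg, a simple frequency analysis of themes coded across qualitative studies, might be sketched as:

```python
from collections import Counter

# Hypothetical input: each qualitative study in a synthesis has been coded
# against a shared set of themes. "Quantitising" converts these qualitative
# codes into frequencies that can sit alongside quantitative findings;
# "qualitising" would work in the opposite direction.
study_themes = {
    "Study A": ["access", "trust", "cost"],
    "Study B": ["trust", "stigma"],
    "Study C": ["access", "trust"],
}

def quantitise(themes_by_study):
    """Count how many studies report each theme (a simple frequency analysis)."""
    counts = Counter()
    for themes in themes_by_study.values():
        counts.update(set(themes))  # count each theme at most once per study
    return dict(counts)

frequencies = quantitise(study_themes)
print(frequencies)  # eg, {'access': 2, 'trust': 3, 'cost': 1, 'stigma': 1}
```

This is only one of many possible transformations; in practice the choice of quantitising or qualitising approach depends on the review questions and the synthesis method selected.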

A recent review of more than 400 systematic reviews 34 combining quantitative and qualitative evidence identified two main synthesis designs—convergent and sequential. In a convergent design, qualitative and quantitative evidence is collated and analysed in a parallel or complementary manner, whereas in a sequential synthesis, the collation and analysis of quantitative and qualitative evidence takes place in a sequence with one synthesis informing the other ( box 4 ). 6 These designs can be seen to build on the work of Sandelowski et al , 32 35 particularly in relation to the transformation of data from qualitative to quantitative (and vice versa) and the sequential synthesis design, with a cyclical approach to reviewing that evokes Sandelowski’s contingent design.

Convergent and sequential synthesis designs 34

Convergent synthesis design.

Qualitative and quantitative research is collected and analysed at the same time in a parallel or complementary manner. Integration can occur at three points:

a. Data-based convergent synthesis design

All included studies are analysed using the same methods and results are presented together. As only one synthesis method is used, data transformation occurs (qualitising or quantitising). Usually addresses one review question.

b. Results-based convergent synthesis design

Qualitative and quantitative data are analysed and presented separately but integrated using a further synthesis method, eg, narratively, using tables or matrices, or by reanalysing evidence. The results of both syntheses are combined in a third synthesis. Usually addresses an overall review question with subquestions.

c. Parallel-results convergent synthesis design

Qualitative and quantitative data are analysed and presented separately, with integration occurring in the interpretation of results in the discussion section. Usually addresses two or more complementary review questions.

Sequential synthesis design

A two-phase approach: data collection and analysis of one type of evidence (eg, qualitative) occurs after, and is informed by, the collection and analysis of the other type (eg, quantitative). Usually addresses an overall question with subquestions, with both syntheses complementing each other.

The three case studies ( table 1 , online supplementary files 1–3 ) illustrate the diverse combination of review designs and synthesis methods that were considered the most appropriate for specific guidelines.

Methods for conducting mixed-method reviews in the context of guidelines for complex interventions

In this section, we draw on examples where specific review designs and methods have been or can be used to explore selected aspects of complexity in guidelines or systematic reviews. We also identify other review methods that could potentially be used to explore aspects of complexity. Of particular note, we could not find any specific examples of systematic methods to synthesise highly diverse research designs as advocated by Petticrew et al 17 and summarised in tables 2 and 3 . For example, we could not find examples of methods to synthesise qualitative studies, case studies, quantitative longitudinal data, possibly historical data, effectiveness studies providing evidence of differential effects across different contexts, and system modelling studies (eg, agent-based modelling) to explore system adaptivity.

There are different ways that quantitative and qualitative evidence can be integrated into a review and then into a guideline development process. In practice, some methods enable integration of different types of evidence in a single synthesis, while in other methods, the single systematic review may include a series of stand-alone reviews or syntheses that are then combined in a cross-study synthesis. Table 1 provides an overview of the characteristics of different review designs and methods and guidance on their applicability for a guideline process. Designs and methods that have already been used in WHO guideline development are described in part A of the table. Part B outlines a design and method that can be used in a guideline process, and part C covers those that have the potential to integrate quantitative, qualitative and mixed-method evidence in a single review design (such as meta-narrative reviews and Bayesian syntheses), but their application in a guideline context has yet to be demonstrated.

Points of integration when integrating quantitative and qualitative evidence in guideline development

Depending on the review design (see boxes 3 and 4 ), integration can potentially take place at a review team and design level, and more commonly at several key points of the review or guideline process. The following sections outline potential points of integration and associated practical considerations when integrating quantitative and qualitative evidence in guideline development.

Review team level

In a guideline process, it is common for syntheses of quantitative and qualitative evidence to be done separately by different teams and then to integrate the evidence. A practical consideration relates to the organisation, composition and expertise of the review teams and ways of working. If the quantitative and qualitative reviews are being conducted separately and then brought together by the same team members, who are equally comfortable operating within both paradigms, then a consistent approach across both paradigms becomes possible. If, however, a team is being split between the quantitative and qualitative reviews, then the strengths of specialisation can be harnessed, for example, in quality assessment or synthesis. Optimally, at least one, if not more, of the team members should be involved in both quantitative and qualitative reviews to offer the possibility of making connections throughout the review and not simply at pre-agreed junctures. This mirrors O’Cathain’s conclusion that mixed-methods primary research tends to work only when there is a principal investigator who values and is able to oversee integration. 9 10 While the above decisions have been articulated in the context of two types of evidence, variously quantitative and qualitative, they equally apply when considering how to handle studies reporting a mixed-method study design, where data are usually disaggregated into quantitative and qualitative for the purposes of synthesis (see case study 3—risk communication in humanitarian disasters).

Question formulation

Clearly specified key question(s), derived from a scoping or consultation exercise, will make it clear whether quantitative and qualitative evidence is required in a guideline development process and which aspects will be addressed by which types of evidence. For the remaining stages of the process, as documented below, a review team faces the choice of whether to handle each type of evidence separately, whether sequentially or in parallel, with a view to joining the two products on completion, or to attempt integration throughout the review process. In each case, the underlying trade-off is between efficiency and comparability on the one hand and sensitivity to the underlying paradigm on the other.

Once key questions are clearly defined, the guideline development group typically needs to consider whether to conduct a single sensitive search to address all potential subtopics (lumping) or whether to conduct specific searches for each subtopic (splitting). 36 A related consideration is whether to search separately for qualitative, quantitative and mixed-method evidence ‘streams’ or whether to conduct a single search and then identify specific study types at the subsequent sifting stage. These two considerations often mean a trade-off between a single search process involving very large numbers of records and a more protracted search process retrieving smaller numbers of records. Both approaches have advantages, and the choice may depend on the respective availability of resources for searching and sifting.
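The lumping/splitting decision can be made concrete with search strings. The sketch below uses hypothetical PubMed-style queries — the topic, terms and field tags are invented purely for illustration, not taken from any of the case studies:

```python
# Illustrative only: a single sensitive ("lumped") query vs stream-specific
# ("split") queries. All terms and field tags below are invented examples.
LUMPED_QUERY = '(antenatal care[tiab]) AND (attendance[tiab] OR uptake[tiab])'

# Splitting reuses the same topical core but adds a study-type filter per
# evidence stream, trading one very large sift for several smaller searches.
SPLIT_QUERIES = {
    "quantitative": f'({LUMPED_QUERY}) AND (randomized[tiab] OR trial[tiab] OR cohort[tiab])',
    "qualitative": f'({LUMPED_QUERY}) AND (qualitative[tiab] OR interview*[tiab])',
    "mixed_methods": f'({LUMPED_QUERY}) AND ("mixed method*"[tiab])',
}
```

With the lumped query, study types are identified at the sifting stage; with the split queries, the filtering work is pushed into the search itself.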

Screening and selecting studies

Closely related to decisions around searching are considerations relating to screening and selecting studies for inclusion in a systematic review. An important consideration here is whether the review team will screen records for all review types, regardless of their subsequent involvement (‘altruistic sifting’), or specialise in screening for the study type with which they are most familiar. The risk of missing relevant reports might be minimised by whole team screening for empirical reports in the first instance and then coding them for a specific quantitative, qualitative or mixed-methods report at a subsequent stage.

Assessment of methodological limitations in primary studies

Within a guideline process, review teams may be more limited in their choice of instruments for assessing the methodological limitations of primary studies: there are mandatory requirements to use the Cochrane risk of bias tool 37 to feed into the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach, 38 or to select from a small pool of qualitative appraisal instruments in order to apply GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative Research) 39 to assess the overall certainty or confidence in findings. The Cochrane Qualitative and Implementation Methods Group has recently issued guidance on the selection of appraisal instruments and core assessment criteria. 40 The Mixed-Methods Appraisal Tool, which is currently undergoing further development, offers a single quality assessment instrument for quantitative, qualitative and mixed-methods studies. 41 Other options include using corresponding instruments from within the same ‘stable’, for example, using different Critical Appraisal Skills Programme instruments. 42 While using instruments developed by the same team or organisation may achieve a degree of epistemological consonance, the benefits may come more from consistency of approach and reporting than from a shared view of quality. Alternatively, a more paradigm-sensitive approach would involve selecting the best instrument for each respective review while deferring the challenges of later heterogeneity of reporting.

Data extraction

The way in which data and evidence are extracted from primary research studies for review will be influenced by the type of integrated synthesis being undertaken and the review purpose. Initially, decisions need to be made regarding the nature and type of data and evidence that are to be extracted from the included studies. Method-specific reporting guidelines 43 44 provide a good template as to what quantitative and qualitative data it is potentially possible to extract from different types of method-specific study reports, although in practice reporting quality varies. Online supplementary file 5 provides a hypothetical example of the different types of studies from which quantitative and qualitative evidence could potentially be extracted for synthesis.

The decisions around what data or evidence to extract will be guided by how ‘integrated’ the mixed-method review will be. For those reviews where the quantitative and qualitative findings of studies are synthesised separately and integrated at the point of findings (eg, segregated or contingent approaches or sequential synthesis design), separate data extraction approaches will likely be used.

Where integration occurs during the process of the review (eg, integrated approach or convergent synthesis design), an integrated approach to data extraction may be considered, depending on the purpose of the review. This may involve the use of a data extraction framework, the choice of which needs to be congruent with the approach to synthesis chosen for the review. 40 45 The integrative or theoretical framework may be decided on a priori if a pre-developed theoretical or conceptual framework is available in the literature. 27 The development of a framework may alternatively arise from the reading of the included studies, in relation to the purpose of the review, early in the process. The Cochrane Qualitative and Implementation Methods Group provide further guidance on extraction of qualitative data, including use of software. 40
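One way to operationalise an a priori data extraction framework for an integrated review is as a simple structured template. The sketch below is purely illustrative — the section and field names are assumptions, not a framework prescribed by the paper or by the Cochrane guidance it cites:

```python
# A hypothetical a priori extraction framework for a convergent (integrated)
# mixed-method review. Every section and field name here is illustrative.
EXTRACTION_FRAMEWORK = {
    "study": ["id", "design", "country", "setting"],
    "quantitative": ["population", "intervention", "comparator", "outcomes"],
    "qualitative": ["participants", "phenomenon_of_interest", "themes"],
    "integration": ["framework_domain", "contextual_factors", "implementation_barriers"],
}

def extract(record, framework=EXTRACTION_FRAMEWORK):
    """Pull only the fields named in the framework, grouped by section.
    Fields absent from a study report come back as None, making gaps visible."""
    return {
        section: {field: record.get(field) for field in fields}
        for section, fields in framework.items()
    }
```

A deductive template like this mirrors the 'framework decided a priori' option; the alternative described in the text — developing the framework from early reading of the included studies — would instead grow these field lists inductively.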

Synthesis and integration

Relatively few synthesis methods are integrated from the outset, and these methods have generally been subject to less testing and evaluation, particularly in a guideline context (see table 1 ). A review design that is integrated from the outset may be suitable for some guideline contexts (such as case study 3—risk communication in humanitarian disasters—where there was little evidence of effect), but in general, if there are sufficient trials, then a separate systematic review and meta-analysis will be required for a guideline. Other papers in this series offer guidance on methods for synthesising quantitative 46 and qualitative evidence 14 in reviews that take a complexity perspective. Further guidance on integrating quantitative and qualitative evidence in a systematic review is provided by the Cochrane Qualitative and Implementation Methods Group. 19 27 29 40 47

Types of findings produced by specific methods

It is highly likely (unless there are well-designed process evaluations) that the primary studies will not themselves seek to address the complexity-related questions required for a guideline process, in which case review authors will need to configure the available evidence and transform it through the synthesis process to produce explanations, propositions and hypotheses (ie, findings) that were not obvious at the primary study level. It is important that guideline commissioners, developers and review authors are aware that specific methods are intended to produce a type of finding with a specific purpose (such as developing new theory in the case of meta-ethnography). 48 Case study 1 (antenatal care guideline) provides an example of how a meta-ethnography was used to develop new theory as an end product, 48 49 as well as a framework synthesis that produced descriptive and explanatory findings that were more easily incorporated into the guideline process. 27 The definitions ( box 5 ) may be helpful when distinguishing the different types of findings.

Different levels of findings

Descriptive findings —qualitative evidence-driven translated descriptive themes that do not move beyond the primary studies.

Explanatory findings —may either be at a descriptive or theoretical level. At the descriptive level, qualitative evidence is used to explain phenomena observed in quantitative results, such as why implementation failed in specific circumstances. At the theoretical level, the transformed and interpreted findings that go beyond the primary studies can be used to explain the descriptive findings. The latter description is generally the accepted definition in the wider qualitative community.

Hypothetical or theoretical findings —qualitative evidence-driven transformed themes (or lines of argument) that go beyond the primary studies. Although the two are similar, Thomas and Harden 56 distinguish between the purposes of two types of theoretical findings: analytical themes and the product of meta-ethnographies, third-order interpretations. 48

Analytical themes are a product of interrogating descriptive themes by placing the synthesis within an external theoretical framework (such as the review question and subquestions) and are considered more appropriate when a specific review question is being addressed (eg, in a guideline or to inform policy). 56

Third-order interpretations come from translating studies into one another while preserving the original context and are more appropriate when a body of literature is being explored in and of itself with broader or emergent review questions. 48

Bringing mixed-method evidence together in evidence to decision (EtD) frameworks

A critical element of guideline development is the formulation of recommendations by the Guideline Development Group, and EtD frameworks help to facilitate this process. 16 The EtD framework can also be used as a mechanism to integrate and display quantitative and qualitative evidence and findings mapped against the EtD framework domains with hyperlinks to more detailed evidence summaries from contributing reviews (see table 1 ). It is commonly the EtD framework that enables the findings of the separate quantitative and qualitative reviews to be brought together in a guideline process. Specific challenges when populating the DECIDE evidence to decision framework 15 were noted in case study 3 (risk communication in humanitarian disasters) as there was an absence of intervention effect data and the interventions to communicate public health risks were context specific and varied. These problems would not, however, have been addressed by substitution of the DECIDE framework with the new INTEGRATE 16 evidence to decision framework. A different type of EtD framework needs to be developed for reviews that do not include sufficient evidence of intervention effect.

Mixed-method review and synthesis methods are generally the least developed of all systematic review methods. It is acknowledged that methods for combining quantitative and qualitative evidence are generally poorly articulated. 29 50 There are however some fairly well-established methods for using qualitative evidence to explore aspects of complexity (such as contextual, implementation and outcome complexity), which can be combined with evidence of effect (see sections A and B of table 1 ). 14 There are good examples of systematic reviews that use these methods to combine quantitative and qualitative evidence, and examples of guideline recommendations that were informed by evidence from both quantitative and qualitative reviews (eg, case studies 1–3). With the exception of case study 3 (risk communication), the quantitative and qualitative reviews for these specific guidelines have been conducted separately, and the findings subsequently brought together in an EtD framework to inform recommendations.

Other mixed-method review designs have potential to contribute to understanding of complex interventions and to explore aspects of wider health systems complexity but have not been sufficiently developed and tested for this specific purpose, or used in a guideline process (section C of table 1 ). Some methods such as meta-narrative reviews also explore different questions to those usually asked in a guideline process. Methods for processing (eg, quality appraisal) and synthesising the highly diverse evidence suggested in tables 2 and 3 that are required to explore specific aspects of health systems complexity (such as system adaptivity) and to populate some sections of the INTEGRATE EtD framework remain underdeveloped or in need of development.

In addition to the required methodological development mentioned above, there is no GRADE approach 38 for assessing confidence in findings developed from combined quantitative and qualitative evidence. Another paper in this series outlines how to deal with complexity and grading different types of quantitative evidence, 51 and the GRADE CERQual approach for qualitative findings is described elsewhere, 39 but both these approaches are applied to method-specific and not mixed-method findings. An unofficial adaptation of GRADE was used in the risk communication guideline that reported mixed-method findings. There is also no reporting guideline for mixed-method reviews, 47 so for now reports will need to conform to the relevant reporting requirements of the respective method-specific guideline. There is a need to further adapt and test DECIDE, 15 WHO-INTEGRATE 16 and other types of evidence to decision frameworks to accommodate evidence from mixed-method syntheses which do not set out to determine the statistical effects of interventions and in circumstances where there are no trials.

When conducting quantitative and qualitative reviews that will subsequently be combined, there are specific considerations for managing and integrating the different types of evidence throughout the review process. We have summarised different options for combining qualitative and quantitative evidence in mixed-method syntheses that guideline developers and systematic reviewers can choose from, as well as outlining the opportunities to integrate evidence at different stages of the review and guideline development process.

Review commissioners, authors and guideline developers generally have less experience of combining qualitative and quantitative evidence in mixed-methods reviews. In particular, there is a relatively small group of reviewers who are skilled at undertaking fully integrated mixed-method reviews. Commissioning additional qualitative and mixed-method reviews creates an additional cost. Large complex mixed-method reviews generally take more time to complete. Careful consideration needs to be given as to which guidelines would benefit most from additional qualitative and mixed-method syntheses. More training is required to develop capacity, and there is a need to develop processes for preparing the guideline panel to consider and use mixed-method evidence in their decision-making.

This paper has presented how qualitative and quantitative evidence, combined in mixed-method reviews, can help understand aspects of complex interventions and the systems within which they are implemented. There are further opportunities to use these methods, and to further develop the methods, to look more widely at additional aspects of complexity. There is a range of review designs and synthesis methods to choose from depending on the question being asked or the questions that may emerge during the conduct of the synthesis. Additional methods need to be developed (or existing methods further adapted) in order to synthesise the full range of diverse evidence that is desirable to explore the complexity-related questions when complex interventions are implemented into health systems. We encourage review commissioners and authors, and guideline developers to consider using mixed-methods reviews and synthesis in guidelines and to report on their usefulness in the guideline development process.

Handling editor: Soumyadeep Bhaumik

Contributors: JN, AB, GM, KF, ÖT and ES drafted the manuscript. All authors contributed to paper development and writing and agreed the final manuscript. Anayda Portela and Susan Norris from WHO managed the series. Helen Smith was series Editor. We thank all those who provided feedback on various iterations.

Funding: Funding provided by the World Health Organization Department of Maternal, Newborn, Child and Adolescent Health through grants received from the United States Agency for International Development and the Norwegian Agency for Development Cooperation.

Disclaimer: ÖT is a staff member of WHO. The author alone is responsible for the views expressed in this publication and they do not necessarily represent the decisions or policies of WHO.

Competing interests: No financial interests declared. JN, AB and ÖT have an intellectual interest in GRADE CERQual; and JN has an intellectual interest in the iCAT_SR tool.

Patient consent: Not required.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data sharing statement: No additional data are available.

Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Quantitative and qualitative similarity measure for data clustering analysis

  • Published: 08 August 2024


  • Jamil AlShaqsi   ORCID: orcid.org/0000-0002-4408-7967 1 ,
  • Wenjia Wang   ORCID: orcid.org/0000-0001-9372-0418 2 ,
  • Osama Drogham 3 , 4 &
  • Rami S. Alkhawaldeh   ORCID: orcid.org/0000-0002-2413-7074 5  


This paper introduces a novel similarity function that evaluates both the quantitative and qualitative similarities between data instances, named QQ-Means (Qualitative and Quantitative-Means). The values are naturally scaled to fall within the range of − 1 to 1. The magnitude signifies the extent of quantitative similarity, while the sign denotes qualitative similarity. The effectiveness of the QQ-Means for cluster analysis is tested by incorporating it into the K-means clustering algorithm. We compare the results of the proposed distance measure with commonly used distance or similarity measures such as Euclidean distance, Hamming distance, Mutual Information, Manhattan distance, and Chebyshev distance. These measures are also applied to the classic K-means algorithm or its variations to ensure consistency in the experimental procedure and conditions. The QQ-Means similarity metric was evaluated on gene-expression datasets and real-world complex datasets. The experimental findings demonstrate the effectiveness of the novel similarity measurement method in extracting valuable information from the data.
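The exact QQ-Means formula is defined in the paper itself; as a hedged illustration of the stated idea only — magnitude in [0, 1] from quantitative (numeric) closeness, sign from qualitative (categorical) agreement — here is a minimal Python sketch. The specific combination rule below is our assumption, not the authors' definition:

```python
import numpy as np

def qq_similarity(x, y, categorical_mask):
    """Toy signed similarity in [-1, 1]: magnitude reflects numeric closeness,
    sign reflects categorical agreement. Illustrative only, not the paper's
    actual QQ-Means formula."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    numeric = ~categorical_mask

    # Quantitative part: normalised absolute difference gives a value in [0, 1].
    if numeric.any():
        d = np.abs(x[numeric] - y[numeric]).sum()
        scale = np.abs(x[numeric]).sum() + np.abs(y[numeric]).sum() + 1e-12
        magnitude = 1.0 - d / scale
    else:
        magnitude = 1.0

    # Qualitative part: +1 if categorical codes mostly agree, -1 otherwise.
    if categorical_mask.any():
        agree = (x[categorical_mask] == y[categorical_mask]).mean()
        sign = 1.0 if agree >= 0.5 else -1.0
    else:
        sign = 1.0

    return sign * magnitude
```

A measure of this shape could then replace Euclidean distance in a K-means-style assignment step (maximising similarity rather than minimising distance), which is how the paper evaluates QQ-Means against Euclidean, Hamming, Manhattan and Chebyshev baselines.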



Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.

UCI machine learning repository: https://archive.ics.uci.edu/

Rehman, A., Naz, S., Razzak, I.: Leveraging big data analytics in healthcare enhancement: trends, challenges and opportunities. Multimed. Syst. 28 (4), 1339–1371 (2022)

Article   Google Scholar  

Cantelmi, R., Di Gravio, G., Patriarca, R.: Reviewing qualitative research approaches in the context of critical infrastructure resilience. Environ. Syst. Decis. 41 (3), 341–376 (2021)

Ikotun, A.M., Ezugwu, A.E., Abualigah, L., Abuhaija, B., Heming, J.: K-means clustering algorithms: a comprehensive review, variants analysis, and advances in the era of big data. Inform. Sci. 622 , 178–210 (2023)

Oyewole, G.J., Thopil, G.A.: Data clustering: application and trends. Artif. Intell. Rev. 56 (7), 6439–6475 (2023)

Dorgham, O., Naser, M., Ryalat, M., Hyari, A., Al-Najdawi, N., Mirjalili, S.: U-NetCTS: U-Net deep neural network for fully automatic segmentation of 3D CT DICOM volume. Smart Health 26 , 100304 (2022)

Ran, X., Xi, Y., Lu, Y., Wang, X., Lu, Z.: Comprehensive survey on hierarchical clustering algorithms and the recent developments. Artif. Intell. Rev. 56 (8), 8219–8264 (2023)

Hassaoui, M., Hanini, M., El Kafhali, S.: Unsupervised clustering for a comparative methodology of machine learning models to detect domain-generated algorithms based on an alphanumeric features analysis. J. Netw. Syst. Manage. 32 (1), 1–38 (2024)

Li, B., Mostafavi, A.: Unraveling fundamental properties of power system resilience curves using unsupervised machine learning. Energy AI (2024). https://doi.org/10.1016/j.egyai.2024.100351

Sarker, I.H.: Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. SN Comput. Sci. 2 (6), 420 (2021)

Klemm, C., Vennemann, P.: Modeling and optimization of multi-energy systems in mixed-use districts: a review of existing methods and approaches. Renew. Sustain. Energy Rev. 135 , 110206 (2021)

Lee, J.H., Moon, I.-C., Oh, R.: Similarity search on wafer bin map through nonparametric and hierarchical clustering. IEEE Trans. Semicond. Manuf. 34 (4), 464–474 (2021)

José-García, A., Handl, J., Gómez-Flores, W., Garza-Fabre, M.: An evolutionary many-objective approach to multiview clustering using feature and relational data. Appl. Soft Comput. 108 , 107425 (2021)

Irfan, S., Dwivedi, G., Ghosh, S.: Optimization of k-means clustering using genetic algorithm. In: 2017 International Conference on Computing and Communication Technologies for Smart Nation (IC3TSN), IEEE, pp. 156–161 (2017).

Verma, T., Gopalakrishnan, P.: Categorising the existing irradiance based blind control occupant behavior models (bc-obms) using unsupervised machine learning approach: a case of office building in india. Energy and Buildings 279 , 112700 (2023)

He, Q., Borgonovi, F., Suárez-Álvarez, J.: Clustering sequential navigation patterns in multiple-source reading tasks with dynamic time warping method. J. Comput. Assist. Learn. 39 (3), 719–736 (2023)

Fkih, F.: Similarity measures for collaborative filtering-based recommender systems: review and experimental comparison. J. King Saud Univ.-Comput. Inform. Sci. 34 (9), 7645–7669 (2022)

Google Scholar  

Sharma, K.K., Seal, A., Yazidi, A., Selamat, A., Krejcar, O.: Clustering uncertain data objects using jeffreys-divergence and maximum bipartite matching based similarity measure. IEEE Access 9 , 79505–79519 (2021)

Sharma, K.K., Seal, A., Yazidi, A., Krejcar, O.: A new adaptive mixture distance-based improved density peaks clustering for gearbox fault diagnosis. IEEE Trans. Instrum. Measure. 71 , 1–16 (2022)

Bui, Q.-T., Ngo, M.-P., Snasel, V., Pedrycz, W., Vo, B.: Information measures based on similarity under neutrosophic fuzzy environment and multi-criteria decision problems. Eng. Appl. Artif. Intell. 122 , 106026 (2023)

Cheng, L., Zhu, P., Sun, W., Han, Z., Tang, K., Cui, X.: Time series classification by euclidean distance-based visibility graph. Phys. A: Stat. Mech. Its Appl. 625 , 129 (2023)

Mao, J., Jain, A.K.: A self-organizing network for hyperellipsoidal clustering (hec). IEEE Trans. Neural Netw. 7 (1), 16–29 (1996)

Kouser, K., Sunita, S.: A comparative study of k means algorithm by different distance measures. Int. J. Innov. Res. Comput. Commun. Eng. 1 (9), 2443–2447 (2013)

Lance, G.N., Williams, W.T.: Mixed-data classificatory programs I-agglomerative systems. Aust. Comput. J. 1 (1), 15–20 (1967)

Hedges, T.: An empirical modification to linear wave theory. Proc. Ins. Civil Eng. 61 (3), 575–579 (1976)

Cheng, H., Liu, Z., Hou, L., Yang, J.: Sparsity-induced similarity measure and its applications. IEEE Trans. Circuits Syst. Video Technol. 26 (4), 613–626 (2012)

Simovici, D.A.: CLUSTERING: Theoretical and Practical Aspects. World Scientific, Singapore (2021)

Book   Google Scholar  

Huang, Z.: A fast clustering algorithm to cluster very large categorical data sets in data mining. Dmkd 3 (8), 34–39 (1997)

Tversky, A.: Features of similarity. Psychol. Rev. 84 (4), 327 (1977)

Chaturvedi, A., Green, P.E., Caroll, J.D.: K-modes clustering. J. Classif. 18 , 35–55 (2001)

Article   MathSciNet   Google Scholar  

Jiang, Y., Wang, X., Zheng, H.-T.: A semantic similarity measure based on information distance for ontology alignment. Inform. Sci. 278 , 76–87 (2014)

Gong, H., Li, Y., Zhang, J., Zhang, B., Wang, X.: A new filter feature selection algorithm for classification task by ensembling pearson correlation coefficient and mutual information. Eng. Appl. Artif. Intell. 131 , 107865 (2024)

Zhou, H., Wang, X., Zhang, Y.: Feature selection based on weighted conditional mutual information. Appl. Comput. Inform. 20 (1/2), 55–68 (2024)

He, Z., Xu, X., Deng, S.: K-anmi: a mutual information based clustering algorithm for categorical data. Inform. Fusion 9 (2), 223–233 (2008)

Velesaca, H.O., Bastidas, G., Rouhani, M., Sappa, A.D.: Multimodal image registration techniques: a comprehensive survey. Multimed. Tools Appl. (2024). https://doi.org/10.1007/s11042-023-17991-2

Lin, Y.-S., Jiang, J.-Y., Lee, S.-J.: A similarity measure for text classification and clustering. IEEE Trans. Knowl. Data Eng. 26 (7), 1575–1590 (2013)

Ashraf, S., Naeem, M., Khan, A., Rehman, N., Pandit, M., et al.: Novel information measures for fermatean fuzzy sets and their applications to pattern recognition and medical diagnosis. Comput. Intell. Neurosci. (2023). https://doi.org/10.1155/2023/9273239

Salcedo, G.E., Montoya, A.M., Arenas, A.F.: A spectral similarity measure between time series applied to the identification of protein-protein interactions. In: BIOMAT 2014: International Symposium on Mathematical and Computational Biology, World Scientific, pp. 129–139 (2015)

Dubey, V.K., Saxena, A.K.: A sequential cosine similarity based feature selection technique for high dimensional datasets. In: 2015 39th National Systems Conference (NSC), IEEE, pp. 1–5 (2015)

Verde, R., Irpino, A., Balzanella, A.: Dimension reduction techniques for distributional symbolic data. IEEE Trans. Cybern. 46 (2), 344–355 (2015)

Li, T., Rezaeipanah, A., El Din, E.M.T.: An ensemble agglomerative hierarchical clustering algorithm based on clusters clustering technique and the novel similarity measurement. J. King Saud Univ.-Comput. Inform. Sci. 34 (6), 3828–3842 (2022)

Bagherinia, A., Minaei-Bidgoli, B., Hosseinzadeh, M., Parvin, H.: Reliability-based fuzzy clustering ensemble. Fuzzy Sets Syst. 413 , 1–28 (2021)

Dogan, A., Birant, D.: K-centroid link: a novel hierarchical clustering linkage method. Appl. Intell. (2022). https://doi.org/10.1007/s10489-021-02624-8

Ma, T., Zhang, Z., Guo, L., Wang, X., Qian, Y., Al-Nabhan, N.: Semi-supervised selective clustering ensemble based on constraint information. Neurocomputing 462 , 412–425 (2021)

Al-Shaqsi, J.,Wang, W.: A clustering ensemble method for clustering mixed data. In: The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2010). IEEE

Poggiali, A., Berti, A., Bernasconi, A., Del Corso, G.M., Guidotti, R.: Quantum clustering with k-means: a hybrid approach. Theor. Comput. Sci. (2024). https://doi.org/10.1016/j.tcs.2024.114466

Hu, H., Liu, J., Zhang, X., Fang, M.: An effective and adaptable k-means algorithm for big data cluster analysis. Pattern Recognit. 139 , 109404 (2023)

Al Shaqsi, J., Wang, W.: Estimating the predominant number of clusters in a dataset. Intelligent Data Analysis 17(4), 603–626 (2013)

Theodoridis, S., Koutroumbas, K.: Pattern Recognition. Elsevier, Amsterdam (2006)

Halkidi, M., Batistakis, Y., Vazirgiannis, M.: Cluster validity methods: part I. ACM Sigmod Record 31 (2), 40–45 (2002)

Aranganayagi, S., Thangavel, K.: Improved k-modes for categorical clustering using weighted dissimilarity measure. Int. J. Comput. Inform. Eng. 3 (3), 729–735 (2009)

He, Z., Xu, X., Deng, S.: Scalable algorithms for clustering large datasets with mixed type attributes. Int. J. Intell. Syst. 20 (10), 1077–1089 (2005)

Yeung, K.Y., Ruzzo, W.L.: Details of the adjusted rand index and clustering algorithms, supplement to the paper an empirical study on principal component analysis for clustering gene expression data. Bioinformatics 17 (9), 763–774 (2001)

Yang, Y., Guan, X., You, J.: Clope: a fast and effective clustering algorithm for transactional data. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 682–687 (2002)

Tasoulis, D.K., Vrahatis, M.N.: Generalizing the k-windows clustering algorithm in metric spaces. Math. Comput. Model. 46 (1–2), 268–277 (2007)

Xiao, Y., Li, H.-B., Zhang, Y.-P.: Dbgsa: a novel data adaptive bregman clustering algorithm. Eng. Appl. Artif. Intell. 131 , 107846 (2024)

Demšar, J.: Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 7 , 1–30 (2006)

MathSciNet   Google Scholar  

Chai, J.S., Selvachandran, G., Smarandache, F., Gerogiannis, V.C., Son, L.H., Bui, Q.-T., Vo, B.: New similarity measures for single-valued neutrosophic sets with applications in pattern recognition and medical diagnosis problems. Complex intell. systems. 7 , 703–723 (2021)

Ghobaei-Arani, M.: A workload clustering based resource provisioning mechanism using biogeography based optimization technique in the cloud based systems. Soft Comput. 25 (5), 3813–3830 (2021)

Rezaeipanah, A., Amiri, P., Nazari, H., Mojarad, M., Parvin, H.: An energy-aware hybrid approach for wireless sensor networks using re-clustering-based multi-hop routing. Wirel. Personal Commun. 120 (4), 3293–3314 (2021)

Li, G., Chen, Y., Cao, D., Qu, X., Cheng, B., Li, K.: Extraction of descriptive driving patterns from driving data using unsupervised algorithms. Mech. Syst. Signal Proc. 156 , 107589 (2021)

Al Shaqsi, J., Borghan, M., Drogham, O., Al Whahaibi, S.: A machine learning approach to predict the parameters of covid-19 severity to improve the diagnosis protocol in oman. SN Appl. Sci. 5 (10), 273 (2023)

Al Shaqsi, J., Drogham, O., Aburass, S.: Advanced machine learning based exploration for predicting pandemic fatality: Oman dataset. Inform. Med. Unlocked 43 , 101393 (2023)

Zhang, C., Huang, W., Niu, T., Liu, Z., Li, G., Cao, D.: Review of clustering technology and its application in coordinating vehicle subsystems. Automot. Innov. 6 (1), 89–115 (2023)

Yeung, K.Y., Medvedovic, M., Bumgarner, R.E.: Clustering gene-expression data with repeated measurements. Genome Biol. 4 , 1–17 (2003)

Fiorini, S.: Gene expression cancer RNA-Seq data set (2021)

Zhang, Y., Deng, Q., Liang, W., Zou, X., et al.: An efficient feature selection strategy based on multiple support vector machine technology with gene expression data. BioMed Res. Int. 20 , 18 (2018). https://doi.org/10.1155/2018/7538204

Weinstein, J.N., Collisson, E.A., Mills, G.B., Shaw, K.R., Ozenberger, B.A., Ellrott, K., Shmulevich, I., Sander, C., Stuart, J.M.: The cancer genome atlas pan-cancer analysis project. Nat. Genet. 45 (10), 1113–1120 (2013)

This work is funded.

Author information

Authors and Affiliations

Information Systems Department, Sultan Qaboos University, Muscat, Oman

Jamil AlShaqsi

School of Computing Sciences, University of East Anglia, Norwich, UK

Wenjia Wang

Prince Abdullah bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan

Osama Drogham

School of Information Technology, Skyline University College, University City of Sharjah, Sharjah, 1797, United Arab Emirates

Department of Computer Information Systems, The University of Jordan, Aqaba, 77110, Jordan

Rami S. Alkhawaldeh

Contributions

Conceptualization, A, J., W, W.; methodology, A, J., W, W.; software, A, J., W, W., D, O., A, R.; formal analysis, A, J., A, R.; investigation, A, J., W, W., A, R.; resources, A, J., W, W., D, O., A, R.; data curation, W, W., D, O., A, R.; writing—original draft preparation, A, J., W, W., D, O.; writing—review and editing, A, R., A, J.; visualization, A, J., W, W., A, R.; supervision, A, J., W, W.; project administration, A, J., W, W.; funding acquisition, A, J., D, O. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Jamil AlShaqsi .

Ethics declarations

Conflict of interest

The author(s) declare(s) that there is no conflict of interest regarding the publication of this paper.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Proof of the properties of the proposed distance metric

The core of our proposed similarity measure is the new distance function, so this appendix provides the mathematical proofs that the distance function is a metric.

Definition of the distance function: For two data points \(x, y \in \Re\), the distance \(\delta \left( x,y\right)\) between them is defined as

Theorem 1: \(\delta\) is a metric on R, and satisfies the following three properties:

Non-negativity: \(\delta \left( x,y\right) \ge 0\)

\(\mathrm {symmetry:\ }\delta \left( x,y\right) =\delta \left( y,x\right)\)

Triangle inequality: \(\delta \left( x,y\right) \le \delta \left( x,z\right) +\delta \left( z,y\right) ,\ \forall \ x,y,\ z\ \in \ R\)

The theorem is proved below by considering all possible cases on the positive and negative real domains separately.

Firstly, we consider \(\delta :\ R^{+}\times R^{+}\rightarrow \left[ 0,1\right]\)

(N.B. we assume \(0\ \in \ R^+\).)

Let \(y=a\) be fixed. Then, as x varies, the function \(x\mapsto \delta \left( x,a\right)\) has the curve shown in Fig. 6.

When \(x\ge a\), \(\delta \left( x,a\right) =\frac{x-a}{x} =1-\frac{a}{x}\), and \({\mathop {\lim }\limits _{x\rightarrow \infty }} \left( 1-\frac{a}{x} \right) =1\)
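On the positive reals, this case together with symmetry corresponds to \(\delta \left( x,y\right) =\left| x-y\right| /\max \left( x,y\right)\). As a quick numerical sanity check of the stated behaviour (a sketch under that reading; the helper name `delta_pos` is ours, not from the paper):

```python
def delta_pos(x: float, y: float) -> float:
    """Distance on R+ implied by the case x >= a:
    delta(x, a) = (x - a) / x = |x - a| / max(x, a)."""
    if x == 0 and y == 0:
        return 0.0
    return abs(x - y) / max(x, y)

# delta(x, a) = 1 - a/x for x >= a, approaching 1 as x grows
assert delta_pos(4.0, 2.0) == 0.5           # (4 - 2) / 4
assert delta_pos(2.0, 4.0) == 0.5           # symmetry
assert delta_pos(1e9, 2.0) > 0.999          # limit toward 1
assert 0.0 <= delta_pos(3.0, 7.0) <= 1.0    # bounded in [0, 1]
```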

Corollary A1

\(\delta\) is a metric on R \({}^{+}\) , and satisfies the following three properties:

Fig. 6: Illustration of the distance function on the positive real domain

Triangle inequality: \(\delta \left( x,y\right) \le \delta \left( x,z\right) +\delta \left( z,y\right) ,\ \forall \ x,y,\ z\ \in \ R^+\)

The first two properties are clear, but the third is less obvious. We therefore consider six cases to prove it.

CASE 1: \(\varvec{x}\varvec{\le }\varvec{y}\varvec{\le }\varvec{z}\)

Hence, in this case, \(\delta \left( x,y\right) \le \delta \left( x,z\right) +\delta \left( z,y\right)\)

CASE 2: \(\varvec{x}\varvec{\le }\varvec{z}\varvec{\le }\varvec{y}\)

CASE 3: \(\varvec{y}\varvec{\le }\varvec{x}\varvec{\le }\varvec{z}\)

By CASE 1, \(\delta \left( y,x\right) \le \delta \left( y,z\right) +\delta \left( z,x\right)\)

Hence, by symmetry of \(\delta \left( x,y\right) \le \delta \left( x,z\right) +\delta \left( z,y\right)\)

CASE 4: \(\varvec{y}\varvec{\le }\varvec{z}\varvec{\le }\varvec{x}\)

By CASE 2, \(\delta \left( y,x\right) \le \delta \left( y,z\right) +\delta \left( z,x\right)\)

Again, the result holds in this case by symmetry.

CASE 5: \(\varvec{z}\varvec{\le }\varvec{x}\varvec{\le }\varvec{y}\)

Hence, the Triangle inequality holds in this case.

CASE 6: \(\varvec{z}\varvec{\le }\varvec{y}\varvec{\le }\varvec{x}\)

By case 5, \(\delta \left( y,z\right) +\delta \left( z,x\right) \ge \delta \left( y,x\right)\)

Since we have considered all possible orderings of x ,  y ,  z and in all cases the Triangle inequality holds, the theorem is established.

Corollary A2

\(\delta\) is a metric on R \({}^{-}\)

Corollary A3: \(\delta\) is a metric on \(R^+\bigcup {\ \left\{ 0\right\} }\ \textrm{where}\ \delta \left( 0,0\right) \ \mathrm {is\ assumed\ to\ be\ 0\ on\ }R^+\bigcup {\ \left\{ 0\right\} .}\)

Again, properties (1) and (2) of a metric are obvious; it is the Triangle inequality that needs careful consideration. If all three values of x, y, z are positive (or all negative), the result holds by the corollaries above.

If all are zero, the result is obvious.

So \(\delta\) is a metric on \({\varvec{R}}^{\varvec{+}}\bigcup {\left\{ \varvec{0}\right\} }\varvec{\ }\varvec{or}\varvec{\ \ }{\varvec{R}}^{\varvec{-}}\bigcup {\left\{ \varvec{0}\right\} }\)

We now have to consider the cases where not all of \(x,y,\textrm{and}\ z\) are either \(\ge 0\ \textrm{or}\ \le 0.\)

Consider the cases where one value is negative and two values are \(\ge 0\)

Case 1: \(\varvec{x}\varvec{\le }\varvec{0}\varvec{\ }\varvec{\textrm{and}}\varvec{\ }\varvec{0}\varvec{\le }\varvec{y}\varvec{\ }\varvec{\le }\varvec{z}.\)

Case 1a: \(x=-a,\ \left( a>0\right) \ \textrm{and}\ 0<a\le y\le z\)

Then consider \(\delta \left( x,z\right) +\delta \left( z,y\right) -\delta \left( x,y\right)\)

Case 1b: \(x=-a,\ \left( a>0\right) \ \textrm{and}\ 0\le y\le a\le z\)

Case 1c: \(x=-a,\ \left( a>0\right) \ \textrm{and}\ 0\le y\le z\le a\)

So for case 1 the Triangle inequality holds.

Case 2: \(\varvec{x}<0\ \varvec{\textrm{and}}\varvec{\ }\varvec{0}\varvec{\le }\varvec{z}\varvec{\ }\varvec{\le }\varvec{y}.\)

Again let \(x=-a,\ \left( a>0\right) .\)

Case 2a: \(a\le z\ \le y\)

Case 2b: \(\ 0\le z\ \le a\ \le y.\)

Case 2c: \(\ 0\le z\ \le y\ \le a,\ \ \textrm{if}\ z\ \ne \ 0\)

Hence, case 2 is established.

Hence, the case where x is the only negative value is established.

Now, consider the case where y is the only negative value but \(x\ge 0\ \textrm{and}\ z\ge 0.\)

Case 3: \(\varvec{y}<0\ \varvec{\textrm{and}}\varvec{\ }\varvec{0}\varvec{\le }\varvec{x}\varvec{\ }\varvec{\le }\varvec{z}\)

Case 3a: \(0<b\le x\ \le z\)

Case 3b: \(0\le x\le b\ \le z,\ \left( b>0\right)\)

Case 3c: \(0\le x\le z\ \le b\)

\(\textrm{if}\) z = 0  then x = 0  as well

Hence, the Triangle inequality for case 3 is established.

Case 4: \(\varvec{\ }\varvec{y}<0\ \varvec{\textrm{and}}\varvec{\ }\varvec{0}\varvec{\le }\varvec{z}\varvec{\ }\varvec{\le }\varvec{x}\)

Case 4a: \(0<b\le z\ \le x\)

Case 4b: \(0\le z\le b\ \le x\)

Case 4c: \(0\le z\le x\ \le b\)

if x = 0 then z = 0 also and then

Hence, the Triangle inequality holds for case 4.

Case 5: \(\ z<0\; \text{and}\; 0\le x\ \le y\)

Case 5a: \(0<c\le x\ \le y\)

Case 5b: \(0\le x\le c\ \le y\)

Case 5c: \(0\le x\le y\ \le c\)

if y = 0 then x = 0 also and

Hence, the Triangle inequality holds for this case.

Case 6: \(\varvec{z}<0\ \varvec{\textrm{and}}\varvec{\ }\varvec{0}\varvec{\le }\varvec{y}\varvec{\ }\varvec{\le }\varvec{x}\)

Case 6a: \(0<c\le y\ \le x\)

Case 6b: \(0\le y\le c\ \le x\)

Case 6c: \(0\le y\le x\ \le c\)

If x = 0 then y = 0 also then, \(\delta \left( x,z\right) +\delta \left( z,y\right) -\delta \left( x,y\right) =1+1-0>0.\)

Hence, the Triangle inequality holds for case 6.

Hence, for the case where just one value of x, y, z is negative, we have established that the Triangle inequality holds.

Now consider the case where just two values of x ,  y ,  z are negative and the remaining value is \(\ge 0\)

Negating all three values leaves the distance unchanged, so this follows by the results above, since either just one of the values \(-x,-y,-z\) is negative or all of \(-x,-y,-z\) are \(\ge 0\). Hence, in all possible cases, the Triangle inequality holds and we have established the Theorem that \(\delta\) is a metric on R.
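The case analysis above can be spot-checked numerically. The implementation below is our reconstruction from the properties stated in this appendix (same-sign distance \(\left| x-y\right| /\max \left( \left| x\right| ,\left| y\right| \right)\), opposite signs shifted into [1, 2], \(\delta \left( 0,0\right) =0\)); the exact published formula may differ in detail:

```python
import itertools
import random

def delta(x: float, y: float) -> float:
    """Reconstructed distance: quantitative part in [0, 1] for
    same-sign pairs, shifted into [1, 2] for opposite-sign pairs."""
    if x == 0 and y == 0:
        return 0.0
    ax, ay = abs(x), abs(y)
    if x * y >= 0:                          # same sign (zero matches either)
        return abs(x - y) / max(ax, ay)
    return 1.0 + min(ax, ay) / max(ax, ay)  # qualitative + quantitative part

# Check the triangle inequality over many triples covering every
# sign pattern, including zeros (small tolerance for float rounding).
random.seed(0)
pool = [0.0] + [random.uniform(-100.0, 100.0) for _ in range(29)]
for x, y, z in itertools.product(pool, repeat=3):
    assert delta(x, y) <= delta(x, z) + delta(z, y) + 1e-12
```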

It is instructive to draw the distance curve for the metric on the whole one-dimensional space \(R\left( -\infty ,+\infty \right)\), as shown in Figs. 6 and 7.

Fig. 7: Illustration of the new distance metric

The following properties can be seen from the figure.

When two data samples x and y have the same sign, i.e. \(\textrm{if}\ y>0\ and\ x\ge 0\ then\ \delta \left( x,y\right) \in \left[ 0,1\right]\) .

Specifically,

When one of two points, say y, is fixed at a, then,

When x and y have different signs, e.g. \(\textrm{if}\ y>0\ and\ x\le 0\ then\ \delta \left( x,y\right) \in \left[ 1,2\right]\) , representing both quantitative and qualitative difference.

When both data points are zeros, i.e.

When one of two is zero,

When one data point approaches infinity, the distance is bounded by one,

This property can be further explored and potentially used to deal with data outliers. Because the distance is bounded, the influence of a data point beyond a given value, e.g. an outlier, is limited; the farther out an outlier is, the less influence it has. This is not the case in other distance-based measures.
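The saturating behaviour can be illustrated numerically. The function below is our reconstruction from the properties stated in this appendix (the published equation may differ in detail); the point is that moving an outlier ten times farther out barely changes its distance to a fixed reference, unlike Euclidean distance:

```python
def delta(x: float, y: float) -> float:
    # Reconstructed from the stated properties of the metric.
    if x == 0 and y == 0:
        return 0.0
    ax, ay = abs(x), abs(y)
    if x * y >= 0:
        return abs(x - y) / max(ax, ay)
    return 1.0 + min(ax, ay) / max(ax, ay)

ref = 5.0
for outlier in (1e2, 1e3, 1e4, 1e5):
    assert delta(ref, outlier) < 1.0              # bounded (same sign)

# Diminishing influence: a 10x farther outlier moves the distance
# by well under 1e-3.
assert delta(ref, 1e5) - delta(ref, 1e4) < 1e-3
```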

Special non-differentiable points: with one data point fixed at a, the distance curve reaches its extreme values 0 and 2 at x = a and x = -a respectively; there the distance function is still continuous but not smooth, i.e. not differentiable.

More in-depth analysis is required on its theoretical properties and usefulness in various applications.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

AlShaqsi, J., Wang, W., Drogham, O. et al. Quantitative and qualitative similarity measure for data clustering analysis. Cluster Comput (2024). https://doi.org/10.1007/s10586-024-04664-4

Received : 11 April 2024

Revised : 15 June 2024

Accepted : 05 July 2024

Published : 08 August 2024

DOI : https://doi.org/10.1007/s10586-024-04664-4


  • K-means clustering
  • Similarity measure
  • Clustering analysis
  • Quantitative and qualitative similarity
  • Clustering purity
