

Controlled experiments


Introduction

How are hypotheses tested? As a simple example, imagine testing the hypothesis that seeds need water to sprout by setting up two pots of seeds:

  • One pot of seeds gets watered every afternoon.
  • The other pot of seeds doesn't get any water at all.

Control and experimental groups

Controlled experiment case study: CO2 and coral bleaching

As you read about this experiment, consider:

  • What your control and experimental groups would be
  • What your independent and dependent variables would be
  • What results you would predict in each group

Experimental setup

  • Some corals were grown in tanks of normal seawater, which is not very acidic (pH around 8.2). The corals in these tanks served as the control group.
  • Other corals were grown in tanks of seawater made more acidic than usual by the addition of CO2. One set of tanks was medium-acidity (pH about 7.9), while another set was high-acidity (pH about 7.65). Both the medium-acidity and high-acidity groups were experimental groups.
  • In this experiment, the independent variable was the acidity (pH) of the seawater. The dependent variable was the degree of bleaching of the corals.
  • The researchers used a large sample size and repeated their experiment. Each tank held 5 fragments of coral, and there were 5 identical tanks for each group (control, medium-acidity, and high-acidity). Note: none of these tanks was "acidic" on an absolute scale; the pH values were all above the neutral pH of 7.0. However, the two groups of experimental tanks were moderately and highly acidic to the corals, that is, relative to their natural habitat of plain seawater.
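The layout of the study (three groups, 5 tanks per group, 5 coral fragments per tank) can be sketched in code. This is a minimal Python sketch with made-up numbers: the `simulate_group` helper, the group means, and the 0-100 bleaching scale are all assumptions for illustration, not the study's actual measurements.

```python
import random

def simulate_group(mean_bleaching, tanks=5, frags_per_tank=5, sd=8.0, seed=0):
    """Return hypothetical bleaching scores (0-100) for one treatment group.

    Mirrors the study's layout (5 tanks x 5 fragments per group); the
    numbers are invented for illustration, not taken from the real data.
    """
    rng = random.Random(seed)
    return [min(100.0, max(0.0, rng.gauss(mean_bleaching, sd)))
            for _ in range(tanks * frags_per_tank)]

# One list of 25 scores per group (assumed typical bleaching levels)
control = simulate_group(10)   # control group, pH 8.2
medium = simulate_group(40)    # medium-acidity experimental group, pH 7.9
high = simulate_group(70)      # high-acidity experimental group, pH 7.65
```

Comparing the distribution of scores between the control list and each experimental list is what "analyzing the results" amounts to in this design.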

Analyzing the results

Works cited:

  • Hoegh-Guldberg, O. (1999). Climate change, coral bleaching, and the future of the world's coral reefs. Mar. Freshwater Res., 50, 839-866. Retrieved from www.reef.edu.au/climate/Hoegh-Guldberg%201999.pdf
  • Anthony, K. R. N., Kline, D. I., Diaz-Pulido, G., Dove, S., and Hoegh-Guldberg, O. (2008). Ocean acidification causes bleaching and productivity loss in coral reef builders. PNAS, 105(45), 17442-17446. http://dx.doi.org/10.1073/pnas.0804478105
  • University of California Museum of Paleontology. (2016). Misconceptions about science. In Understanding science. Retrieved from http://undsci.berkeley.edu/teaching/misconceptions.php
  • Hoegh-Guldberg, O. and Smith, G. J. (1989). The effect of sudden changes in temperature, light and salinity on the density and export of zooxanthellae from the reef corals Stylophora pistillata (Esper, 1797) and Seriatopora hystrix (Dana, 1846). J. Exp. Mar. Biol. Ecol., 129, 279-303. Retrieved from http://www.reef.edu.au/ohg/res-pic/HG%20papers/HG%20and%20Smith%201989%20BLEACH.pdf

Control Group Definition and Examples

Control Group in an Experiment

The control group is the set of subjects that does not receive the treatment in a study. In other words, it is the group where the independent variable is held constant. This is important because the control group is a baseline for measuring the effects of a treatment in an experiment or study. A controlled experiment is one which includes one or more control groups.

  • The experimental group experiences a treatment or change in the independent variable. In contrast, the independent variable is constant in the control group.
  • A control group is important because it allows meaningful comparison. The researcher compares the experimental group to it to assess whether or not there is a relationship between the independent and dependent variable and the magnitude of the effect.
  • There are different types of control groups. A controlled experiment has one or more control groups.

Control Group vs Experimental Group

The only difference between the control group and experimental group is that subjects in the experimental group receive the treatment being studied, while participants in the control group do not. Otherwise, all other variables between the two groups are the same.

Control Group vs Control Variable

A control group is not the same thing as a control variable. A control variable or controlled variable is any factor that is held constant during an experiment. Examples of common control variables include temperature, duration, and sample size. The control variables are the same for both the control and experimental groups.

Types of Control Groups

There are different types of control groups:

  • Placebo group : A placebo group receives a placebo , which is a fake treatment that resembles the treatment in every respect except for the active ingredient. Both the placebo and treatment may contain inactive ingredients that produce side effects. Without a placebo group, these effects might be attributed to the treatment.
  • Positive control group : A positive control group has conditions that guarantee a positive test result. The positive control group demonstrates an experiment is capable of producing a positive result. Positive controls help researchers identify problems with an experiment.
  • Negative control group : A negative control group consists of subjects that are not exposed to a treatment. For example, in an experiment looking at the effect of fertilizer on plant growth, the negative control group receives no fertilizer.
  • Natural control group : A natural control group usually is a set of subjects who naturally differ from the experimental group. For example, if you compare the effects of a treatment on women who have had children, the natural control group includes women who have not had children. Non-smokers are a natural control group in comparison to smokers.
  • Randomized control group : The subjects in a randomized control group are randomly selected from a larger pool of subjects. Often, subjects are randomly assigned to either the control or experimental group. Randomization reduces bias in an experiment. There are different methods of randomly assigning test subjects.
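Random assignment, as described for randomized control groups above, can be sketched in a few lines. A minimal Python sketch; the `randomize` helper and its round-robin dealing are my own illustration of one common method, not a standard API:

```python
import random

def randomize(subjects, groups=("control", "experimental"), seed=42):
    """Shuffle subjects, then deal them round-robin into the groups.

    Shuffling with a fixed seed keeps the example reproducible; real
    studies would use a fresh random source.
    """
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

assignment = randomize(range(20))
```

Because assignment depends only on the shuffle, every subject has the same chance of landing in either group, which is what reduces selection bias.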

Control Group Examples

Here are some examples of different control groups in action:

Negative Control and Placebo Group

For example, consider a study of a new cancer drug. The experimental group receives the drug. The placebo group receives a placebo, which contains the same ingredients as the drug formulation, minus the active ingredient. The negative control group receives no treatment. The negative control group is included because the placebo group still experiences some level of placebo effect, a response to receiving a form of sham treatment.

Positive and Negative Controls

For example, consider an experiment testing whether a new drug kills bacteria. The experimental group exposes bacterial cultures to the drug. If the cultures survive, the drug is ineffective; if they die, the drug is effective.

The positive control group has a culture of bacteria that carry a drug resistance gene. If the bacteria survive drug exposure (as intended), then it shows the growth medium and conditions allow bacterial growth. If the positive control group dies, it indicates a problem with the experimental conditions. A negative control group of bacteria lacking drug resistance should die. If the negative control group survives, something is wrong with the experimental conditions.
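The decision logic these controls enforce can be summed up in a small function: the experimental result is only readable if both controls behaved as expected. A minimal Python sketch; the function name, arguments, and messages are hypothetical, not from the source:

```python
def interpret_assay(experimental_died, positive_control_survived, negative_control_died):
    """Interpret a bacterial drug assay only if both controls behaved as expected."""
    if not positive_control_survived:
        # Resistant bacteria should survive; if they died, the growth
        # medium or conditions are suspect.
        return "invalid: check growth conditions"
    if not negative_control_died:
        # Susceptible bacteria should die; if they survived, the drug
        # exposure itself may have failed.
        return "invalid: check drug exposure"
    return "drug effective" if experimental_died else "drug ineffective"
```

Only when both controls pass does the experimental outcome carry meaning, which is exactly why both control groups are included.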

References

  • Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9.
  • Chaplin, S. (2006). “The placebo response: an important part of treatment”.  Prescriber . 17 (5): 16–22. doi: 10.1002/psb.344
  • Hinkelmann, Klaus; Kempthorne, Oscar (2008).  Design and Analysis of Experiments, Volume I: Introduction to Experimental Design  (2nd ed.). Wiley. ISBN 978-0-471-72756-9.
  • Pithon, M.M. (2013). “Importance of the control group in scientific research.” Dental Press J Orthod . 18 (6):13-14. doi: 10.1590/s2176-94512013000600003
  • Stigler, Stephen M. (1992). “A Historical View of Statistical Concepts in Psychology and Educational Research”. American Journal of Education . 101 (1): 60–70. doi: 10.1086/444032


Understanding Control Groups for Research


Introduction

What are control groups in research?

A control group is typically thought of as the baseline in an experiment. In an experiment, clinical trial, or other sort of controlled study, there are at least two groups whose results are compared against each other.

The experimental group receives some sort of treatment, and their results are compared against those of the control group, which is not given the treatment. This is important to determine whether there is an identifiable causal relationship between the treatment and the resulting effects.

As intuitive as this may sound, there is an entire methodology that is useful for understanding the role of the control group in experimental research and as part of a broader concept in research. This article will examine the particulars of that methodology so you can design your research more rigorously.


Suppose that a friend or colleague of yours has a headache. You give them some over-the-counter medicine to relieve some of the pain. Shortly after they take the medicine, the pain is gone and they feel better. In casual settings, we can assume that it must be the medicine that was the cause of their headache going away.

In scientific research, however, we don't really know if the medicine made a difference or if the headache would have gone away on its own. Maybe in the time it took for the headache to go away, they ate or drank something that might have had an effect. Perhaps they had a quick nap that helped relieve the tension from the headache. Without rigorously exploring this phenomenon, any number of confounding factors exist that can make us question the actual efficacy of any particular treatment.

Experimental research relies on observing differences between the two groups by "controlling" the independent variable, or in the case of our example above, the medicine that is given or not given depending on the group. The dependent variable in this case is the change in how the person suffering the headache feels, and the difference between taking and not taking the medicine is evidence (or lack thereof) that the treatment is effective.

The catch is that, between the control group and other groups (typically called experimental groups), it's important to ensure that all other factors are the same or at least as similar as possible. Things such as age, fitness level, and even occupation can affect the likelihood someone has a headache and whether a certain medication is effective.

Faced with this dynamic, researchers try to make sure that participants in their control group and experimental group are as similar as possible to each other, with the only difference being the treatment they receive.

Experimental research is often associated with scientists in lab coats holding beakers containing liquids with funny colors. Clinical trials that deal with medical treatments rely primarily, if not exclusively, on experimental research designs involving comparisons between control and experimental groups.

However, many studies in the social sciences also employ some sort of experimental design which calls for the use of control groups. This type of research is useful when researchers are trying to confirm or challenge an existing notion or measure the difference in effects.

Workplace efficiency research

How might a company know if an employee training program is effective? They may decide to pilot the program with a small group of their employees before implementing the training across their entire workforce.

If they adopt an experimental design, they could compare results between an experimental group of workers who participate in the training program against a control group who continues as per usual without any additional training.


Mental health research

Music certainly has profound effects on psychology, but what kind of music would be most effective for concentration? Here, a researcher might be interested in having participants in a control group perform a series of tasks in an environment with no background music, and participants in multiple experimental groups perform those same tasks with background music of different genres. The subsequent analysis could determine how well people perform with classical music, jazz music, or no music at all in the background.

Educational research

Suppose that you want to improve reading ability among elementary school students, and there is research on a particular teaching method that is associated with facilitating reading comprehension. How do you measure the effects of that teaching method?

A study could be conducted on two groups of otherwise equally proficient students to measure the difference in test scores. The teacher delivers the same instruction to the control group as they have to previous students, but they teach the experimental group using the new technique. A reading test after a certain amount of instruction could determine the extent of effectiveness of the new teaching method.


As you can see from the three examples above, experimental groups are the counterbalance to control groups. A control group offers an essential point of comparison. For an experimental study to be considered credible, it must establish a baseline against which novel research is conducted.

Researchers can determine the makeup of their experimental and control groups from their literature review. Remember that the objective of a review is to establish what is known about the object of inquiry and what is not known. Where experimental groups explore the unknown aspects of scientific knowledge, a control group is a sort of simulation of what would happen if the treatment or intervention were not administered. As a result, researchers benefit from a foundational knowledge of the existing research when creating a credible control group against which experimental results are compared. In particular, the literature helps them stay sensitive to relevant participant characteristics that could confound the effects of a treatment or intervention, so that participants can be appropriately distributed between the experimental and control groups.

There are multiple control groups to consider depending on the study you are looking to conduct. All of them are variations of the basic control group used to establish a baseline for experimental conditions.

No-treatment control group

This kind of control group is common when trying to establish the effects of an experimental treatment against the absence of treatment. This is arguably the most straightforward approach to an experimental design as it aims to directly demonstrate how a certain change in conditions produces an effect.

Placebo control group

In this case, the control group receives some sort of treatment under the exact same procedures as the experimental group. The only difference is that the treatment given to the placebo control group is known to be ineffective; the research participants, however, don't know that it is ineffective.

Placebo control groups (or negative control groups) are useful for allowing researchers to account for any psychological or affective factors that might impact the outcomes. The negative control group exists to explicitly eliminate factors other than changes in the independent variable conditions as causes of the effects experienced in the experimental group.

Positive control group

Contrasted with a no-treatment control group, a positive control group employs a treatment against which the treatment in the experimental group is compared. However, unlike in a placebo group, participants in a positive control group receive treatment that is known to have an effect.

If we were to use our first example of headache medicine, a researcher could compare results between medication that is commonly known as effective against the newer medication that the researcher thinks is more effective. Positive control groups are useful for validating experimental results when compared against familiar results.

Historical control group

Rather than study participants in control group conditions, researchers may employ existing data to create historical control groups. This form of control group is useful for examining changing conditions over time, particularly when incorporating past conditions that can't be replicated in the analysis.

Qualitative research more often relies on non-experimental research such as observations and interviews to examine phenomena in their natural environments. This sort of research is more suited for inductive and exploratory inquiries, not confirmatory studies meant to test or measure a phenomenon.

That said, the broader concept of a control group is still present in observational and interview research in the form of a comparison group. Comparison groups are used in qualitative research designs to show differences between phenomena, with the exception being that there is no baseline against which data is analyzed.

Comparison groups are useful when an experimental environment cannot produce results that would be applicable to real-world conditions. Research inquiries examining the social world face challenges of having too many variables to control, making observations and interviews across comparable groups more appropriate for data collection than clinical or sterile environments.


Frequently asked questions

What is the difference between a control group and an experimental group?

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group. As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased.

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.


Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)
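The referral process described above can be sketched as a loop over "waves" of recruitment. A minimal Python sketch over a hypothetical referral network; the `snowball_sample` helper and its parameters are assumptions for illustration:

```python
import random

def snowball_sample(seeds, referrals, waves=2, per_person=2, seed=0):
    """Grow a sample by following referral chains from initial seed participants.

    `referrals` maps each participant to the people they could refer;
    this network is hypothetical and would be discovered during recruitment.
    """
    rng = random.Random(seed)
    sampled = list(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            contacts = [c for c in referrals.get(person, []) if c not in sampled]
            picked = rng.sample(contacts, min(per_person, len(contacts)))
            sampled.extend(picked)
            next_frontier.extend(picked)
        frontier = next_frontier
    return sampled
```

Note that anyone not reachable through a referral chain can never enter the sample, which is precisely why snowball sampling is non-random and prone to sampling bias.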

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
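The stratified half of this comparison can be sketched directly: divide the population into strata, then draw a random sample from each. A minimal Python sketch; the `stratified_sample` helper and its parameters are my own illustration, not a library API:

```python
import random

def stratified_sample(population, strata_key, frac, seed=1):
    """Probability sampling: draw a simple random sample from each stratum.

    `strata_key` assigns each unit to a stratum; `frac` is the sampling
    fraction applied within every stratum.
    """
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for units in strata.values():
        k = max(1, round(frac * len(units)))
        sample.extend(rng.sample(units, k))
    return sample
```

Quota sampling would replace the `rng.sample` call with a non-random rule (e.g., take the first k conveniently available units per stratum), which is exactly the probability vs. non-probability distinction drawn above.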

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity, because it covers all of the other types. You need to have face validity, content validity, and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity; the other three are face validity, content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments. It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).
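To make the left-hand-side/right-hand-side terminology concrete, here is a minimal least-squares sketch with invented data (the hours-studied and exam-score numbers are hypothetical):

```python
# The dependent (response) variable y sits on the left-hand side of
# y = b0 + b1 * x, and the independent (predictor) variable x on the right.

def simple_ols(x, y):
    """Return intercept b0 and slope b1 for the fit y = b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    b0 = my - b1 * mx
    return b0, b1

hours_studied = [1, 2, 3, 4, 5]       # independent / predictor / right-hand side
exam_score    = [52, 55, 61, 64, 68]  # dependent / response / left-hand side

b0, b1 = simple_ols(hours_studied, exam_score)
print(f"exam_score ≈ {b0:.1f} + {b1:.1f} * hours_studied")
```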

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by carefully writing high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess: it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent review process they undergo before publication.

In general, the peer review process follows these steps: 

  • First, the author submits the manuscript to the editor.
  • The editor then decides whether to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. This type of research often builds on exploratory studies, and its findings can serve as a jumping-off point for further research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
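As a sketch of these steps, the snippet below cleans a small set of invented survey records: it removes duplicates, standardizes inconsistent labels, and screens out missing values and implausible outliers. The data, field names, and plausibility range are all assumptions for illustration:

```python
raw = [
    {"id": 1, "country": "USA",  "weight_kg": 70},
    {"id": 2, "country": "usa",  "weight_kg": 68},
    {"id": 2, "country": "usa",  "weight_kg": 68},   # duplicate entry
    {"id": 3, "country": "U.S.", "weight_kg": None}, # missing value
    {"id": 4, "country": "USA",  "weight_kg": 700},  # implausible outlier
]

# 1. Screen for and remove duplicate records (same id).
seen, deduped = set(), []
for row in raw:
    if row["id"] not in seen:
        seen.add(row["id"])
        deduped.append(row)

# 2. Standardize inconsistently formatted country labels to one code.
for row in deduped:
    key = row["country"].lower().replace(".", "")
    row["country"] = {"usa": "US", "us": "US"}.get(key, row["country"])

# 3. Diagnose missing values and outliers (weights outside an assumed
#    plausible 30–200 kg range), then remove them from the dataset.
clean = [r for r in deduped
         if r["weight_kg"] is not None and 30 <= r["weight_kg"] <= 200]

print([r["id"] for r in clean])  # → [1, 2]
```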

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can have important practical consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.
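A minimal sketch of this mixing, assuming an invented state → city → resident hierarchy, might use simple random sampling at every stage:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical hierarchy: states → cities → residents (invented data).
population = {
    f"state_{s}": {
        f"city_{s}_{c}": [f"resident_{s}_{c}_{r}" for r in range(50)]
        for c in range(4)
    }
    for s in range(5)
}

# Stage 1: simple random sample of states (clusters).
states = random.sample(sorted(population), k=2)

# Stage 2: simple random sample of cities within each chosen state.
cities = [(s, c) for s in states
          for c in random.sample(sorted(population[s]), k=2)]

# Stage 3: simple random sample of residents within each chosen city.
sample = [r for s, c in cities
          for r in random.sample(population[s][c], k=5)]

print(len(sample))  # 2 states × 2 cities × 5 residents = 20
```

In practice you could swap in systematic or stratified sampling at any stage, depending on what is relevant to your study.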

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .
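The distinction can be seen numerically. In this sketch with invented data, both datasets fit a straight line equally well (r ≈ 1), yet their regression slopes differ by a factor of 100:

```python
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

x = [1, 2, 3, 4]
shallow = [10.1, 10.2, 10.3, 10.4]  # nearly flat line
steep   = [10, 20, 30, 40]          # much steeper line

print(pearson_r(x, shallow), slope(x, shallow))  # r ≈ 1.0, slope ≈ 0.1
print(pearson_r(x, steep), slope(x, steep))      # r ≈ 1.0, slope ≈ 10.0
```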

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data are from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize bias from order effects.
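One simple way to randomize order between respondents is to shuffle the same question list independently for each respondent. This sketch uses invented questions; seeding by respondent ID is just one way to keep each respondent’s order reproducible:

```python
import random

questions = [
    "How satisfied are you with the product?",
    "How likely are you to recommend it?",
    "How would you rate the price?",
    "How easy was the product to use?",
]

def questionnaire_for(respondent_id):
    """Return the same questions in a per-respondent shuffled order."""
    rng = random.Random(respondent_id)  # reproducible per respondent
    order = questions[:]                # copy; leave the master list intact
    rng.shuffle(order)
    return order

print(questionnaire_for(1))
print(questionnaire_for(2))  # usually a different order
```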

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship in which A relates to B, but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy.

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
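A small simulation can make the difference concrete. In this sketch (all numbers invented), random noise averages out over many readings, while a constant calibration offset shifts every reading in the same direction:

```python
import random

random.seed(0)
TRUE_WEIGHT = 70.0  # kg, the value we're trying to measure (invented)

# Random error: each reading is off by a chance amount in either direction.
random_readings = [TRUE_WEIGHT + random.gauss(0, 2) for _ in range(10_000)]

# Systematic error: a miscalibrated scale adds a constant 3 kg to every
# reading (on top of the same random noise).
biased_readings = [TRUE_WEIGHT + 3 + random.gauss(0, 2) for _ in range(10_000)]

mean_random = sum(random_readings) / len(random_readings)
mean_biased = sum(biased_readings) / len(biased_readings)

print(f"random error only:     mean ≈ {mean_random:.2f}")  # clusters near 70
print(f"with systematic error: mean ≈ {mean_biased:.2f}")  # skewed toward 73
```

The first mean clusters around the true value, which is why random error shrinks with large samples; the second stays offset no matter how many readings you take.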

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
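For example, crossing the levels of two invented independent variables (caffeine dose and hours of sleep) yields every condition of a 2 × 3 factorial design:

```python
from itertools import product

caffeine = ["0 mg", "100 mg"]     # IV 1, two levels (hypothetical)
sleep    = ["4 h", "6 h", "8 h"]  # IV 2, three levels (hypothetical)

# Each level of one variable is combined with each level of the other.
conditions = list(product(caffeine, sleep))
for c in conditions:
    print(c)

print(len(conditions))  # 2 × 3 = 6 conditions
```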

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue
  • Shorter study duration

Disadvantages:

  • Needs larger samples for high power
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
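As an illustration, the shuffle-and-deal approach can be sketched in Python (the function name and participant labels here are hypothetical):

```python
import random

def randomly_assign(participants, groups=("control", "experimental")):
    """Randomly assign each participant to one of the given groups.

    Shuffling and then dealing round-robin keeps group sizes balanced,
    which a per-participant coin flip would not guarantee.
    """
    pool = list(participants)
    random.shuffle(pool)
    return {p: groups[i % len(groups)] for i, p in enumerate(pool)}

# Example: 6 participants split evenly across two groups.
assignment = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"])
```

Because the pool is shuffled before dealing, which participants land in which group is random, while each group still receives an equal share.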

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable.
  • When it's statistically accounted for, the observed correlation between the independent and dependent variables becomes weaker, because the mediator explains part (or all) of the effect.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your sampling interval, k , by dividing the population size by your target sample size.
  • Choose every k th member of the population as your sample.
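The three steps above can be sketched in Python (assuming the population list is not in a cyclical or periodic order; the names here are illustrative):

```python
import random

def systematic_sample(population, sample_size):
    """Select every k-th member after a random starting point,
    where k = population size // target sample size."""
    k = len(population) // sample_size           # sampling interval
    start = random.randrange(k)                  # random start within the first interval
    return [population[i] for i in range(start, len(population), k)][:sample_size]

members = [f"person_{i}" for i in range(100)]
sample = systematic_sample(members, 10)          # k = 10, so every 10th person
```

Starting from a random offset (rather than always from the first member) gives every member of the population a chance of selection.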

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
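A minimal sketch of this two-step procedure in Python, using illustrative data (education level as the stratifying characteristic, and simple random sampling within each stratum):

```python
import random
from collections import defaultdict

def stratified_sample(subjects, stratum_of, n_per_stratum):
    """Group subjects into strata, then simple-random-sample within each stratum."""
    strata = defaultdict(list)
    for s in subjects:
        strata[stratum_of(s)].append(s)            # step 1: divide into strata
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, n_per_stratum))  # step 2: sample each
    return sample

# Illustrative population: 300 people, 100 in each education stratum.
people = [{"id": i, "education": ["HS", "BA", "MA"][i % 3]} for i in range(300)]
sample = stratified_sample(people, lambda p: p["education"], 10)
```

Because every stratum contributes a fixed number of subjects, no subgroup can be accidentally underrepresented in the sample.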

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
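A rough sketch of single-stage versus double-stage clustering in Python, with hypothetical school clusters (the function name and data are illustrative):

```python
import random

def cluster_sample(clusters, n_clusters, units_per_cluster=None):
    """Randomly pick whole clusters; optionally sub-sample units within them."""
    chosen = random.sample(clusters, n_clusters)   # stage 1: select clusters
    if units_per_cluster is None:
        # single-stage: collect data from every unit in the chosen clusters
        return [unit for cluster in chosen for unit in cluster]
    # double-stage: randomly select units from within each chosen cluster
    return [unit for cluster in chosen
            for unit in random.sample(cluster, units_per_cluster)]

# Illustrative population: 20 schools ("clusters") of 30 pupils each.
schools = [[f"s{c}_{i}" for i in range(30)] for c in range(20)]
single = cluster_sample(schools, 4)        # every pupil in 4 schools
double = cluster_sample(schools, 4, 10)    # 10 pupils from each of 4 schools
```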

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
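In code, simple random sampling is essentially a one-liner; this Python sketch uses an illustrative numeric population:

```python
import random

# Illustrative population of 10,000 members; each has an equal chance of selection.
population = list(range(10_000))
sample = random.sample(population, 500)   # simple random sample of 500, no repeats
```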

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
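As a sketch of the distinction, individual (ordinal) items can be summarized with a median, while summed scale scores treated as interval data can be averaged. The data here are illustrative:

```python
from statistics import median, mean

# Responses to four Likert-type items (1 = strongly disagree ... 5 = strongly agree)
responses = [
    [4, 5, 4, 3],   # participant 1
    [2, 2, 3, 2],   # participant 2
    [5, 4, 5, 5],   # participant 3
]

# Individual items are ordinal: summarize with the median, not the mean.
item1 = [r[0] for r in responses]        # first item's scores: 4, 2, 5
item1_median = median(item1)

# Summed scale scores are sometimes treated as interval: a mean is defensible.
scale_scores = [sum(r) for r in responses]   # 16, 9, 19
scale_mean = mean(scale_scores)
```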

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements and a continuum of possible responses, usually with 5 or 7 points, to capture their degree of agreement.

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
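One simple way to calculate how likely a pattern "could have arisen by chance" is a permutation test, sketched here in plain Python with illustrative data (this is just one of many hypothesis-testing procedures, not the only one):

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=42):
    """Approximate the p-value for a difference in group means by
    repeatedly shuffling the group labels and re-computing the difference."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations   # share of shuffles at least as extreme

# Illustrative measurements for a treatment and a control group.
p = permutation_test([12.1, 11.8, 12.4, 12.0], [10.2, 10.6, 10.1, 10.4])
```

A small p-value means that a difference this large almost never appears when group labels are assigned at random, so chance alone is an unlikely explanation.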

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
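As an illustration of the matching approach, this Python sketch pairs each treated subject with an unused comparison subject that shares the same values on the listed confounders (the function name and data are hypothetical, and exact matching is only one of several matching strategies):

```python
def match_groups(treated, comparison, confounders):
    """Pair each treated subject with a comparison subject that has the
    same values on all listed confounding variables (exact matching)."""
    pairs, used = [], set()
    for t in treated:
        key = tuple(t[c] for c in confounders)
        for i, c in enumerate(comparison):
            if i not in used and tuple(c[k] for k in confounders) == key:
                pairs.append((t, c))
                used.add(i)        # each comparison subject is matched at most once
                break
    return pairs

treated = [{"id": 1, "age": 30, "sex": "F"}, {"id": 2, "age": 45, "sex": "M"}]
comparison = [{"id": 3, "age": 45, "sex": "M"}, {"id": 4, "age": 30, "sex": "F"},
              {"id": 5, "age": 30, "sex": "M"}]
pairs = match_groups(treated, comparison, ["age", "sex"])
```

Within each matched pair, the subjects differ only in the independent variable, so the listed confounders cannot explain any outcome difference between them.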

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study | Cross-sectional study
--- | ---
Repeated observations | Observations at a single point in time
Observes the same sample multiple times | Observes different samples (a “cross-section”) in the population
Follows changes in participants over time | Provides a snapshot of society at a given point

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.



Biology Dictionary

Experimental Group

BD Editors

Reviewed by: BD Editors

Experimental Group Definition

In a comparative experiment, the experimental group (aka the treatment group) is the group being tested for a reaction to a change in the variable. There may be multiple experimental groups in a study, each testing a different level or amount of the variable. The other type of group, the control group , can show the effects of the variable by having a set amount of the variable, or none at all. The experimental groups vary in the level of variable they are exposed to, which shows the effects of various levels of a variable on similar organisms.

In biological experiments, the subjects being studied are often living organisms. In such cases, it is desirable that all the subjects be closely related, in order to reduce the amount of genetic variation present in the experiment. The complicated interactions between genetics and the environment can cause organisms exposed to the same variable to respond in very different ways. If the organisms being tested are not closely related, the results could reflect the effects of genetics rather than of the variable. This is why new human drugs must be rigorously tested in a variety of animals before they can be tested on humans. These different experimental groups allow researchers to see the effects of their drug on different genetic backgrounds. By using animals that are progressively closer in their relation to humans, human trials can eventually take place without severe risks for the first people to try the drug.

Examples of Experimental Group

A Simple Experiment

A student is conducting an experiment on the effects music has on growing plants. The student wants to know if music can help plants grow and, if so, which type of music the plants prefer. The student divides a group of plants into two main groups: the control group and the experimental group. The control group will be kept in a room with no music, while the experimental group will be further divided into smaller experimental groups. Each of the experimental groups is placed in a separate room with a different type of music.

Ideally, each room would have many plants in it, and all the plants used in the experiment would be clones of the same plant. Even more ideally, the plant would breed true, or would be homozygous for all genes. This would introduce the smallest amount of genetic variation into the experiment. By limiting all other variables, such as the temperature and humidity, the experiment can determine with validity that the effects produced in each room are attributable to the music, and nothing else.

Bugs in the River

To study the effects of a variable on many organisms at once, scientists sometimes study ecosystems as a whole. The productivity of these ecosystems is often determined by the amount of oxygen they produce, which is an indication of how much algae is present. Ecologists sometimes study the interactions of organisms within these environments by excluding or adding organisms to an experimental group of ecosystems, and they test the effects of their variable against ecosystems that have not been tampered with. This method can sometimes show the drastic effects that various organisms have on an ecosystem.

Many experiments of this kind take place, and a common theme is to separate a single ecosystem into parts, with artificial divisions. Thus, a river could be separated by netting it into areas with and without bugs. The area with no nets allows bugs into the water. The bugs not only eat algae, but die and provide nutrients for the algae to grow. Without the bugs, various effects can be seen on the experimental portion of the river, covered by netting. The levels of oxygen in the water in each system can be measured, as well as other indicators of water quality. By comparing these groups, ecologists can begin to discern the complex relationships between populations of organisms in the environment.

Related Biology Terms

  • Control Group – The group that remains unchanged during the experiment, to provide comparison.
  • Scientific Method – The process scientists use to obtain valid, repeatable results.
  • Comparative Experiment – An experiment in which two groups, the control and experimental groups, are compared.
  • Validity – A measure of whether an experiment's results were caused by changes in the variable or simply by chance.


Encyclopedia Britannica

  • History & Society
  • Science & Tech
  • Biographies
  • Animals & Nature
  • Geography & Travel
  • Arts & Culture
  • Games & Quizzes
  • On This Day
  • One Good Fact
  • New Articles
  • Lifestyles & Social Issues
  • Philosophy & Religion
  • Politics, Law & Government
  • World History
  • Health & Medicine
  • Browse Biographies
  • Birds, Reptiles & Other Vertebrates
  • Bugs, Mollusks & Other Invertebrates
  • Environment
  • Fossils & Geologic Time
  • Entertainment & Pop Culture
  • Sports & Recreation
  • Visual Arts
  • Demystified
  • Image Galleries
  • Infographics
  • Top Questions
  • Britannica Kids
  • Saving Earth
  • Space Next 50
  • Student Center
  • Where was science invented?
  • When did science begin?

Blackboard inscribed with scientific formulas and calculations in physics and mathematics

control group

control group , the standard to which comparisons are made in an experiment. Many experiments are designed to include a control group and one or more experimental groups; in fact, some scholars reserve the term experiment for study designs that include a control group. Ideally, the control group and the experimental groups are identical in every way except that the experimental groups are subjected to treatments or interventions believed to have an effect on the outcome of interest while the control group is not. Inclusion of a control group greatly strengthens researchers’ ability to draw conclusions from a study. Indeed, only in the presence of a control group can a researcher determine whether a treatment under investigation truly has a significant effect on an experimental group, and the possibility of making an erroneous conclusion is reduced. See also scientific method .

A typical use of a control group is in an experiment in which the effect of a treatment is unknown and comparisons between the control group and the experimental group are used to measure the effect of the treatment. For instance, in a pharmaceutical study to determine the effectiveness of a new drug on the treatment of migraines , the experimental group will be administered the new drug and the control group will be administered a placebo (a drug that is inert, or assumed to have no effect). Each group is then given the same questionnaire and asked to rate the effectiveness of the drug in relieving symptoms . If the new drug is effective, the experimental group is expected to have a significantly better response to it than the control group. Another possible design is to include several experimental groups, each of which is given a different dosage of the new drug, plus one control group. In this design, the analyst will compare results from each of the experimental groups to the control group. This type of experiment allows the researcher to determine not only if the drug is effective but also the effectiveness of different dosages. In the absence of a control group, the researcher’s ability to draw conclusions about the new drug is greatly weakened, due to the placebo effect and other threats to validity. Comparisons between the experimental groups with different dosages can be made without including a control group, but there is no way to know if any of the dosages of the new drug are more or less effective than the placebo.
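
The multiple-dosage design described above can be sketched in code. This is a minimal simulation, not data from any real trial: the group means, sample sizes, and relief scale are all assumptions made for illustration.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_group(mean_relief, n=30, sd=1.5):
    """Draw n hypothetical symptom-relief scores (0-10 scale) for one group."""
    return [random.gauss(mean_relief, sd) for _ in range(n)]

# Assumed true group means, invented for illustration only.
groups = {
    "placebo (control)": simulate_group(3.0),
    "low dose": simulate_group(4.0),
    "medium dose": simulate_group(4.8),
    "high dose": simulate_group(5.2),
}

# Each experimental group is compared against the same control group.
control_mean = statistics.mean(groups["placebo (control)"])
for name, scores in groups.items():
    diff = statistics.mean(scores) - control_mean
    print(f"{name:18s} mean relief {statistics.mean(scores):4.2f} "
          f"(difference vs control {diff:+.2f})")
```

Without the placebo row there would be no baseline; each dosage could only be compared with the other dosages, as the paragraph above notes.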

It is important that every aspect of the experimental environment be as alike as possible for all subjects in the experiment. If conditions are different for the experimental and control groups, it is impossible to know whether differences between groups are actually due to the difference in treatments or to the difference in environment. For example, in the new migraine drug study, it would be a poor study design to administer the questionnaire to the experimental group in a hospital setting while asking the control group to complete it at home. Such a study could lead to a misleading conclusion, because differences in responses between the experimental and control groups could have been due to the effect of the drug or could have been due to the conditions under which the data were collected. For instance, perhaps the experimental group received better instructions or was more motivated by being in the hospital setting to give accurate responses than the control group.

In non-laboratory and nonclinical experiments, such as field experiments in ecology or economics , even well-designed experiments are subject to numerous and complex variables that cannot always be managed across the control group and experimental groups. Randomization, in which individuals or groups of individuals are randomly assigned to the treatment and control groups, is an important tool to eliminate selection bias and can aid in disentangling the effects of the experimental treatment from other confounding factors. Appropriate sample sizes are also important.

A control group study can be managed in two different ways. In a single-blind study, the researcher will know whether a particular subject is in the control group, but the subject will not know. In a double-blind study , neither the subject nor the researcher will know which treatment the subject is receiving. In many cases, a double-blind study is preferable to a single-blind study, since the researcher cannot inadvertently affect the results or their interpretation by treating a control subject differently from an experimental subject.

Study Design 101

Randomized Controlled Trial

A study design that randomly assigns participants into an experimental group or a control group. As the study is conducted, the only expected difference between the control and experimental groups in a randomized controlled trial (RCT) is the outcome variable being studied.

Advantages

  • Good randomization will "wash out" any population bias
  • Easier to blind/mask than observational studies
  • Results can be analyzed with well known statistical tools
  • Populations of participating individuals are clearly identified

Disadvantages

  • Expensive in terms of time and money
  • Volunteer biases: the population that participates may not be representative of the whole
  • Loss to follow-up attributed to treatment

Design pitfalls to look out for

An RCT should be a study of one population only.

Was the randomization actually "random", or are there really two populations being studied?

The variables being studied should be the only variables between the experimental group and the control group.

Are there any confounding variables between the groups?

Fictitious Example

To determine how a new type of short wave UVA-blocking sunscreen affects the general health of skin in comparison to a regular long wave UVA-blocking sunscreen, 40 trial participants were randomly separated into equal groups of 20: an experimental group and a control group. All participants' skin health was then initially evaluated. The experimental group wore the short wave UVA-blocking sunscreen daily, and the control group wore the long wave UVA-blocking sunscreen daily.

After one year, the general health of the skin was measured in both groups and statistically analyzed. In the control group, wearing long wave UVA-blocking sunscreen daily led to improvements in general skin health for 60% of the participants. In the experimental group, wearing short wave UVA-blocking sunscreen daily led to improvements in general skin health for 75% of the participants.
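
Using the fictitious numbers above (12 of 20 control participants improved, 15 of 20 experimental participants improved), a standard two-proportion z-test shows why such a small trial is hard to interpret. Applying this test here is our illustration, not part of the original example.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two independent proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Control: 60% of 20 improved; experimental: 75% of 20 improved.
z = two_proportion_z(12, 20, 15, 20)
print(f"z = {z:.2f}")  # about 1.01, below 1.96: not significant at the 5% level
```

A 15-percentage-point difference sounds large, but with only 20 participants per group it is well within what chance alone could produce.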

Real-life Examples

van Der Horst, N., Smits, D., Petersen, J., Goedhart, E., & Backx, F. (2015). The preventive effect of the nordic hamstring exercise on hamstring injuries in amateur soccer players: a randomized controlled trial. The American Journal of Sports Medicine, 43 (6), 1316-1323. https://doi.org/10.1177/0363546515574057

This article reports on the research investigating whether the Nordic Hamstring Exercise is effective in preventing both the incidence and severity of hamstring injuries in male amateur soccer players. Over the course of a year, there was a statistically significant reduction in the incidence of hamstring injuries in players performing the NHE, but for those injured, there was no difference in severity of injury. There was also a high level of compliance in performing the NHE in that group of players.

Natour, J., Cazotti, L., Ribeiro, L., Baptista, A., & Jones, A. (2015). Pilates improves pain, function and quality of life in patients with chronic low back pain: a randomized controlled trial. Clinical Rehabilitation, 29 (1), 59-68. https://doi.org/10.1177/0269215514538981

This study assessed the effect of adding pilates to a treatment regimen of NSAID use for individuals with chronic low back pain. Individuals who included the pilates method in their therapy took fewer NSAIDs and experienced statistically significant improvements in pain, function, and quality of life.

Related Formulas

  • Relative Risk

Related Terms

Blinding/Masking

When participants who have been randomly assigned to groups do not know whether they are in the control group or the experimental group.

Causation

Being able to show that an independent variable directly causes the dependent variable. This is generally very difficult to demonstrate in most study designs.

Confounding Variables

Variables that cause or prevent an outcome from occurring outside of, or along with, the variable being studied. These variables make it difficult or impossible to distinguish the relationship between the variable and the outcome being studied.

Correlation

A relationship between two variables, but not necessarily a causation relationship.

Double Blinding/Masking

When the researchers conducting a blinded study also do not know which participants are in the control group or the experimental group.

Null Hypothesis

The hypothesis that the relationship the researchers expect between the independent and dependent variables does not exist. To "reject the null hypothesis" is to conclude that there is a relationship between the variables.

Population/Cohort

A group that shares the same characteristics among its members (population).

Population Bias/Volunteer Bias

A sample may be skewed by those who are selected or self-selected into a study. If only certain portions of a population are considered in the selection process, the results of a study may have poor validity.

Randomization

Any of a number of mechanisms used to assign participants into different groups with the expectation that these groups will not differ in any significant way other than treatment and outcome.

Research (alternative) Hypothesis

The relationship between the independent and dependent variables that researchers believe they will prove through conducting a study.

Sensitivity

The probability that a test correctly detects an outcome when it is present (the true-positive rate); in other words, the percent chance of not getting a false negative (see formulas).

Specificity

The probability that a test correctly rules out an outcome when it is absent (the true-negative rate); in other words, the percent chance of not getting a false positive (see formulas).

Type 1 error

Rejecting a null hypothesis when it is in fact true. This is also known as an error of commission.

Type 2 error

The failure to reject a null hypothesis when it is in fact false. This is also known as an error of omission.

Now test yourself!

1. Having a volunteer bias in the population group is a good thing because it means the study participants are eager and make the study even stronger.

a) True b) False

2. Why is randomization important to assignment in an RCT?

a) It enables blinding/masking b) So causation may be extrapolated from results c) It balances out individual characteristics between groups. d) a and c e) b and c

© 2011-2019, The Himmelfarb Health Sciences Library Questions? Ask us .

What are Controlled Experiments?

Determining Cause and Effect

A controlled experiment is a highly focused way of collecting data and is especially useful for determining patterns of cause and effect. This type of experiment is used in a wide variety of fields, including medical, psychological, and sociological research. Below, we’ll define what controlled experiments are and provide some examples.

Key Takeaways: Controlled Experiments

  • A controlled experiment is a research study in which participants are randomly assigned to experimental and control groups.
  • A controlled experiment allows researchers to determine cause and effect between variables.
  • One drawback of controlled experiments is that they lack external validity (which means their results may not generalize to real-world settings).

Experimental and Control Groups

To conduct a controlled experiment , two groups are needed: an experimental group and a control group . The experimental group is a group of individuals that are exposed to the factor being examined. The control group, on the other hand, is not exposed to the factor. It is imperative that all other external influences are held constant . That is, every other factor or influence in the situation needs to remain exactly the same between the experimental group and the control group. The only thing that is different between the two groups is the factor being researched.

For example, if you were studying the effects of taking naps on test performance, you could assign participants to two groups: participants in one group would be asked to take a nap before their test, and those in the other group would be asked to stay awake. You would want to ensure that everything else about the groups (the demeanor of the study staff, the environment of the testing room, etc.) would be equivalent for each group. Researchers can also develop more complex study designs with more than two groups. For example, they might compare test performance among participants who had a 2-hour nap, participants who had a 20-minute nap, and participants who didn’t nap.
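
The nap study described above could be analyzed by comparing the two groups' test scores with a t statistic. The sketch below uses only the standard library; the scores are simulated under assumed group means, not real data.

```python
import math
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        var_a / len(a) + var_b / len(b))

random.seed(0)
# Hypothetical test scores: assumed true means of 78 (nap) vs 72 (no nap).
nap_group = [random.gauss(78, 8) for _ in range(25)]
awake_group = [random.gauss(72, 8) for _ in range(25)]

print(f"t = {welch_t(nap_group, awake_group):.2f}")
```

A large positive t would suggest the nap group outperformed the awake group by more than chance variation would explain.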

Assigning Participants to Groups

In controlled experiments, researchers use  random assignment (i.e. participants are randomly assigned to be in the experimental group or the control group) in order to minimize potential confounding variables in the study. For example, imagine a study of a new drug in which all of the female participants were assigned to the experimental group and all of the male participants were assigned to the control group. In this case, the researchers couldn’t be sure if the study results were due to the drug being effective or due to gender—in this case, gender would be a confounding variable.

Random assignment is done in order to ensure that participants are not assigned to experimental groups in a way that could bias the study results. A study that compares two groups but does not randomly assign participants to the groups is referred to as quasi-experimental, rather than a true experiment.
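
Random assignment itself is simple to implement. A minimal sketch follows; the participant IDs and group sizes are invented for illustration.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the participants and split them into two equal groups,
    so assignment is independent of gender or any other trait."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (experimental, control)

people = [f"P{i:02d}" for i in range(20)]
experimental, control = randomly_assign(people, seed=7)
```

Because the shuffle ignores every participant characteristic, traits such as gender are expected to balance out across the two groups on average.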

Blind and Double-Blind Studies

In a blind experiment, participants don’t know whether they are in the experimental or control group. For example, in a study of a new experimental drug, participants in the control group may be given a pill (known as a placebo ) that has no active ingredients but looks just like the experimental drug. In a double-blind study , neither the participants nor the experimenter knows which group the participant is in (instead, someone else on the research staff is responsible for keeping track of group assignments). Double-blind studies prevent the researcher from inadvertently introducing sources of bias into the data collected.

Example of a Controlled Experiment

If you were interested in studying whether or not violent television programming causes aggressive behavior in children, you could conduct a controlled experiment to investigate. In such a study, the dependent variable would be the children’s behavior, while the independent variable would be exposure to violent programming. To conduct the experiment, you would expose an experimental group of children to a movie containing a lot of violence, such as martial arts or gun fighting. The control group, on the other hand, would watch a movie that contained no violence.

To test the aggressiveness of the children, you would take two measurements : one pre-test measurement made before the movies are shown, and one post-test measurement made after the movies are watched. Pre-test and post-test measurements should be taken of both the control group and the experimental group. You would then use statistical techniques to determine whether the experimental group showed a significantly greater increase in aggression, compared to participants in the control group.

Studies of this sort have been done many times and they usually find that children who watch a violent movie are more aggressive afterward than those who watch a movie containing no violence.
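
The pre-test/post-test comparison works on change scores. A toy example with invented aggression ratings (the numbers are ours, not from any actual study):

```python
import statistics

# Hypothetical aggression ratings (higher = more aggressive); the values
# are invented purely to illustrate the pre/post comparison.
violent_pre = [4, 5, 3, 6, 4]
violent_post = [7, 8, 5, 9, 6]
control_pre = [5, 4, 4, 6, 3]
control_post = [5, 5, 4, 6, 4]

def mean_change(pre, post):
    """Average post-test minus pre-test change across participants."""
    return statistics.mean(b - a for a, b in zip(pre, post))

violent_change = mean_change(violent_pre, violent_post)  # 2.6
control_change = mean_change(control_pre, control_post)  # 0.4
print(f"violent group change: {violent_change}, control change: {control_change}")
```

In a real analysis the difference between these mean changes would then be tested statistically, as the paragraph above describes.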

Strengths and Weaknesses

Controlled experiments have both strengths and weaknesses. Among the strengths is the fact that results can establish causation. That is, they can determine cause and effect between variables. In the above example, one could conclude that being exposed to representations of violence causes an increase in aggressive behavior. This kind of experiment can also zero in on a single independent variable, since all other factors in the experiment are held constant.

On the downside, controlled experiments can be artificial. That is, they are done, for the most part, in a manufactured laboratory setting and therefore tend to eliminate many real-life effects. As a result, analysis of a controlled experiment must include judgments about how much the artificial setting has affected the results. Results from the example given might be different if, say, the children studied had a conversation about the violence they watched with a respected adult authority figure, like a parent or teacher, before their behavior was measured. Because of this, controlled experiments can sometimes have lower external validity (that is, their results might not generalize to real-world settings).

Updated  by Nicki Lisa Cole, Ph.D.

What is a Randomized Control Trial (RCT)?

By Julia Simkus (BA Psychology, Princeton University); edited by Saul McLeod, PhD, and Olivia Guy-Evans, MSc.

A randomized control trial (RCT) is a type of study design that involves randomly assigning participants to either an experimental group or a control group to measure the effectiveness of an intervention or treatment.

Randomized Controlled Trials (RCTs) are considered the “gold standard” in medical and health research due to their rigorous design.

Control Group

A control group consists of participants who do not receive the treatment or intervention under study; instead, they receive a placebo or a reference treatment. The control participants serve as a comparison group.

The control group is matched as closely as possible to the experimental group, including age, gender, social class, ethnicity, etc.

Because the participants are randomly assigned, the characteristics between the two groups should be balanced, enabling researchers to attribute any differences in outcome to the study intervention.

Since researchers can be confident that any differences between the control and treatment groups are due solely to the effects of the treatments, scientists view RCTs as the gold standard for clinical trials.

Random Allocation

Random allocation and random assignment are terms used interchangeably in the context of a randomized controlled trial (RCT).

Both refer to assigning participants to different groups in a study (such as a treatment group or a control group) in a way that is completely determined by chance.

The process of random assignment controls for confounding variables , ensuring differences between groups are due to chance alone.

Without randomization, researchers might consciously or subconsciously assign patients to a particular group for various reasons.

Several methods can be used for randomization in a Randomized Control Trial (RCT). Here are a few examples:

  • Simple Randomization: This is the simplest method, like flipping a coin. Each participant has an equal chance of being assigned to any group. This can be achieved using random number tables, computerized random number generators, or drawing lots or envelopes.
  • Block Randomization: In this method, participants are randomized within blocks, ensuring that each block has an equal number of participants in each group. This helps to balance the number of participants in each group at any given time during the study.
  • Stratified Randomization: This method is used when researchers want to ensure that certain subgroups of participants are equally represented in each group. Participants are divided into strata, or subgroups, based on characteristics like age or disease severity, and then randomized within these strata.
  • Cluster Randomization: In this method, groups of participants (like families or entire communities), rather than individuals, are randomized.
  • Adaptive Randomization: In this method, the probability of being assigned to each group changes based on the participants already assigned to each group. For example, if more participants have been assigned to the control group, new participants will have a higher probability of being assigned to the experimental group.

Computer software can generate random numbers or sequences that can be used to assign participants to groups in a simple randomization process.

For more complex methods like block, stratified, or adaptive randomization, computer algorithms can be used to consider the additional parameters and ensure that participants are assigned to groups appropriately.

Using a computerized system can also help to maintain the integrity of the randomization process by preventing researchers from knowing in advance which group a participant will be assigned to (a principle known as allocation concealment). This can help to prevent selection bias and ensure the validity of the study results .
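
Of the methods listed above, block randomization is straightforward to sketch. The version below keeps the two groups balanced at every point during enrollment; the group labels, block size, and sample size are illustrative assumptions.

```python
import random

def block_randomize(n_participants, block_size=4, seed=None):
    """Block randomization: each block contains an equal number of
    'treatment' and 'control' slots in a random order, so the two
    groups stay balanced throughout enrollment."""
    if block_size % 2 != 0:
        raise ValueError("block_size must be even")
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = (["treatment"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)  # random order within the block
        assignments.extend(block)
    return assignments[:n_participants]

sequence = block_randomize(12, block_size=4, seed=3)
print(sequence)
```

Generating the sequence up front, before any participant enrolls, is also what makes allocation concealment possible: the enrolling researcher never needs to see what comes next.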

Allocation Concealment

Allocation concealment is a technique to ensure the random allocation process is truly random and unbiased.

In a drug trial, for example, allocation concealment keeps hidden which patients will get the real medicine and which will get a placebo (an inert look-alike) until the moment each patient is enrolled.

It involves keeping the sequence of group assignments (i.e., who gets assigned to the treatment group and who gets assigned to the control group next) hidden from the researchers before a participant has enrolled in the study.

This helps to prevent the researchers from consciously or unconsciously selecting certain participants for one group or the other based on their knowledge of which group is next in the sequence.

Allocation concealment ensures that the investigator does not know in advance which treatment the next person will get, thus maintaining the integrity of the randomization process.

Blinding (Masking)

Blinding, or masking, refers to withholding information regarding the group assignments (who is in the treatment group and who is in the control group) from the participants, the researchers, or both during the study.

A blinded study prevents the participants from knowing about their treatment to avoid bias in the research. Any information that can influence the subjects is withheld until the completion of the research.

Blinding can be imposed on any participant in an experiment, including researchers, data collectors, evaluators, technicians, and data analysts.

Good blinding can eliminate experimental biases arising from the subjects’ expectations, observer bias, confirmation bias, researcher bias, observer’s effect on the participants, and other biases that may occur in a research test.

In a double-blind study , neither the participants nor the researchers know who is receiving the drug or the placebo. When a participant is enrolled, they are randomly assigned to one of the two groups. The medication they receive looks identical whether it’s the drug or the placebo.

Figure 1 . Evidence-based medicine pyramid. The levels of evidence are appropriately represented by a pyramid as each level, from bottom to top, reflects the quality of research designs (increasing) and quantity (decreasing) of each study design in the body of published literature. For example, randomized control trials are higher quality and more labor intensive to conduct, so there is a lower quantity published.

Research Designs

The choice of design should be guided by the research question, the nature of the treatments or interventions being studied, practical considerations (like sample size and resources), and ethical considerations (such as ensuring all participants have access to potentially beneficial treatments).

The goal is to select a design that provides the most valid and reliable answers to your research questions while minimizing potential biases and confounds.

1. Between-participants randomized designs

Between-participant design involves randomly assigning participants to different treatment conditions. In its simplest form, it has two groups: an experimental group receiving the treatment and a control group.

With more than two levels, multiple treatment conditions are compared. The key feature is that each participant experiences only one condition.

This design allows for clear comparison between groups without worrying about order effects or carryover effects.

It’s particularly useful for treatments that have lasting impacts or when experiencing one condition might influence how participants respond to subsequent conditions.

A study testing a new antidepressant medication might randomly assign 100 participants to either receive the new drug or a placebo.

The researchers would then compare depression scores between the two groups after a specified treatment period to determine if the new medication is more effective than the placebo.

Use this design when:

  • You want to compare the effects of different treatments or interventions
  • Carryover effects are likely (e.g., learning effects or lasting physiological changes)
  • The treatment effect is expected to be permanent
  • You have a large enough sample size to ensure groups are equivalent through randomization

2. Factorial designs

Factorial designs investigate the effects of two or more independent variables simultaneously. They allow researchers to study both main effects of each variable and interaction effects between variables.

These can be between-participants (different groups for each combination of conditions), within-participants (all participants experience all conditions), or mixed (combining both approaches).

Factorial designs allow researchers to examine how different factors combine to influence outcomes, providing a more comprehensive understanding of complex phenomena.

They’re more efficient than running separate studies for each variable and can reveal important interactions that might be missed in simpler designs.

A study examining the effects of both exercise intensity (high vs. low) and diet type (high-protein vs. high-carb) on weight loss might use a 2×2 factorial design.

Participants would be randomly assigned to one of four groups: high-intensity exercise with high-protein diet, high-intensity exercise with high-carb diet, low-intensity exercise with high-protein diet, or low-intensity exercise with high-carb diet.

Use this design when:

  • You want to study the effects of multiple independent variables simultaneously
  • You’re interested in potential interactions between variables
  • You want to increase the efficiency of your study by testing multiple hypotheses at once
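
Assignment for the 2×2 exercise-and-diet example above can be sketched as follows; the participant IDs, cell sizes, and condition names are invented for illustration.

```python
import itertools
import random

# The four cells of the assumed 2x2 design (exercise intensity x diet type).
conditions = list(itertools.product(["high-intensity", "low-intensity"],
                                    ["high-protein", "high-carb"]))

def assign_factorial(participants, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into the cells
    so every combination of factor levels gets an equal-sized group."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

groups = assign_factorial([f"P{i:02d}" for i in range(40)], conditions, seed=11)
```

With 40 participants this yields four cells of 10, letting the analysis estimate both main effects and the exercise-by-diet interaction.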

3. Cluster randomized designs

In cluster randomized trials, groups or “clusters” of participants are randomized to treatment conditions, rather than individuals.

This is often used when individual randomization is impractical or when the intervention is naturally applied at a group level.

It’s particularly useful in educational or community-based research where individual randomization might be disruptive or lead to treatment diffusion.

A study testing a new teaching method might randomize entire classrooms to either use the new method or continue with the standard curriculum.

The researchers would then compare student outcomes between the classrooms using the different methods, rather than randomizing individual students.

Use this design when:

  • The intervention is naturally delivered at the group level (e.g., to whole classrooms, clinics, or communities)
  • Randomizing individuals is impractical or would disrupt the setting
  • Treatment effects are likely to "spill over" between individuals in the same group

4. Within-participants (repeated measures) designs

In these designs, each participant experiences all treatment conditions, serving as their own control.

Within-participants designs are more statistically powerful as they control for individual differences. They require fewer participants, making them more efficient.

However, they’re only appropriate when the treatment effects are temporary and when you can effectively counterbalance to control for order effects.

A study on the effects of caffeine on cognitive performance might have participants complete cognitive tests on three separate occasions: after consuming no caffeine, a low dose of caffeine, and a high dose of caffeine.

The order of these conditions would be counterbalanced across participants to control for order effects.

Use this design when:

  • You have a smaller sample size available
  • Individual differences are likely to be large
  • The effects of the treatment are temporary
  • You can effectively control for order and carryover effects

5. Crossover designs

Crossover designs are a specific type of within-participants design where participants receive different treatments in different time periods.

This allows each participant to serve as their own control and can be more efficient than between-participants designs.

Crossover designs combine the benefits of within-participants designs (increased power, control for individual differences) with the ability to compare different treatments.

They’re particularly useful in clinical trials where you want each participant to experience all treatments, but need to ensure that the effects of one treatment don’t carry over to the next.

A study comparing two different pain medications might have participants use one medication for a month, then switch to the other medication for another month after a washout period.

Pain levels would be measured during both treatment periods, allowing for within-participant comparisons of the two medications’ effectiveness.

  • You want to compare the effects of different treatments within the same individuals
  • The treatments have temporary effects with a known washout period
  • You want to increase statistical power while using a smaller sample size
  • You want to control for individual differences in response to treatment

Prevents bias

In randomized control trials, participants must be randomly assigned to either the intervention group or the control group, such that each individual has an equal chance of being placed in either group.

This is meant to prevent selection bias and allocation bias and achieve control over any confounding variables to provide an accurate comparison of the treatment being studied.

Because patient characteristics that could influence the outcome are randomly distributed between the groups, any differences in outcome can reasonably be attributed to the treatment.

High statistical power

Because the participants are randomized and the characteristics between the two groups are balanced, researchers can assume that if there are significant differences in the primary outcome between the two groups, the differences are likely to be due to the intervention.

This allows researchers to be confident that randomized controlled trials have high statistical power compared with other types of study designs.

Since the focus of conducting a randomized control trial is eliminating bias, blinded RCTs can help minimize any unconscious information bias.

In a blinded RCT, the participants do not know which group they are assigned to or which intervention is received. This blinding procedure should also apply to researchers, health care professionals, assessors, and investigators when possible.

“Single-blind” refers to an RCT where participants do not know which treatment they are receiving, but the researchers do.

“Double-blind” refers to an RCT where both participants and data collectors are unaware of the assigned treatment.

Limitations

Costly and time-consuming

Some interventions require years or even decades to evaluate, rendering them expensive and time-consuming.

It might take an extended period of time before researchers can identify a drug’s effects or discover significant results.

Requires large sample size

There must be enough participants in each group of a randomized control trial so researchers can detect any true differences or effects in outcomes between the groups.

Researchers cannot detect clinically important results if the sample size is too small.
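A rough sense of "large enough" comes from a standard power calculation. The sketch below uses the textbook normal-approximation formula for comparing means between two groups; the effect sizes plugged in are illustrative, not taken from any trial discussed here.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided, two-sample
    comparison of means: n = 2 * (z_{1-a/2} + z_{power})^2 / d^2.

    This is the normal approximation; it ignores the small
    t-distribution correction, so dedicated planning tools report
    slightly larger numbers.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # about 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Detecting a medium effect (Cohen's d = 0.5) needs ~63 participants per
# arm; halving the effect size roughly quadruples the requirement.
```

The quadratic dependence on effect size is why trials of treatments with modest effects become so large and expensive.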

Change in population over time

Because randomized control trials are longitudinal in nature, it is almost inevitable that some participants will not complete the study, whether due to death, migration, non-compliance, or loss of interest in the study.

This tendency is known as selective attrition and can threaten the statistical power of an experiment.

Ethical limitations

Randomized control trials are not always practical or ethical, and such limitations can prevent researchers from conducting their studies.

For example, a treatment could be too invasive, or administering a placebo instead of an actual drug during a trial for treating a serious illness could deny a participant’s normal course of treatment. Without ethical approval, a randomized control trial cannot proceed.

Fictitious Example

An example of an RCT would be a clinical trial comparing a drug’s effect or a new treatment on a select population.

The researchers would randomly assign participants to either the experimental group or the control group and compare the differences in outcomes between those who receive the drug or treatment and those who do not.

Real-life Examples

  • Preventing illicit drug use in adolescents: Long-term follow-up data from a randomized control trial of a school population (Botvin et al., 2000).
  • A prospective randomized control trial comparing medical and surgical treatment for early pregnancy failure (Demetroulis et al., 2001).
  • A randomized control trial to evaluate a paging system for people with traumatic brain injury (Wilson et al., 2005).
  • Prehabilitation versus Rehabilitation: A Randomized Control Trial in Patients Undergoing Colorectal Resection for Cancer (Gillis et al., 2014).
  • A Randomized Control Trial of Right-Heart Catheterization in Critically Ill Patients (Guyatt, 1991).
  • Berry, R. B., Kryger, M. H., & Massie, C. A. (2011). A novel nasal expiratory positive airway pressure (EPAP) device for the treatment of obstructive sleep apnea: A randomized controlled trial. Sleep , 34, 479–485.
  • Gloy, V. L., Briel, M., Bhatt, D. L., Kashyap, S. R., Schauer, P. R., Mingrone, G., . . . Nordmann, A. J. (2013, October 22). Bariatric surgery versus non-surgical treatment for obesity: A systematic review and meta-analysis of randomized controlled trials. BMJ , 347.
  • Streeton, C., & Whelan, G. (2001). Naltrexone, a relapse prevention maintenance treatment of alcohol dependence: A meta-analysis of randomized controlled trials. Alcohol and Alcoholism, 36 (6), 544–552.

How Should an RCT be Reported?

Reporting of a Randomized Controlled Trial (RCT) should be done in a clear, transparent, and comprehensive manner to allow readers to understand the design, conduct, analysis, and interpretation of the trial.

The Consolidated Standards of Reporting Trials ( CONSORT ) statement is a widely accepted guideline for reporting RCTs.

Further Information

  • Cocks, K., & Torgerson, D. J. (2013). Sample size calculations for pilot randomized trials: a confidence interval approach. Journal of clinical epidemiology, 66(2), 197-201.
  • Kendall, J. (2003). Designing a research project: randomised controlled trials and their principles. Emergency medicine journal: EMJ, 20(2), 164.

Akobeng, A.K., Understanding randomized controlled trials. Archives of Disease in Childhood , 2005; 90: 840-844.

Bell, C. C., Gibbons, R., & McKay, M. M. (2008). Building protective factors to offset sexually risky behaviors among black youths: a randomized control trial. Journal of the National Medical Association, 100 (8), 936-944.

Bhide, A., Shah, P. S., & Acharya, G. (2018). A simplified guide to randomized controlled trials. Acta obstetricia et gynecologica Scandinavica, 97 (4), 380-387.

Botvin, G. J., Griffin, K. W., Diaz, T., Scheier, L. M., Williams, C., & Epstein, J. A. (2000). Preventing illicit drug use in adolescents: Long-term follow-up data from a randomized control trial of a school population. Addictive Behaviors, 25 (5), 769-774.

Demetroulis, C., Saridogan, E., Kunde, D., & Naftalin, A. A. (2001). A prospective randomized control trial comparing medical and surgical treatment for early pregnancy failure. Human Reproduction, 16 (2), 365-369.

Gillis, C., Li, C., Lee, L., Awasthi, R., Augustin, B., Gamsa, A., … & Carli, F. (2014). Prehabilitation versus rehabilitation: a randomized control trial in patients undergoing colorectal resection for cancer. Anesthesiology, 121 (5), 937-947.

Globas, C., Becker, C., Cerny, J., Lam, J. M., Lindemann, U., Forrester, L. W., … & Luft, A. R. (2012). Chronic stroke survivors benefit from high-intensity aerobic treadmill exercise: a randomized control trial. Neurorehabilitation and Neural Repair, 26 (1), 85-95.

Guyatt, G. (1991). A randomized control trial of right-heart catheterization in critically ill patients. Journal of Intensive Care Medicine, 6 (2), 91-95.

MediLexicon International. (n.d.). Randomized controlled trials: Overview, benefits, and limitations. Medical News Today. Retrieved from https://www.medicalnewstoday.com/articles/280574#what-is-a-randomized-controlled-trial

Wilson, B. A., Emslie, H., Quirk, K., Evans, J., & Watson, P. (2005). A randomized control trial to evaluate a paging system for people with traumatic brain injury. Brain Injury, 19 (11), 891-894.


Frequently asked questions

What’s the difference between a control group and an experimental group?

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Frequently asked questions: Methodology

Quantitative observations involve measuring or counting something and expressing the result in numerical form, while qualitative observations involve describing something in non-numerical terms, such as its appearance, texture, or color.

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Scope of research is determined at the beginning of your research process , prior to the data collection stage. Sometimes called “scope of study,” your scope delineates what will and will not be covered in your project. It helps you focus your work and your time, ensuring that you’ll be able to achieve your goals and outcomes.

Defining a scope can be very useful in any research project, from a research proposal to a thesis or dissertation . A scope is needed for all types of research: quantitative , qualitative , and mixed methods .

To define your scope of research, consider the following:

  • Budget constraints or any specifics of grant funding
  • Your proposed timeline and duration
  • Specifics about your population of study, your proposed sample size , and the research methodology you’ll pursue
  • Any inclusion and exclusion criteria
  • Any anticipated control , extraneous , or confounding variables that could bias your research if not accounted for properly.

Inclusion and exclusion criteria are predominantly used in non-probability sampling . In purposive sampling and snowball sampling , restrictions apply as to who can be included in the sample .

Inclusion and exclusion criteria are typically presented and discussed in the methodology section of your thesis or dissertation .

The purpose of theory-testing mode is to find evidence in order to disprove, refine, or support a theory. As such, generalisability is not the aim of theory-testing mode.

Due to this, the priority of researchers in theory-testing mode is to eliminate alternative causes for relationships between variables . In other words, they prioritise internal validity over external validity , including ecological validity .

Convergent validity shows how much a measure of one construct aligns with other measures of the same or related constructs .

On the other hand, concurrent validity is about how a measure matches up to some known criterion or gold standard, which can be another measure.

Although both types of validity are established by calculating the association or correlation between a test score and another variable , they represent distinct validation methods.

Validity tells you how accurately a method measures what it was designed to measure. There are 4 main types of validity :

  • Construct validity : Does the test measure the construct it was designed to measure?
  • Face validity : Does the test appear to be suitable for its objectives ?
  • Content validity : Does the test cover all relevant parts of the construct it aims to measure?
  • Criterion validity : Do the results accurately measure the concrete outcome they are designed to measure?

Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.

Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:

  • Concurrent validity is a validation strategy where the scores of a test and the criterion are obtained at the same time
  • Predictive validity is a validation strategy where the criterion variables are measured after the scores of the test

Attrition refers to participants leaving a study. It always happens to some extent – for example, in randomised control trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analysing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Construct validity refers to how well a test measures the concept (or construct) it was designed to measure. Assessing construct validity is especially important when you’re researching concepts that can’t be quantified and/or are intangible, like introversion. To ensure construct validity your test should be based on known indicators of introversion ( operationalisation ).

On the other hand, content validity assesses how well the test represents all aspects of the construct. If some aspects are missing or irrelevant parts are included, the test has low content validity.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related

Convergent and discriminant validity are subtypes of construct validity . Together, they help determine whether a test measures the construct it was designed to measure.

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.
  • Reproducing research entails reanalysing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalisations – often the goal of quantitative research . As such, a snowball sample is not representative of the target population, and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones. 

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .
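The referral chain can be modelled as a traversal of an acquaintance network: recruitment starts from seed participants and follows who-knows-whom links. The network below is entirely hypothetical; it exists only to show why inclusion probabilities are unequal.

```python
from collections import deque

# Hypothetical acquaintance network: who can refer whom.
knows = {
    "seed1": ["p1", "p2"],
    "p1": ["p3"],
    "p2": ["p3", "p4"],
    "p3": [],
    "p4": ["p5"],
    "p5": [],
    "outsider": ["p6"],  # no referral path from the seed reaches these two
    "p6": [],
}

def snowball(seeds, network, max_n):
    """Recruit via referrals, breadth-first, until max_n participants."""
    sample, queue, seen = [], deque(seeds), set(seeds)
    while queue and len(sample) < max_n:
        person = queue.popleft()
        sample.append(person)
        for referral in network.get(person, []):
            if referral not in seen:
                seen.add(referral)
                queue.append(referral)
    return sample

sample = snowball(["seed1"], knows, max_n=10)
# "outsider" and "p6" can never be sampled: selection is non-random by design.
```

Anyone not connected to the seeds has zero chance of inclusion, which is precisely the sampling bias described above.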

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extra-marital affairs)

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection , using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .
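The contrast can be made concrete in code: stratified sampling draws some units from every group, while cluster sampling keeps every unit from a few randomly chosen groups. The population and group names below are made up for illustration.

```python
import random

population = {
    "school_A": ["a1", "a2", "a3", "a4"],
    "school_B": ["b1", "b2", "b3", "b4"],
    "school_C": ["c1", "c2", "c3", "c4"],
}

def stratified_sample(groups, per_stratum, seed=None):
    """Randomly draw the same number of units from every stratum."""
    rng = random.Random(seed)
    return [u for units in groups.values() for u in rng.sample(units, per_stratum)]

def cluster_sample(groups, n_clusters, seed=None):
    """Randomly pick whole groups and keep all of their units."""
    rng = random.Random(seed)
    chosen = rng.sample(list(groups), n_clusters)
    return [u for g in chosen for u in groups[g]]

strat = stratified_sample(population, per_stratum=2, seed=1)  # 2 pupils per school
clust = cluster_sample(population, n_clusters=2, seed=1)      # every pupil in 2 schools
```

Note that every stratum appears in the stratified sample, whereas the cluster sample covers only the selected schools in full.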

When your population is large in size, geographically dispersed, or difficult to contact, it’s necessary to use a sampling method .

This allows you to gather information from a smaller part of the population, i.e. the sample, and make accurate statements by using statistical analysis. A few sampling methods include simple random sampling , convenience sampling , and snowball sampling .

The two main types of social desirability bias are:

  • Self-deceptive enhancement (self-deception): The tendency to see oneself in a favorable light without realizing it.
  • Impression management (other-deception): The tendency to inflate one’s abilities or achievements in order to make a good impression on other people.

Response bias refers to conditions or factors that take place during the process of responding to surveys, affecting the responses. One type of response bias is social desirability bias .

Demand characteristics are aspects of experiments that may give away the research objective to participants. Social desirability bias occurs when participants automatically try to respond in ways that make them seem likeable in a study, even if it means misrepresenting how they truly feel.

Participants may use demand characteristics to infer social norms or experimenter expectancies and act in socially desirable ways, so you should try to control for demand characteristics wherever possible.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

In general, the peer review process follows the following steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor screens the manuscript and decides whether to reject it and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Then, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to this stringent process they go through before publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analysing the data.

Blinding is important to reduce bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behaviour in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
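The numbering-plus-lottery procedure above amounts to shuffling the list of participant numbers and splitting it in half. A minimal sketch (group labels and sample size are illustrative):

```python
import random

def random_assignment(n_participants, seed=None):
    """Assign numbered participants to two groups with equal probability.

    Shuffling the ID list and splitting it is equivalent to a lottery
    draw: every participant has the same chance of landing in either
    group, and the group sizes stay balanced.
    """
    rng = random.Random(seed)
    ids = list(range(1, n_participants + 1))  # step 1: unique numbers
    rng.shuffle(ids)                          # step 2: the "lottery"
    half = n_participants // 2
    return {"control": sorted(ids[:half]), "experimental": sorted(ids[half:])}

groups = random_assignment(20, seed=7)
```

Passing a seed makes the allocation reproducible for auditing; omitting it gives a fresh random draw each time.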

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardisation and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.
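Those steps — handling missing values, duplicates, and outliers — can be sketched on a toy dataset. The numbers, the assumption that repeated values are double entries, and the 3.5 cutoff for the median-based outlier score are all invented for illustration.

```python
from statistics import median

raw = [70.2, 69.8, None, 71.0, 70.5, 70.5, 250.0]  # recorded weights (kg)

# 1. Drop missing values.
observed = [x for x in raw if x is not None]

# 2. Remove duplicate entries while preserving order
#    (here we assume the repeated 70.5 is an accidental double entry).
deduped = list(dict.fromkeys(observed))

# 3. Flag outliers with a robust modified z-score. A median-based rule is
#    used because one extreme value inflates the mean and SD enough to
#    mask itself in a small sample.
med = median(deduped)
mad = median(abs(x - med) for x in deduped)
clean = [x for x in deduped if 0.6745 * abs(x - med) / mad <= 3.5]
# The implausible 250.0 kg entry is removed; the plausible weights remain.
```

In practice each dropped or corrected value should be logged, so the cleaning itself stays transparent and reproducible.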

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyse, detect, modify, or remove ‘dirty’ data to make your dataset ‘clean’. Data cleaning is also called data cleansing or data scrubbing.

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimise or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Observer bias occurs when a researcher’s expectations, opinions, or prejudices influence what they perceive or record in a study. It usually affects studies when observers are aware of the research aims or hypotheses. This type of research bias is also called detection bias or ascertainment bias .

The observer-expectancy effect occurs when researchers influence the results of their own study through interactions with participants.

Researchers’ own beliefs and expectations about the study results may unintentionally influence participants through demand characteristics .

You can use several tactics to minimise observer bias .

  • Use masking (blinding) to hide the purpose of your study from all observers.
  • Triangulate your data with different data collection methods or sources.
  • Use multiple observers and ensure inter-rater reliability.
  • Train your observers to make sure data is consistently recorded between them.
  • Standardise your observation procedures to make sure they are structured and clear.
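For example, a minimal way to check inter-rater reliability between two observers is simple percent agreement. The ratings below are hypothetical:

```python
# Two observers code the same five behaviours; how often do they agree?
rater_a = ["aggressive", "neutral", "neutral", "aggressive", "neutral"]
rater_b = ["aggressive", "neutral", "aggressive", "aggressive", "neutral"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(f"Percent agreement: {percent_agreement:.0%}")  # prints "Percent agreement: 80%"
```

Percent agreement is only a first check; chance-corrected measures such as Cohen’s kappa are usually reported as well.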

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviours of your research subjects in real-world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as ‘people watching’ with a purpose.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment
  • Random assignment of participants to ensure the groups are equivalent

Depending on your study topic, there are various other methods of controlling variables .
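A minimal sketch of random assignment, using hypothetical participant IDs:

```python
import random

# Randomly assign 20 participants to a control group and an experimental
# group, so that the groups are equivalent on average.
random.seed(42)  # fixed seed only so the example is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]
shuffled = participants[:]
random.shuffle(shuffled)

control = shuffled[:10]       # receives a standard, fake, or no treatment
experimental = shuffled[10:]  # receives the experimental treatment
print(len(control), len(experimental))  # 10 10
```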

A true experiment (aka a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements and a continuum of response options, usually five or seven, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
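As a sketch with hypothetical items, combining individual Likert-type responses into an overall scale score (reverse-coding negatively worded items first) might look like:

```python
# Responses on a 5-point scale (1 = strongly disagree ... 5 = strongly agree).
# The item names and which item is reverse-coded are invented for illustration.
responses = {"item1": 4, "item2": 5, "item3": 2, "item4": 4}
reverse_coded = {"item3"}  # negatively worded item: reverse before summing

def scale_score(resp, reverse, points=5):
    total = 0
    for item, value in resp.items():
        # reversing maps 1->5, 2->4, ... on a 5-point scale
        total += (points + 1 - value) if item in reverse else value
    return total

score = scale_score(responses, reverse_coded)
print(score)  # 4 + 5 + (6 - 2) + 4 = 17
```

The summed score is what researchers sometimes treat as interval data, even though each item on its own is ordinal.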

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess. It should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data are available for analysis; other times your research question may only require a cross-sectional study to answer it.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyse behaviour over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study:

  • Repeated observations of the same sample
  • Observes the same group multiple times
  • Follows changes in participants over time

Cross-sectional study:

  • Observations at a single point in time
  • Observes different groups (a ‘cross-section’) in the population
  • Provides a snapshot of society at a given point

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no linear relationship between the variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.
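A minimal sketch of Pearson’s r , computed from its definition on hypothetical data (hours studied vs exam score):

```python
import math

x = [1, 2, 3, 4, 5]        # hours studied (hypothetical)
y = [52, 57, 63, 70, 78]   # exam score (hypothetical)

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))   # covariance term
    sa = math.sqrt(sum((p - ma) ** 2 for p in a))          # spread of a
    sb = math.sqrt(sum((q - mb) ** 2 for q in b))          # spread of b
    return cov / (sa * sb)

r = pearson_r(x, y)
print(round(r, 3))  # close to +1: a strong positive linear relationship
```

In practice you would use an established implementation (e.g., `scipy.stats.pearsonr`), which also returns a p value.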

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups . Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with ‘yes’ or ‘no’ (questions that start with ‘why’ or ‘how’ are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

Social desirability bias is the tendency for interview participants to give responses that will be viewed favourably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias in research can also occur in observations if the participants know they’re being observed. They might alter their behaviour accordingly.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews .

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualise your initial thoughts and hypotheses
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing high-quality, neutral interview questions.

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyse your data quickly and efficiently
  • Your research question depends on strong parity between participants, with environmental conditions held constant

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g., understanding the needs of your consumers or user testing your website).
  • You can control and standardise the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods ).

However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

If something is a mediating variable :

  • It’s caused by the independent variable
  • It influences the dependent variable
  • When it’s taken into account, the statistical correlation between the independent and dependent variables weakens or disappears compared to when it isn’t considered
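A rough illustration with made-up numbers: if M mediates the relationship between X and Y, the X-Y correlation shrinks once M is statistically controlled, here via the standard partial-correlation formula:

```python
import math

x = [1, 2, 3, 4, 5, 6]      # independent variable (hypothetical)
m = [2, 3, 3, 5, 6, 6]      # mediator, caused by X (hypothetical)
y = [5, 7, 6, 11, 12, 13]   # outcome, driven mainly by M (hypothetical)

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    sa = math.sqrt(sum((p - ma) ** 2 for p in a))
    sb = math.sqrt(sum((q - mb) ** 2 for q in b))
    return cov / (sa * sb)

r_xy, r_xm, r_my = pearson_r(x, y), pearson_r(x, m), pearson_r(m, y)
# partial correlation of X and Y, holding the mediator M constant
partial = (r_xy - r_xm * r_my) / math.sqrt((1 - r_xm**2) * (1 - r_my**2))
print(round(r_xy, 2), round(partial, 2))  # the partial correlation is much weaker
```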

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g., the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g., water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

You want to find out how blood sugar levels are affected by drinking diet cola and regular cola, so you conduct an experiment .

  • The type of cola – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of cola.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control, and randomisation.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomisation , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
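A toy sketch of matching, pairing hypothetical treated subjects with comparison subjects on one potential confounding variable (age):

```python
# All names and ages are invented for illustration.
treatment = [{"name": "T1", "age": 25}, {"name": "T2", "age": 40}]
comparison = [{"name": "C1", "age": 40}, {"name": "C2", "age": 25},
              {"name": "C3", "age": 31}]

def match_on_age(treated, pool):
    pairs, available = [], list(pool)
    for t in treated:
        for c in available:
            if c["age"] == t["age"]:     # exact match on the confounder
                pairs.append((t["name"], c["name"]))
                available.remove(c)      # each comparison subject used once
                break
    return pairs

pairs = match_on_age(treatment, comparison)
print(pairs)  # [('T1', 'C2'), ('T2', 'C1')]
```

Real matching often uses several confounders at once, or a propensity score, rather than a single exact match.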

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalisation .

In statistics, ordinal and nominal variables are both considered categorical variables .

Even though ordinal data can sometimes be numerical, not all mathematical operations can be performed on them.

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

‘Controlling for a variable’ means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

There are 4 main types of extraneous variables :

  • Demand characteristics : Environmental cues that encourage participants to conform to researchers’ expectations
  • Experimenter effects : Unintentional actions by researchers that influence study outcomes
  • Situational variables : Environmental variables that alter participants’ behaviours
  • Participant variables : Any characteristic or aspect of a participant’s background that could affect study results

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

The term ‘explanatory variable’ is sometimes preferred over ‘independent variable’ because, in real-world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so ‘explanatory variables’ is a more appropriate term.

On graphs, the explanatory variable is conventionally placed on the x -axis, while the response variable is placed on the y -axis.

  • If both variables are quantitative , use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called ‘independent’ because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation)

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it ‘depends’ on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalisation : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalisation: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity, and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity ; the other three are face validity , content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity: The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Attrition bias can skew your sample so that your final sample differs significantly from your original sample. Your sample is biased because some groups from your population are underrepresented.

With a biased final sample, you may not be able to generalise your findings to the original population that you sampled from, so your external validity is compromised.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalise to other groups of people) and ecological validity (whether you can generalise to other situations and settings).

The external validity of a study is the extent to which you can generalise your findings to different groups of people, situations, and measures.

Attrition bias is a threat to internal validity . In experiments, differential rates of attrition between treatment and control groups can skew results.

This bias can affect the relationship between your independent and dependent variables . It can make variables appear to be correlated when they are not, or vice versa.

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction, and attrition .

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .
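A small simulation makes the distinction concrete, using a hypothetical population of 1,000 values:

```python
import random
import statistics

random.seed(7)  # fixed seed only so the example is reproducible
population = list(range(1, 1001))
parameter = statistics.mean(population)    # population parameter: 500.5

sample = random.sample(population, 50)     # simple random sample of 50
statistic = statistics.mean(sample)        # sample statistic

sampling_error = statistic - parameter     # the gap between the two
print(parameter, statistic, sampling_error)
```

A different random sample would give a different statistic, and therefore a different sampling error.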

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing your population size by your target sample size.
  • Choose every k th member of the population as your sample.
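The three steps above can be sketched as follows, using a hypothetical population of 100 people:

```python
import random

# Step 1: define and list the population (names are invented).
population = [f"person_{i}" for i in range(1, 101)]

# Step 2: decide on the sample size and calculate the interval k.
sample_size = 10
k = len(population) // sample_size   # k = 100 / 10 = 10

# Step 3: pick a random start within the first interval,
# then choose every kth member.
random.seed(1)
start = random.randrange(k)
sample = population[start::k]
print(len(sample))  # 10
```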

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 × 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method .
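A minimal sketch of proportional stratified sampling in Python, assuming each unit carries its stratum label (the population and sizes here are invented for illustration):

```python
import random
from collections import defaultdict

def stratified_sample(population, strata_of, sample_size):
    """Proportionally sample each stratum using simple random sampling within it."""
    strata = defaultdict(list)
    for unit in population:
        strata[strata_of(unit)].append(unit)
    sample = []
    for members in strata.values():
        # Proportional allocation; note that rounding can make the total
        # deviate slightly from sample_size for some stratum sizes.
        n = round(sample_size * len(members) / len(population))
        sample.extend(random.sample(members, n))
    return sample

population = [("male", i) for i in range(60)] + [("female", i) for i in range(40)]
sample = stratified_sample(population, strata_of=lambda u: u[0], sample_size=10)
```

With a 60/40 split and a sample size of 10, proportional allocation draws 6 units from the first stratum and 4 from the second, preserving the population's composition in the sample.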

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

In multistage sampling , you can use probability or non-probability sampling methods.

For a probability sample, you have to use probability sampling at every stage. You can mix methods by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
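The single- and double-stage variants above can be sketched in Python (the `schools` clusters are invented for illustration):

```python
import random

def cluster_sample(clusters, n_clusters, units_per_cluster=None):
    """Single- or double-stage cluster sampling.

    Stage 1: randomly select whole clusters.
    Stage 2 (optional): randomly sample units within each selected cluster.
    """
    chosen = random.sample(clusters, n_clusters)
    if units_per_cluster is None:            # single-stage: keep every unit
        return [u for c in chosen for u in c]
    return [u for c in chosen                # double-stage: subsample within clusters
            for u in random.sample(c, units_per_cluster)]

schools = [[f"s{c}_pupil{i}" for i in range(30)] for c in range(10)]
single_stage = cluster_sample(schools, n_clusters=3)                       # 90 pupils
double_stage = cluster_sample(schools, n_clusters=3, units_per_cluster=5)  # 15 pupils
```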

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, officials at the Census Bureau randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data are then collected from as large a percentage as possible of this random subset.
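Using Python's standard library, an equal-probability draw from a full sampling frame looks like this (the frame of 1,000 members is invented for illustration):

```python
import random

population = list(range(1000))          # sampling frame: a list of every member
sample = random.sample(population, 50)  # each member has an equal chance: 50/1000
```

`random.sample` draws without replacement, so no member can appear in the sample twice.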

Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from county to city to neighbourhood) to create a sample that’s less expensive and time-consuming to collect data from.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling , and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference between this and a true experiment is that the groups are not randomly assigned.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Triangulation can help:

  • Reduce bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labour-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analysing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

Exploratory research explores the main aspects of a new or barely researched question.

Explanatory research explains the causes and effects of an already widely researched question.

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

An observational study could be a good fit for your research if your research question is based on things you observe. If you have ethical, logistical, or practical concerns that make an experimental design challenging, consider an observational study. Remember that in an observational study, it is critical that there be no interference or manipulation of the research subjects. Since it’s not an experiment, there are no control or treatment groups either.

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analysed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analysed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualise your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analysed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
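One common way to calculate how likely an observed difference could have arisen by chance, without making distributional assumptions, is a permutation test. The sketch below is illustrative only; the two groups of measurements are invented:

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test: how often does shuffling the group labels
    produce a mean difference at least as extreme as the observed one?"""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b  # a copy; the original lists are left untouched
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / n_permutations  # p-value: probability under chance alone

treatment = [12.1, 13.4, 11.8, 14.0, 12.9, 13.7]
control = [10.2, 11.1, 10.8, 11.5, 10.9, 11.3]
p = permutation_test(treatment, control)
```

A small p-value means that a label shuffle almost never reproduces a difference as large as the observed one, so the observed pattern is unlikely to be due to chance alone.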

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organisation to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organise your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.



Experimental vs. control group explained.

Insight7


Group Comparison Analysis plays a pivotal role in experimental research. By examining the differences between experimental and control groups, researchers can draw meaningful conclusions about specific interventions. This process helps in determining whether observed effects are indeed attributable to the treatment or merely due to chance.

In any experiment, understanding how participants respond to different conditions is crucial. Group Comparison Analysis allows scientists to tease apart these responses, yielding insights that can inform various fields. Ultimately, this analytical approach not only enhances the validity of research findings but also supports the development of effective strategies based on empirical evidence.

The Basics of Experimental Groups

In research, understanding the distinction between experimental and control groups is essential for accurate findings. An experimental group consists of participants exposed to a variable being tested, while a control group serves as the baseline for comparison. This design enhances the reliability of results by isolating the effects of the independent variable. To conduct a thorough group comparison analysis, researchers need to ensure that both groups are similar in characteristics, minimizing biases.

The selection of participants plays a crucial role in the integrity of the study. Random assignment helps to ensure that individuals in both groups do not display pre-existing differences. This allows researchers to draw valid conclusions regarding the impact of the experimental treatment. Analyzing data from both groups provides insights into whether the intervention produces the expected changes. Effective comparison between these groups is foundational for advancing scientific knowledge. Understanding these basics will guide you through interpreting research outcomes with confidence.

Definition and Purpose

Understanding the experimental and control groups is essential in any Group Comparison Analysis. The experimental group receives the treatment or intervention, while the control group serves as a baseline for comparison. This structure is pivotal in determining the effectiveness of a given treatment and minimizes bias, ensuring the results are reliable.

The purpose of utilizing these groups lies in establishing a clear cause-and-effect relationship. By comparing outcomes from both groups, researchers can identify any significant differences attributable to the treatment. This comparison not only enhances the validity of findings but also influences data-driven decisions in various fields, including healthcare and marketing. Ultimately, the insight gained from this method fosters informed strategies that can lead to improved outcomes, whether in product development or user experience.

Designing an Experimental Group: Group Comparison Analysis

Designing an experimental group involves carefully planning each aspect to ensure valid results through group comparison analysis. This analysis is crucial for distinguishing the effects of a treatment or intervention from the natural variability found in any population. To effectively design your experimental group, you need to determine the characteristics that will make it comparable to the control group.

A proper comparison requires selection criteria such as age, gender, and baseline characteristics. This helps ensure that differences in outcomes arise solely from the intervention rather than from pre-existing variances. Next, consider randomization; randomly assigning participants reduces bias and enhances the study's reliability. Lastly, maintaining consistency in treatment delivery is essential. This ensures that everyone in the experimental group receives the same intervention, thus allowing for an accurate analysis of effects. By following these principles, your group comparison analysis can yield insightful and actionable outcomes.

The Role of Control Groups in Research

Control groups play a vital role in research by providing a benchmark to which experimental groups can be compared. Through group comparison analysis, researchers can discern the effects of an intervention by measuring outcomes against the control group that does not receive the treatment. This approach ensures that any observed changes in the experimental group can be more confidently attributed to the treatment rather than other external factors.

Moreover, control groups help minimize bias and variability in research outcomes. By allowing researchers to assess how participants behave under standard conditions, it becomes easier to isolate the impact of the experimental variable. Understanding these dynamics improves the reliability of results, making findings more valid and generalizable. Therefore, incorporating control groups in studies is essential for achieving accurate and trustworthy conclusions that can inform future practices or theories.

Definition and Purpose of Control Groups in Group Comparison Analysis

Control groups are essential in group comparison analysis, serving as benchmarks for experimental outcomes. These groups consist of participants who do not receive the treatment or intervention under investigation, allowing researchers to isolate the impact of specific variables. By comparing the results from the experimental group against the control group, researchers can determine the effectiveness of the intervention in a more precise manner.

The purpose of control groups is to minimize biases and ensure valid conclusions. They help in identifying whether observed changes in the experimental group are genuinely caused by the treatment or merely due to external factors. Additionally, control groups enable replication of studies, which is vital for affirming findings and fostering scientific credibility. In summary, control groups are indispensable tools in group comparison analysis, providing clarity and enhancing the reliability of research outcomes.

Examples of Control Group Usage

Control groups are essential in various fields, enabling researchers to validate their findings by providing a baseline for comparison. For instance, in a clinical trial assessing a new medication, one group receives the drug while a control group receives a placebo. This setup allows for a clearer understanding of the drug's effectiveness versus no treatment at all.

In market research, control groups allow analysts to examine consumer behavior under different conditions. A common example is testing two marketing strategies: one group receives traditional ads, while the control group is exposed to digital campaigns. Group comparison analysis reveals which method resonates better with the audience, helping to refine marketing approaches and optimize future campaigns. Through these examples, it's evident that control groups are invaluable in ensuring scientific rigor and making informed decisions across various domains.

Conclusion: The Importance of Group Comparison Analysis in Research

Group Comparison Analysis serves as a critical tool for researchers, allowing them to discern the differences between experimental and control groups. By methodically comparing these groups, researchers can assess the effectiveness of interventions or treatments. This type of analysis provides vital insights, facilitating a deeper understanding of how variables impact outcomes.

Furthermore, the importance of this analysis extends beyond mere statistical significance. It fosters evidence-based decision-making, ensuring that findings are reliable and applicable in real-world settings. Ultimately, understanding the dynamics between different groups equips researchers with the knowledge to make informed conclusions, driving advancements in various fields of study.

Experimental study on different phytoremediation of heavy metal pollution in HDS sediment of copper mines

  • Research Article
  • Published: 17 August 2024


  • Zhuyu Zhao   ORCID: orcid.org/0000-0003-2397-1144 1 , 2 , 5 ,
  • Ruoyan Cai 2 , 3 ,
  • Jinchun Xue   ORCID: orcid.org/0000-0001-8519-4534 2 , 4 ,
  • Li Tan 6 &
  • Chuanliang Yan 1 , 5  


HDS sediment is a type of solid waste produced when the high-density sludge (HDS) process is used to treat acid wastewater from copper mines. Phytoremediation allows these sediment resources to be used rationally and contributes to the ecological restoration of mines.

To reveal the effects of different phytoremediation treatments on the heavy metal content, enrichment capacity, and microbial diversity of the HDS sediments of copper mines, this experiment used the HDS sediments of a copper mine without phytoremediation as the control group, while sediments planted with black locust ( Robinia pseudoacacia ), slash pine ( Pinus elliottii Engelmann ) and Chinese white poplar ( Populus tomentosa Carr. ) served as the test groups. The physical and chemical properties, heavy metal pollution, and bioaccumulation capacity of the HDS sediments were analysed under the three phytoremediation treatments.

The results show that the different phytoremediation treatments reduced the sediment's conductivity and adjusted its pH to a range suitable for plant growth. The BCF Shoot and BTF values of Chinese white poplar for Cd and Zn, and of slash pine for Pb, were all greater than 1.

Conclusions

As shown by the bioconcentration and biotransport coefficient results, Chinese white poplar is a Cd- and Zn-enriching plant, while slash pine is a Pb-enriching plant.


Data availability

The data used in this study are confidential.


This work was funded by the National Natural Science Foundation of China (Grants 51664016, 51664017), the Key R&D projects in Jiangxi Province (20212BBG73013), and Jiangxi Copper Company Limited Chengmenshan Copper Technology Projects (CTYJ2022006, CMS-23SCJS-07JS-01F).

Author information

Authors and affiliations

School of Petroleum Engineering, China University of Petroleum (East China), Qingdao, 266580, China

Zhuyu Zhao & Chuanliang Yan

School of Energy and Mechanical Engineering, Jiangxi University of Science and Technology, Nanchang, 330013, China

Zhuyu Zhao, Ruoyan Cai & Jinchun Xue

Zhejiang Shangfeng High-Tech Specialized Wind Industrial Co, LTD, Shaoxing, 311231, China

Key Laboratory of Environmental Geotechnical and Engineering Hazard Control of Jiangxi Province, Ganzhou, 341000, China

Jinchun Xue

State Key Laboratory of Deep Oil and Gas, China University of Petroleum (East China), Qingdao, 266580, China

Emergency Management Administration of Haojiang District, Shantou, 515071, China


Contributions

Zhuyu Zhao and Ruoyan Cai performed the data analyses and wrote the manuscript; Jinchun Xue contributed to the conception of the study; Li Tan and Chuanliang Yan helped perform the analysis with constructive discussions.

Corresponding author

Correspondence to Jinchun Xue .

Ethics declarations

Ethics approval

Not applicable.

Consent to participate

Consent for publication

Conflicts of interest/Competing interests

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

Additional information

Responsible Editor: Juan Barcelo.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhuyu Zhao and Ruoyan Cai share first authorship, reflecting equal contributions.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Zhao, Z., Cai, R., Xue, J. et al. Experimental study on different phytoremediation of heavy metal pollution in HDS sediment of copper mines. Plant Soil (2024). https://doi.org/10.1007/s11104-024-06886-2


Received : 09 April 2024

Accepted : 29 July 2024

Published : 17 August 2024

DOI : https://doi.org/10.1007/s11104-024-06886-2


  • Copper mine
  • HDS sediment
  • Phytoremediation
  • Heavy metal
  • Open access
  • Published: 19 August 2024

A study on selected responses and immune structures of broiler chickens with experimental colibacillosis with or without florfenicol administration

  • Zahra Ghahramani 1 ,
  • Najmeh Mosleh 1 ,
  • Tahoora Shomali 2 ,
  • Saeed Nazifi 3 &
  • Azizollah Khodakaram-Tafti 4  

BMC Veterinary Research volume 20, Article number: 371 (2024)


Colibacillosis in broiler chickens is associated with economic loss and localized or systemic infection. Antibacterial therapy is usually the last resort. Insight into the disease pathogenesis, host responses and plausible immunomodulatory effects of antibacterials is important when choosing an antibacterial agent and optimizing treatment. In this study, we evaluated selected responses of broiler chickens experimentally infected with Escherichia coli ( E. coli ), with or without florfenicol treatment. Chickens ( n  = 70, 5 weeks old) were randomly assigned to four groups. The control groups were normal control (NC) and intratracheal infection control (ITC; received sterile bacterial medium). The experimental groups were intratracheal infection (IT), which received the bacterial suspension, and intratracheal infection with florfenicol administration (ITF).

Florfenicol reversed the decreased albumin/globulin ratio to the level of the control groups ( p  > 0.05). Serum interleukin 10 (IL-10) and interferon-gamma (IFN-γ) concentrations decreased in IT birds compared with the NC group. Florfenicol decreased the serum interleukin 6 (IL-6) concentration compared with the IT group. Milder signs of inflammation, septicemia and left shift were observed in the leukogram of the ITF group. Florfenicol decreased the severity of histopathological lesions in the lungs and liver. Depletion of lymphoid tissue was detected in the spleen, thymus and bursa of the IT group but was absent in ITF birds. The number of colony forming units of E. coli in liver samples of the ITF group was only slightly lower than in IT birds.

Conclusions

Experimental E. coli infection of chickens by intratracheal route is associated with remarkable inflammatory responses as shown by changes in biochemical and hematological parameters. Histopathological lesions in lymphoid organs (especially in the spleen) were also prominent. Florfenicol has positive immunomodulatory effects and improves many of the lesions before the full manifestation of its antibacterial effects. These effects of florfenicol should be considered in pharmacotherapy decision-making process.


Colibacillosis, which is caused by avian pathogenic Escherichia coli (APEC), is commonly diagnosed in broiler chicken flocks already afflicted with viral diseases or other stressors. The bacterium can also act as a primary pathogen. The disease is associated with appreciable economic loss and adversely affects the birds' welfare. Unfortunately, efficient vaccines are not currently available, and antibacterial therapy of infected flocks is usually the last resort [ 18 ,  11 ].

Gaining insight into host responses against colibacillosis and antibacterial therapy is important in tailoring control and treatment strategies. Controlled experimental studies with APEC as the primary pathogen are mainly used to evaluate immune responses in chickens with colibacillosis. According to these experiments, host defense against colibacillosis comprises a strong early innate immune response followed by humoral and cell-mediated adaptive responses. Many factors related to the bacteria, the host and even sampling are associated with changes in host responses to APEC. Moreover, discrepancies are observed between results from ex vivo and in vivo studies [ 1 ].

Florfenicol is a broad-spectrum antibacterial agent and a derivative of chloramphenicol, with the advantage of applicability in food-producing animals. The drug is indicated for use in broiler flocks with colibacillosis according to label instructions and shows good therapeutic effects. Florfenicol also shows immunomodulatory and anti-inflammatory properties in veterinary species [ 22 ]. In normal chickens, florfenicol has shown inconsistent immunomodulatory effects. Khalifeh et al. [ 12 ] reported that florfenicol administration to layers suppresses Newcastle disease (ND) antibody production measured by both HI and ELISA. In a study by Han et al. [ 7 ], administration of florfenicol to 1-day-old broilers for 5 consecutive days resulted in an increased antibody titer against ND vaccine. The latter researchers found no change in total white blood cell (WBC) count or WBC subsets, or in serum interferon-gamma (IFN-γ) and interleukin 4 (IL-4), in treated broilers at 21 and 42 days of age. Serum interleukin 2 (IL-2) content was decreased while the peripheral lymphocyte transformation rate was increased in 42-day-old treated birds compared with the control group. More recently, immunotoxic effects of florfenicol administered for 6 consecutive days to 3-day-old chickens were reported, based on serum ND antibody titers, cytokine levels and histological features of the bursa of Fabricius [ 16 ].

Available reports on the effects of florfenicol on immune responses of broilers with colibacillosis are scarce. In one study [ 8 ], relatively high dosages (30 mg/kg or 60 mg/kg) of florfenicol were administered to broilers experimentally infected with E. coli O78 by the intratracheal route. The authors found that 60 mg/kg florfenicol increased hemagglutination inhibition (HI) and enzyme-linked immunosorbent assay (ELISA) antibody titers against ND vaccination. Moreover, expression of interferon-inducible genes in spleen tissue increased at both dosages. Histopathological examination of bone marrow showed moderate atrophy of the hematopoietic lineage and increased fat cells in both florfenicol-treated groups, while no specific change in the spleen was reported. Unfortunately, in that study most of the assays were performed very late, after confirmed clearance of E. coli from the body. The authors therefore could not precisely attribute the observed effects to immunomodulatory properties of florfenicol, and deduced that its antibacterial effects may be involved in at least some of them.

In the present study, we evaluated selected immune responses of broilers experimentally infected with E. coli , with and without treatment with routine clinical dosages of florfenicol, in the early days after infection, by considering cytokine levels, hematological changes and histopathological examination of the immune organs (spleen, thymus and bursa of Fabricius).

Materials and methods

Bacteria and experimental infection

The bacterium used in this study was an APEC strain of serotype O2, originally isolated from broiler chickens and provided by the Razi Vaccine and Serum Research Institute, Iran. According to disk diffusion testing, the isolate was not resistant to florfenicol. A bacterial suspension was prepared in tryptic soy broth (TSB) medium, and the dose for administration to broilers was chosen based on our previous study [ 19 ]. Birds were infected with 1 mL of the bacterial suspension at a concentration of 7.1 × 10^8 CFU/mL by the intratracheal route, as described by Kromann et al. [ 13 ].

Experimental design

Seventy 1-day-old Cobb chicks of both sexes were included in the study. Chickens were purchased from a local commercial hatchery specifically for this study. Birds were reared under similar conditions according to the Cobb Broiler Management Guide. No vaccination or drug administration was performed during the rearing period, and biosecurity practices were followed to prevent infectious diseases. At the age of 5 weeks, birds were randomly assigned to four groups: two control groups ( n  = 15 each) and two experimental groups ( n  = 20 each), with birds in each group allocated into 5 replicates. The control groups were the normal control (NC) group, comprising normal birds with no specific treatment, and the intratracheal infection control (ITC) group, in which birds received 1 mL of sterile TSB medium by the intratracheal route. The experimental groups were the intratracheal infection (IT) group, which received 1 mL of the bacterial suspension by the intratracheal route, and the intratracheal infection with florfenicol administration (ITF) group, which in addition to being infected was treated with florfenicol (Fluorfen®, 10% solution; Rooyan Darou Pharmaceutical Co., Tehran, Iran) at a dosage of 1 mL/L of drinking water (20 mg/kg per day) according to label instructions. Florfenicol administration was started after overt manifestation of clinical signs (roughly 12 h post-inoculation) and lasted for 3 consecutive days.
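As a sanity check on the dosing arithmetic, the drinking-water inclusion rate can be converted to a per-kilogram dose. The body weight and daily water intake below are assumed round numbers for a 5-week-old broiler, not measurements from the study; actual intake varies with age, ambient temperature and health status.

```python
# Hypothetical worked example of the drinking-water dose conversion.
product_conc_mg_per_ml = 100.0  # 10% w/v solution = 100 mg florfenicol per mL
inclusion_ml_per_l = 1.0        # label rate: 1 mL of product per litre of water
water_conc_mg_per_l = product_conc_mg_per_ml * inclusion_ml_per_l  # 100 mg/L

body_weight_kg = 2.0        # assumed broiler body weight
daily_water_intake_l = 0.4  # assumed daily water intake

daily_dose_mg = water_conc_mg_per_l * daily_water_intake_l  # 40 mg/bird/day
dose_mg_per_kg = daily_dose_mg / body_weight_kg             # 20 mg/kg/day
print(dose_mg_per_kg)
```

Under these assumptions the label rate reproduces the stated 20 mg/kg per day.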

At the end of the antibacterial treatment period, 10 birds from each group were randomly selected for blood sampling from the wing vein. About 4 mL of blood was collected from each bird in plain and sodium citrate-coated vacutainer tubes (2 mL each). Five birds from each group were euthanized by cervical dislocation after concussion. Under aseptic conditions, the right lobe of the liver was removed and transferred to sterile containers for total bacterial count. Moreover, samples of liver, lung, spleen, thymus and bursa of Fabricius were collected in 10% neutral buffered formalin for histopathological examination.

All procedures used in this study were conducted in accordance with European Union commission legislation on the protection of animals used for scientific purposes [ 6 ] and were approved by an institutional ethical review committee (code number: 1GCB3M163773).

Serum total protein, albumin and total globulin assays

For serum collection, blood samples in plain tubes were centrifuged for 5 min at 2500 rpm. Harvested sera were kept at -20 °C until use.

Serum total protein was determined by a photometric test according to the biuret method, and albumin by the bromocresol green spectrophotometric method. Both kits were provided by Pars Azmun Co., Tehran, Iran, and all methods were performed according to the kit protocols. The total globulin fraction was determined by subtracting albumin from total protein.
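The globulin-by-subtraction step and the albumin/globulin (A/G) ratio reported in the Results can be expressed directly. The serum values below are illustrative, not study data.

```python
def total_globulin(total_protein, albumin):
    """Total globulin estimated as total protein minus albumin (g/dL)."""
    return total_protein - albumin

def ag_ratio(albumin, globulin):
    """Albumin/globulin ratio; falls when globulins rise during inflammation."""
    return albumin / globulin

tp, alb = 4.5, 1.8               # illustrative serum values in g/dL
glob = total_globulin(tp, alb)   # 2.7 g/dL
print(round(ag_ratio(alb, glob), 2))  # 0.67
```

A rise in globulins with stable albumin, as seen in the IT group, lowers this ratio even when total protein increases.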

Evaluation of serum cytokine levels

Chicken-specific ELISA kits were used to assay IFN-γ, interleukin 6 (IL-6) and IL-10 in sera. All kits were provided by ZellBio GmbH, Ulm, Germany, and were based on the one-step biotin double-antibody sandwich ELISA method, with intra-assay and inter-assay coefficients of variation (CVs) of < 10% and < 12%, respectively. All assays were performed according to the manufacturer's instructions.
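The quoted intra- and inter-assay CVs are simply sample-SD/mean ratios across replicate measurements. A minimal sketch with hypothetical optical-density readings (not values from the kits used here):

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation (%) = sample SD / mean * 100."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

# Hypothetical triplicate OD readings for one serum sample:
wells = [0.52, 0.55, 0.50]
cv = cv_percent(wells)
print(cv < 10)  # within the kit's stated intra-assay limit
```

An intra-assay CV is computed across replicates within one plate; the inter-assay CV applies the same formula across plates or runs.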

Determination of total WBC, WBC subsets and thrombocytes in blood

Total WBC and thrombocyte counts were determined by manual technique. To determine the differential leukocyte counts (heterophils, lymphocytes, monocytes and immature white blood cells), a drop of blood was thinly spread over a glass slide, air-dried and stained with the Giemsa technique. One hundred cells were then counted and classified. The absolute number of each WBC subset was calculated from its percentage and the total WBC count [ 21 ,  24 ].
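The conversion from a 100-cell differential to absolute counts is a straightforward proportion. The leukogram below is hypothetical, not from the study.

```python
def absolute_counts(total_wbc, percentages):
    """Convert differential percentages to absolute counts (cells/uL)."""
    return {cell: total_wbc * pct / 100 for cell, pct in percentages.items()}

# Hypothetical leukogram: total WBC of 25,000 cells/uL
diff = {"heterophils": 40, "lymphocytes": 48, "monocytes": 10, "immature": 2}
counts = absolute_counts(25_000, diff)
print(counts["heterophils"])  # 10000.0
```

This is why a subset's percentage and its absolute count can move in opposite directions, as seen for lymphocytes in the Results.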

Histopathological examination of lymphoid organs, lung and liver

After fixation, samples were routinely processed and embedded in paraffin. Five-μm-thick sections were cut from the paraffin blocks and stained with hematoxylin and eosin for examination under a light microscope [ 3 ]. Different histopathological lesions were determined in each tissue (Nakamura et al. [ 17 ] and Usman et al. [ 23 ], with modifications). Lesions in all tissues were semi-quantitatively scored from 0 to 3: 0, no lesion; 1, mild; 2, moderate; and 3, severe lesion.

Total bacterial count of liver

Liver samples were used immediately for bacterial counting. Samples were weighed and placed in boiling normal saline for 4 s for surface sterilization. They were then transferred to sterile bags, and sterile normal saline was added to each bag at a 9:1 (saline:tissue) w/w ratio. After mechanical homogenization, serial dilutions (10^-2 to 10^-6) were made in sterile microtubes. Ten µL of each dilution was transferred to MacConkey agar plates and incubated for 18–24 h at 37 °C. Plates containing fewer than 250 colonies were used for colony counting, and the total bacterial count (CFU/g tissue) was calculated (Adzitey and Yildiz [ 2 ], with modifications).
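The CFU/g back-calculation implied by this plating scheme can be sketched as follows. The 9:1 saline:tissue homogenate contributes an approximately 10x factor (assuming tissue density near 1 g/mL); the colony count below is hypothetical.

```python
def cfu_per_gram(colonies, dilution, plated_volume_ml, homogenate_factor=10):
    """
    Back-calculate CFU per gram of tissue:
    colonies / plated volume  -> CFU/mL of the plated dilution,
    divided by the dilution   -> CFU/mL of the homogenate,
    times the homogenate factor (9:1 saline:tissue, ~10x) -> CFU/g.
    """
    return colonies / plated_volume_ml / dilution * homogenate_factor

# Hypothetical: 120 colonies from 10 uL (0.01 mL) of the 10^-4 dilution
print(cfu_per_gram(120, 1e-4, 0.01))  # 1.2e9 CFU/g
```

Restricting counts to plates with fewer than 250 colonies, as in the protocol, keeps the count within the range where colonies remain distinguishable.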

Statistical analysis

The Shapiro–Wilk normality test was performed on all data sets. Based on the results, data were analyzed by one-way ANOVA followed by Tukey's multiple comparison test, or by the Kruskal–Wallis test followed by Dunn's multiple comparisons test, as appropriate. P  < 0.05 was considered the level of significance.
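When normality fails and the Kruskal–Wallis route is taken, the test statistic is computed from pooled ranks. A self-contained sketch of the H statistic (tie-aware mid-ranks, without the tie-variance correction), using made-up group values rather than study data:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic from pooled mid-ranks (no tie correction)."""
    pooled = sorted(x for g in groups for x in g)
    ranks = {}
    i = 0
    while i < len(pooled):  # assign the average rank to each tied run
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    h = sum(sum(ranks[x] for x in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)

# Made-up serum values for two groups of five birds (not study data):
nc = [3.1, 3.3, 3.0, 3.2, 3.4]
it = [4.8, 5.1, 4.9, 5.3, 5.0]
print(kruskal_wallis_h(nc, it))  # 6.818..., compared to chi-square with k-1 df
```

SciPy's `scipy.stats.kruskal` gives the same statistic with tie correction plus a p-value; the sketch above only makes the rank computation explicit.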

It should be noted that data related to clinical signs, mortality, gross pathology, pathogenesis, etc. are reported in our previously published paper [ 19 ].

Serum total protein, albumin and total globulin levels

Birds in the IT group showed significantly increased serum levels of total protein and total globulin compared with control birds in the NC group ( p  < 0.01 and p  < 0.0001, respectively). Although a slight decrease was observed in the serum albumin levels of the IT group in comparison with the control groups, the change was not significant ( p  > 0.05). Florfenicol administration to birds in the ITF group was associated with a significant decrease in serum total protein and globulin concentrations compared with the untreated birds in the IT group ( p  < 0.01 and p  < 0.0001, respectively). No significant difference was observed in these parameters between the ITF and control groups ( p  > 0.05). The albumin/globulin (A/G) ratio was significantly decreased in the IT group compared with the control groups ( p  < 0.05 for both comparisons). The A/G ratio of ITF birds did not differ significantly from that of the control groups or the IT birds ( p  > 0.05) (Fig.  1 ).

Fig. 1

Serum levels (mean and SD) of total protein, albumin, globulin and A/G ratio in different groups. NC: normal control, normal birds with no specific treatment, ITC: intratracheal infection control, birds received sterile medium by intratracheal route; IT: Intratracheal infection group, birds received bacterial suspension inoculated by intratracheal route and ITF: intratracheal infection with florfenicol administration group, in addition to being infected, birds were treated with florfenicol. Values in columns without a common letter are significantly different at p  < 0.05

Serum levels of cytokines

Induction of colibacillosis in birds of IT group was associated with significantly lower serum concentrations of IL-10 and IFN-γ as compared to normal birds of NC group ( p  < 0.01 for both comparisons). IL-6 levels remained statistically similar between these two groups ( p  > 0.05). Administration of florfenicol to birds with colibacillosis (ITF group) resulted in appreciable decrease in serum concentration of IL-6 as compared to birds in IT group ( p  < 0.05). Antibiotic therapy of birds in ITF group had no significant effect on serum IL-10 or IFN-γ as compared to IT birds ( p  > 0.05) (Fig.  2 ).

Fig. 2

Serum levels (mean and SD) of cytokines in different groups. NC: normal control, normal birds with no specific treatment, ITC: intratracheal infection control, birds received sterile medium by intratracheal route; IT: Intratracheal infection group, birds received bacterial suspension inoculated by intratracheal route and ITF: intratracheal infection with florfenicol administration group, in addition to being infected, birds were treated with florfenicol. MS: missed samples. Values in columns without a common letter are significantly different at p  < 0.05

White blood cells and thrombocytes

Birds in IT group showed a significant increase in the number of WBCs as compared to control groups ( p  < 0.0001 for both comparisons). Birds in ITF group had a significantly lower number of WBCs than IT group ( p  < 0.05); however, this value remained statistically higher in ITF birds than in NC or ITC groups ( p  < 0.01 for both comparisons).

Although the percentage of heterophils was statistically the same among groups, birds in IT and ITF groups showed significantly higher numbers of heterophils as compared to birds in NC group ( p  < 0.0001 and p  < 0.001, respectively). Florfenicol administration resulted in an appreciable decrease in heterophil counts as compared to IT group ( p  < 0.05).

Lymphocyte counts in birds of IT and ITF groups were significantly higher than in NC birds ( p  < 0.001 and p  < 0.05, respectively), while IT and ITF groups showed statistically the same lymphocyte counts ( p  > 0.05). The percentage of lymphocytes in IT and ITF groups was lower than in NC birds ( p  < 0.001 and p  < 0.05, respectively); this parameter did not differ significantly between IT and ITF groups ( p  > 0.05).

Regarding monocytes, birds in IT and ITF groups showed significantly higher numbers of these cells as compared to NC birds ( p  < 0.0001 and p  < 0.01, respectively), while birds in ITF group had a lower number of monocytes than birds in IT group ( p  < 0.001). The percentage of monocytes in IT and ITF groups was also higher than in NC birds ( p  < 0.0001 and p  < 0.01), and birds in ITF group showed a lower percentage of monocytes than IT birds ( p  < 0.05).

Birds in IT group had significantly higher counts and percentages of immature white blood cells as compared to NC group ( p  < 0.0001 for both comparisons). The number and percentage of these cells in ITF group were statistically the same as in NC birds ( p  > 0.05) and significantly lower than in IT group ( p  < 0.0001 for both comparisons).

The number of thrombocytes was statistically the same among all groups ( p  > 0.05).

Data related to blood cells are summarized in Table  1 .

Descriptive parameters of blood cells

Blood cells in NC and ITC groups showed a completely normal appearance. In IT group, signs of severe acute inflammation and septicemia were present: toxic heterophils with vacuolated cytoplasm were abundant, and a severe left shift with toxic myelocytes, metamyelocytes and band heterophils was detected in blood smears. Polychromatophilic erythrocytes were relatively more frequent in IT group than in ITF birds. The severity of changes was generally lower in ITF group, as shown by the presence of some band heterophils and heterophils with only mildly vacuolated cytoplasm. Red blood cells were normal. Figure  3 represents some of these changes in IT and ITF groups.

Figure 3. Representative photomicrographs of birds in the intratracheal infection group (IT) (A and B) and the intratracheal infection with florfenicol administration group (ITF) (C and D). Short thin arrow: toxic heterophil with vacuolated cytoplasm; long thin arrow: toxic metamyelocyte, with vacuoles and dark toxic granules in the cytoplasm; star: polychromatophilic erythrocyte; thick arrow: toxic myelocyte, with vacuoles and dark toxic granules in the cytoplasm; curved arrow: heterophil with mildly vacuolated cytoplasm; #: normal monocytes. Giemsa staining; magnification: 1000×

Histopathological findings of liver and lung

Lungs and livers of birds in NC and ITC groups did not show any lesions and appeared normal on histopathological examination. Infiltration of inflammatory cells, necrotic foci, accumulation of eosinophilic substances in parabronchi and intravascular fibrin thrombi were the most prominent lesions observed in lungs of birds in IT group; hemorrhage was not detected in lungs of these birds. Except for congestion, the severity of lesions was lower in birds of ITF group than in IT group, although the only parameter with significantly lower scores was the accumulation of eosinophilic substances in parabronchi ( p  < 0.05).

Fatty change, intravascular fibrin thrombi, perihepatitis, accumulation of heterophils around portal areas and congestion were detected in livers of birds in IT group. The only parameter with significantly lower scores in ITF group compared to IT birds was the accumulation of heterophils around portal areas ( p  < 0.05) (Table  2 ).

Selected lesions in lungs and livers of birds are shown in Fig.  4 .

Figure 4. Representative photomicrographs of liver (A) and lung (B) of birds experimentally infected with E. coli by the intratracheal route (IT group). Long arrow: perihepatitis; #: accumulation of lymphatic cells; star: accumulation of eosinophilic substances in parabronchi; short arrow: intravascular thrombus. Hematoxylin and eosin staining

Histopathological findings of immune organs

The spleen, bursa of Fabricius and thymus of birds in NC and ITC groups were normal, without any considerable lesions. Among the three immune organs examined histopathologically, the spleen was the most affected organ in birds of IT group, where almost all of the assayed parameters showed a median score of 3 (the highest severity score). Congestion, hemorrhage, intravascular fibrin thrombi, foci of heterophil accumulation, depletion of lymphoid cells in the white pulp and focal areas of necrosis were prominent in the spleens of IT birds. The scores of all these parameters were statistically lower in ITF birds than in IT group ( p  < 0.05) and returned to the normal values of NC group ( p  > 0.05). Moreover, birds in ITF group showed hyperplasia of the white pulp, which was not observed in any other group.

Congestion, intravascular fibrin thrombi, heterophil accumulation, depletion of lymphoid tissues and edema were detected in thymi of birds in IT group. Thymi of birds in ITF group showed normal structural features without detectable lesions in histopathological examination.

Depletion of lymphoid cells, cyst formation, interfollicular edema and distended lymphatic vessels were observed in the bursa of Fabricius of birds in IT group. Except for interfollicular edema, the bursae of birds in ITF group looked almost normal on histopathological evaluation.

Figure  5 shows selected lesions in lymphoid organs of birds in IT group.

Figure 5. Representative photomicrographs of spleen (A), thymus (B) and bursa of Fabricius (C) of birds experimentally infected with E. coli by the intratracheal route (IT group). Long arrow: focal necrosis; #: depletion of lymphatic cells; *: interfollicular edema. Hematoxylin and eosin staining

Table 3 summarizes the scores of histopathological findings in different groups.

E. coli count in liver

As shown in Fig.  6 , no E. coli growth was observed in liver samples collected from the control groups (NC and ITC). Incubation of liver samples from both infected groups (IT and ITF) resulted in E. coli growth. The number of colony-forming units (CFUs) of E. coli in liver samples of ITF group was only slightly lower than in IT birds ( p  > 0.05).

Figure 6. E. coli count (mean and SD) in liver samples of birds in different groups. NC: normal control (normal birds with no specific treatment); ITC: intratracheal infection control (birds received sterile medium by intratracheal route); IT: intratracheal infection group (birds received bacterial suspension inoculated by intratracheal route); ITF: intratracheal infection with florfenicol administration group (in addition to being infected, birds were treated with florfenicol). Values in columns without a common letter are significantly different at p < 0.05
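The liver counts reported here rest on standard plate-count arithmetic. As a minimal illustrative sketch (the study's exact plating protocol is not given in this section, so every parameter value below is an assumption, not the study's data), CFU per gram of tissue can be back-calculated from a colony count as follows:

```python
import math

def cfu_per_gram(colonies, dilution_factor, volume_plated_ml,
                 sample_mass_g, diluent_volume_ml):
    """Back-calculate CFU per gram of tissue from a spread-plate count.

    `colonies` were counted on a plate of a 1/dilution_factor dilution,
    with `volume_plated_ml` spread per plate; the tissue sample of
    `sample_mass_g` was homogenised in `diluent_volume_ml` of diluent.
    """
    cfu_per_ml = colonies * dilution_factor / volume_plated_ml
    return cfu_per_ml * diluent_volume_ml / sample_mass_g

# Hypothetical example: 42 colonies on a 10^-3 dilution plate, 0.1 mL plated,
# 1 g of liver homogenised in 9 mL of diluent.
count = cfu_per_gram(42, 10**3, 0.1, 1.0, 9.0)  # ~3.78 million CFU/g
log10_count = math.log10(count)
```

Because CFU counts are roughly log-normally distributed, group comparisons such as IT versus ITF are usually made on log10-transformed values rather than raw counts.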

Discussion

This study focused on selected responses of chickens experimentally infected with APEC via the intratracheal route, with or without florfenicol treatment.

In acute or chronic inflammatory conditions, total protein may increase due to an elevated globulin fraction, while albumin concentrations often decrease; the combined effect of these changes is a decrease in the A/G ratio [ 15 ]. Consistently, in the present study, hyperproteinemia and a decreased A/G ratio were observed in birds of IT group, driven by increased serum globulins. Hyperglobulinemia in chickens with colibacillosis has been reported by other investigators [ 5 ,  20 ]. As stated above, birds in ITF group showed serum globulin levels and A/G ratios statistically similar to those of the control groups, which can be related to a suppressed inflammatory condition following florfenicol administration.
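The arithmetic behind these parameters is simple. As an illustrative sketch (the numbers below are invented for demonstration, not the study's measurements), globulin is conventionally estimated by difference and the A/G ratio follows directly:

```python
def globulin_and_ag_ratio(total_protein, albumin):
    """Estimate serum globulin by difference and compute the A/G ratio.

    Globulin is conventionally taken as total protein minus albumin, so
    hyperglobulinemia raises total protein while depressing the A/G ratio
    even when albumin itself is unchanged.
    """
    globulin = total_protein - albumin
    return globulin, albumin / globulin

# Invented values (g/dL), for illustration only:
g_normal, ag_normal = globulin_and_ag_ratio(4.0, 2.0)      # A/G = 1.0
g_inflamed, ag_inflamed = globulin_and_ag_ratio(5.0, 2.0)  # A/G ≈ 0.67
```

This is why the IT birds' pattern of hyperproteinemia with unchanged albumin necessarily implies both hyperglobulinemia and a reduced A/G ratio.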

Usman et al. [ 23 ] observed that the serum concentration of IL-6 significantly increased in chickens inoculated with O78:K80 E. coli via the intranasal route, three days post-inoculation. In a study by Elnagar et al. [ 4 ], broiler chickens orally infected with E. coli O78, O26, O55 or O44 showed increased mRNA expression of IL-6 in ileal tissue two days post-infection. Conversely, in our study the serum level of IL-6 was not significantly changed in IT birds. It is worth mentioning that in the study by Elnagar et al., the level of increase in IL-6 mRNA expression differed between E. coli strains. Therefore, the observed discrepancy might be explained by differences in the E. coli strain used, as well as in the time of sampling, inoculation route and type of sample.

We observed that ITF birds showed significantly lower levels of IL-6 as compared to IT group. The suppressive effect of florfenicol on serum IL-6 levels has also been reported in mice challenged with LPS [ 27 ]. These researchers also showed that florfenicol inhibits the LPS-induced translocation of nuclear factor-κB (NF-κB) from the cytoplasm into the nucleus in RAW 264.7 macrophages, and therefore suggested that the effects of florfenicol on early cytokine responses may be due to blocking of the NF-κB pathway.

Interleukin 6 is a multifunctional cytokine in chickens with major roles in immune responses, including activation of B and T lymphocytes and stimulation of macrophage production [ 25 ]. As an important cytokine in innate immune responses, IL-6 alerts the immune system to the presence of a pathogen; however, inappropriate overproduction of this molecule may also be damaging [ 4 ]. On the other hand, suppressed production of this cytokine may facilitate the spread of infection, as shown for  Salmonella gallinarum  [ 10 ]. Therefore, the suppressive effect of florfenicol on the level of this cytokine in chickens with colibacillosis should be interpreted conservatively.

It is well established that IL-10 is an inducible feedback regulator of the immune response in chickens and acts as an anti-inflammatory cytokine [ 26 ]. IL-10 mRNA expression has been reported to decrease in ileal tissue of chickens with colibacillosis [ 4 ]. Consistently, in the present study we observed decreased serum levels of IL-10 in birds of IT group. These birds also showed decreased serum levels of IFN-γ. It has been shown that administration of IFN-γ to chickens with colibacillosis enhances immune responses against the disease, although it does not mitigate the development of air sac lesions [ 9 ]. Therefore, the decreased serum level of IFN-γ may negatively affect the immune responses of chickens with colibacillosis.

In a study by Zhang et al. [ 27 ], florfenicol prolonged IL-10 expression in the serum of mice challenged with LPS, while having no effect on IL-10 production by LPS-induced RAW 264.7 cells in vitro. Administration of florfenicol to SPF chicks from the age of 3 days for six consecutive days has been associated with decreased serum levels of IFN-γ compared to controls in the early stages of drug withdrawal [ 16 ]. In contrast, in the present study, florfenicol administration had no effect on the levels of IL-10 or IFN-γ in chickens with colibacillosis. Differences in the nature and conditions of the mentioned studies may account for this discrepancy.

In the present study, florfenicol improved the hematological profile of birds with colibacillosis, as shown by milder signs of inflammation, toxemia and left shift in the WBCs. Leukocytosis and monocytosis were also ameliorated by florfenicol administration. Moreover, florfenicol decreased the severity of some of the lesions observed in the lung (accumulation of eosinophilic substances in parabronchi) and liver (congestion and heterophil accumulation in portal areas). Although not addressed in the present study, these cellular-level effects of florfenicol may improve organ function and subsequently expedite recovery and enhance the health status of the bird. Regarding the lymphoid organs, florfenicol administration resulted in a remarkable decrease of lesion severity, especially in the spleen, which was the most affected lymphoid organ in IT birds. Depletion of lymphoid tissue was observed in the spleen, thymus and bursa of Fabricius of birds in IT group; lymphocytic depletion of the bursa and thymus of chickens infected with E. coli has been previously reported by Nakamura et al. [ 17 ]. Interestingly, florfenicol administration protected these organs against lymphoid tissue depletion and even resulted in mild hyperplasia of the splenic white pulp in chickens of ITF group. Consistently, in a study by Lis et al. [ 14 ], florfenicol increased the percentage and absolute number of T lymphocytes in mesenteric lymph nodes of mice.

An important question is whether the observed effects of florfenicol in this study are related to its antibacterial effect (reversal of lesions and changes after bacterial clearance) and/or to plausible immunomodulatory effects. As confirmed by the bacterial counts performed on liver samples, birds in both infected groups were still afflicted with systemic colibacillosis. Interestingly, although the bacteria were sensitive to florfenicol (based on the sensitivity test), administration of the drug was not associated with a drastic decrease in bacterial load at the time of sampling. This may be because florfenicol administration in this study was continued for a relatively short period (3 days) before sampling, and because a very high bacterial load was used for inoculation, as is routine in studies using experimental models of colibacillosis. Therefore, there is a high chance that the observed beneficial effects are related to the immunomodulatory effects of the drug, although we cannot completely rule out the possibility that its antibacterial effects were still involved, since we could not count bacteria in all afflicted organs.

In conclusion, experimental E. coli infection of chickens by the intratracheal route results in remarkable inflammatory responses associated with changes in serum cytokine levels (IL-10 and IFN-γ) as well as in biochemical (decreased A/G ratio) and hematological (severe left shift with presence of toxic myelocytes, leukocytosis and monocytosis) parameters. Histopathological lesions in lymphoid organs (especially the spleen) were also prominent in these birds. Florfenicol administration ameliorated inflammatory responses and improved many of the lesions even though it had not yet cleared the bacteria. These anti-inflammatory and beneficial effects of florfenicol should be considered in the pharmacotherapy decision-making process and might help clinicians select a more effective antimicrobial agent among the options to which bacteria may be susceptible. Of course, the effects of florfenicol (suppressive or stimulatory) on other response parameters that play a role in host defense mechanisms and in the outcome of chickens with colibacillosis need to be clarified in future studies.

Availability of data and materials

The data that support the findings of this study are not openly available due to reasons of sensitivity and are available from the corresponding author upon reasonable request.

Alber A, Stevens MP, Vervelde L. The bird's immune response to avian pathogenic Escherichia coli. Avian Pathol. 2021;50(5):382–91. https://doi.org/10.1080/03079457.2021.1873246 .


Adzitey F, Yildiz F. Incidence and antimicrobial susceptibility of Escherichia coli isolated from beef (meat muscle, liver and kidney) samples in Wa Abattoir, Ghana. Cogent Food Agric. 2020;6(1). https://doi.org/10.1080/23311932.2020.1718269 .

Bancroft JD, Layton C. The haematoxylins and eosin. In: Suvarna SK, Layton C, Bancroft JD, editors. Bancroft's Theory and Practice of Histological Techniques. 8th ed. Elsevier; 2019. p. 126–38.

Elnagar R, Elkenany R, Younis G. Interleukin gene expression in broiler chickens infected by different Escherichia coli serotypes. Vet World. 2021;14(10):2727–34. https://doi.org/10.14202/vetworld.2021.2727-2734 .


El-Tahawy AO, Said AA, Shams GA, Hassan HM, Hassan AM, Amer SA, El-Nabtity SM. Evaluation of cefquinome’s efficacy in controlling avian colibacillosis and detection of its residues using high performance liquid chromatography (HPLC). Saudi J Biol Sci. 2022;29(5):3502–10. https://doi.org/10.1016/j.sjbs.2022.02.029 .

European Parliament. Council of the European Union Directive 2010/63/EU of Sep 22, 2010, on the protection of animals used for scientific purposes, Document 32010L0063. Off J Eur Union. 2010;L276:33–79.


Han C, Wang X, Zhang D, Wei Y, Cui Y, Shi W, Bao Y. Synergistic use of florfenicol and Salvia miltiorrhiza polysaccharide can enhance immune responses in broilers. Ecotoxicol Environ Saf. 2021;1(210):111825.


Hassanin O, Abdallah F, Awad A. Effects of florfenicol on the immune responses and the interferon-inducible genes in broiler chickens under the impact of E. coli infection. Vet Res Commun. 2014;38(1):51–8. https://doi.org/10.1007/s11259-013-9585-7 .

Janardhana V, Ford ME, Bruce MP, Broadway MM, O'Neil TE, Karpala AJ, Asif M, Browning GF, Tivendale KA, Noormohammadi AH, Lowenthal JW, Bean AG. IFN-gamma enhances immune responses to E. coli infection in the chicken. J Interferon Cytokine Res. 2007;27(11):937–46. https://doi.org/10.1089/jir.2007.0020 . PMID: 18052728.

Kaiser P, Rothwell L, Galyov EE, Barrow PA, Burnside J, Wigley P. Differential cytokine expression in avian cells in response to invasion by Salmonella typhimurium, Salmonella enteritidis and Salmonella gallinarum. Microbiology (Reading). 2000;146(Pt 12):3217–26. https://doi.org/10.1099/00221287-146-12-3217 .

Kathayat D, Lokesh D, Ranjit S, Rajashekara G. Avian Pathogenic Escherichia coli (APEC): An Overview of Virulence and Pathogenesis Factors, Zoonotic Potential, and Control Strategies. Pathogens. 2021;10(4):467. https://doi.org/10.3390/pathogens10040467 .

Khalifeh MS, Amawi MM, Abu-Basha EA, Yonis IB. Assessment of humoral and cellular-mediated immune response in chickens treated with tilmicosin, florfenicol, or enrofloxacin at the time of Newcastle disease vaccination. Poult Sci. 2009;88(10):2118–24. https://doi.org/10.3382/ps.2009-00215 . PMID: 19762865.

Kromann S, Olsen RH, Bojesen AM, Jensen HE, Thøfner I. Development of an aerogenous Escherichia coli infection model in adult broiler breeders. Sci Rep. 2021;11(1):19556.

Lis M, Szczypka M, Suszko A, Switała M, Obmińska-Mrukowicz B. The effects of florfenicol on lymphocyte subsets and humoral immune response in mice. Pol J Vet Sci. 2011;14(2):191–8. https://doi.org/10.2478/v10181-011-0029-4 .

Lumeij JT. Avian clinical biochemistry. In: Kaneko JJ, Harvey JW, Bruss ML, editors. Clinical Biochemistry of Domestic Animals. 6th ed. Academic Press; 2008. p. 839–72. https://doi.org/10.1016/B978-0-12-370491-7.00030-1 .

Meng FL, Liu KH, Shen YS, Li PX, Wang TL, Zhao YR, Liu SD, Liu MD, Gang WA. Florfenicol can inhibit chick growth and lead to immunosuppression1. J Integr Agric. 2023.  https://doi.org/10.1016/j.jia.2023.11.040 .

Nakamura K, Imada Y, Maeda M. Lymphocytic depletion of bursa of Fabricius and thymus in chickens inoculated with Escherichia coli. Vet Pathol. 1986;23(6):712–7. https://doi.org/10.1177/030098588602300610 .

Nolan LK, Vaillancourt JP, Barbieri NL, Logue CM. “Colibacillosis,” In: Swayne DE, ed Diseases of Poultry. Hoboken, NJ: Wiley-Blackwell (2020). p. 770–830. https://doi.org/10.1002/9781119371199.ch18 .

Saberi A, Mosleh N, Shomali T, Naziri Z. A Comparative Study on Two Infection Models of Colibacillosis in Broilers: Clinical Features, Pathogenesis, and Response to Therapy. Avian Dis. 2023;67(3):261–8.

Sharma V, Jakhar KK, Nehra V, Kumar S. Biochemical studies in experimentally Escherichia coli infected broiler chicken supplemented with neem (Azadirachta indica) leaf extract. Vet World. 2015;8(11):1340–5. https://doi.org/10.14202/vetworld.2015.1340-1345 .

Thrall MA, Weiser G, Allison RW, Campbell TW. Veterinary Hematology and Clinical Chemistry. John Wiley & Sons; 2012.

Trif E, Cerbu C, Olah D, Zăblău SD, Spînu M, Potârniche AV, Pall E, Brudașcă F. Old Antibiotics Can Learn New Ways: A Systematic Review of Florfenicol Use in Veterinary Medicine and Future Perspectives Using Nanotechnology. Animals (Basel). 2023;13(10):1695. https://doi.org/10.3390/ani13101695 .


Usman S, Anjum A, Usman M, Imran MS, Ali M, Moustafa M, et al. Antibiotic resistance pattern and pathological features of avian pathogenic Escherichia coli O78:K80 in chickens. Braz J Biol. 2024;84:e257179.

Voigt GL, Swist SL. Hematology Techniques and Concepts for Veterinary Technicians. John Wiley & Sons; 2011.

Wigley P, Kaiser P. Avian cytokines in health and disease. Brazilian Journal of Poultry Science. 2003;5:1–4.

Wu Z, Hu T, Rothwell L, Vervelde L, Kaiser P, Boulton K, Nolan MJ, Tomley FM, Blake DP, Hume DA. Analysis of the function of IL-10 in chickens using specific neutralising antibodies and a sensitive capture ELISA. Dev Comp Immunol. 2016;63:206–12. https://doi.org/10.1016/j.dci.2016.04.016 .

Zhang X, Song Y, Ci X, An N, Fan J, Cui J, Deng X. Effects of florfenicol on early cytokine responses and survival in murine endotoxemia. Int Immunopharmacol. 2008;8(7):982–8. https://doi.org/10.1016/j.intimp.2008.02.015 .


Acknowledgements

This work was supported by the Shiraz University under Grant number 1GCB3M163773.

Author information

Authors and Affiliations

Avian Diseases Research Center, Department of Clinical Sciences, School of Veterinary Medicine, Shiraz University, Shiraz, Iran

Zahra Ghahramani & Najmeh Mosleh

Division of Pharmacology and Toxicology, Department of Basic Sciences, School of Veterinary Medicine, Shiraz University, P.O. Box 71441-69155, Shiraz, Iran

Tahoora Shomali

Department of Clinical Sciences, School of Veterinary Medicine, Shiraz University, Shiraz, Iran

Saeed Nazifi

Department of Pathobiology, School of Veterinary Medicine, Shiraz University, Shiraz, Iran

Azizollah Khodakaram-Tafti


Contributions

T. Sh and N. Mosleh conceptualization, data analysis, supervision, T. Sh prepared the draft, Z. Gh, S. Nazifi and A. Kh. T data acquisition and methodology, all authors have read and approved the manuscript.

Corresponding author

Correspondence to Tahoora Shomali .

Ethics declarations

Ethics approval and consent to participate

All procedures used in this study were approved by the Shiraz University, School of Veterinary Medicine ethical committee and were compatible with Directive 2010/63/EU on the protection of animals used for scientific purposes.

Consent for publication

Not applicable.

Competing interests

The authors report there are no competing interests to declare.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Ghahramani, Z., Mosleh, N., Shomali, T. et al. A study on selected responses and immune structures of broiler chickens with experimental colibacillosis with or without florfenicol administration. BMC Vet Res 20 , 371 (2024). https://doi.org/10.1186/s12917-024-04232-3


Received : 25 May 2024

Accepted : 12 August 2024

Published : 19 August 2024

DOI : https://doi.org/10.1186/s12917-024-04232-3


Keywords

  • Colibacillosis
  • Florfenicol
  • Lymphoid organs

BMC Veterinary Research

ISSN: 1746-6148


  • Systematic Review
  • Open access
  • Published: 16 August 2024

Emergency infection prevention and control training in fragile, conflict-affected or vulnerable settings: a scoping review

  • Julii Brainard 1 ,
  • Isabel Catalina Swindells 2 ,
  • Joanna Wild 3 ,
  • Charlotte Christiane Hammer 4 ,
  • Emilio Hornsey 5 ,
  • Hibak Osman Mahamed 6 &
  • Victoria Willet 6  

BMC Health Services Research volume  24 , Article number:  937 (2024)


It is uncertain which training methods for infection prevention and control are best when an infectious disease threat is active or imminent in especially vulnerable or resource-scarce settings.

A scoping review was undertaken to find and summarise relevant information about the training modalities, replicability and effectiveness of IPC training programmes for clinical staff, as reported in multiple study designs. Eligible settings were conflict-affected or in countries classified as low-income or lower-middle-income (World Bank 2022 classifications). Search terms for LILACS and Scopus were developed with input from an expert working group. Initially found articles were dual-screened independently, and data were extracted on infection threat, training outcomes, needs assessment and teaching modalities. Backwards and forwards citation searches were done to find additional studies. A narrative summary describes outcomes and aspects of the training programmes. A customised quality assessment tool was developed to indicate whether each study could inform the development of specific future training programmes in relevant vulnerable settings, based on six questions about replicability and eight questions about other biases.

Twenty-nine studies were included; almost all ( n  = 27) used a pre-post design, and two were trials. The information within the included studies to enable replicability was low (average score 3.7/6). Nearly all studies reported significant improvement in outcomes, suggesting that the predominant study design (pre-post) is inadequate for assessing improvement with low bias, that any and all such training is beneficial, or that publication bias prevented reporting of less successful interventions and thus an informative overview.

It seems likely that many possible training formats and methods can lead to improved worker knowledge, skills and/or practice in infection prevention and control. Definitive evidence in favour of any specific training format or method is hard to demonstrate due to incomplete descriptions, lack of documentation about unsuccessful training, and few least-biased study designs (experimental trials). Our results suggest that there is a significant opportunity to design experiments that could give insights for or against specific training methods. "Sleeping" protocols for randomised controlled trials could be developed and then applied quickly when relevant future events arise, with evaluation of outcomes such as knowledge, practices, skills, confidence and awareness.

Peer Review reports

A survey of health and care workers in low-income or lower-middle-income countries in 2017–18 suggested that infection prevention and control (IPC) training while in post was unusual in many countries (reported in 54% of respondent countries [ 1 ]). Moreover, such training may only happen when a defined infectious threat is already present or likely to arrive imminently. A highly responsive strategy for developing and delivering IPC training creates the opportunity to customise training formats, methods and curricula for local workforce contexts, with regard to very specific pathogens and transmission pathways. However, the need to deliver training urgently, with little advance notice of the specific pathogen or local context, means that such training may be designed and delivered hurriedly, with minimal setting-specific needs assessment and little evaluation of effectiveness.

As part of past pandemic recovery and future pandemic preparedness, it is useful to collate evidence about which IPC training methods have been applied in specific settings or contexts. Especially useful would be evidence that could inform ongoing development of best-practice training delivery guidelines in settings that may be described as fragile, conflict-affected or otherwise vulnerable (FCV). Best-quality evidence may be defined with regard to completeness of reporting (whether the training methods are replicable) as well as evidence of effectiveness (desired outcomes). We searched Google Scholar and PROSPERO in August 2023 for completed or registered systematic or scoping reviews addressing emergency IPC training in vulnerable settings. The most similar and comprehensive existing systematic review (Nayahangan et al. 2021 [ 2 ]) described medical and/or nursing training (delivered for any clinical training purpose, not just IPC) during viral epidemics only. The search date for the Nayahangan et al. review was April 2020, more than 3 years before our own study commenced; systematic literature reviews may be considered 'out of date' two years after their most recent search date [ 3 ]. Nayahangan et al. included clinical settings in any country and was not confined to training delivered in emergency or urgent contexts (readiness or response phases [ 4 , 5 ]). Nayahangan et al. performed quality assessment using the Educational Interventions Checklist [ 6 ], which focuses on replicability and mapping of reported teaching methods in the primary research but only indirectly addresses effectiveness. They concluded that previous studies had used a variety of training methods and settings, but that few training methods had been related to specific patient or other epidemic outcomes. Another somewhat similar previous systematic review, Barrera-Cancedda et al. [ 7 ], described and assessed IPC training strategies for nurses in sub-Saharan Africa. Most of the strategies they found were applied during "business as usual" conditions rather than the readiness or response phases of an outbreak or epidemic presenting an imminent threat. Their quality assessment tools assessed bias in effectiveness rather than replicability, their focus was narrowly on nurses in a specific geographic region, and their conclusions arose from evidence that went far beyond staff training methods. Barrera-Cancedda et al. concluded that creating good future guidelines for evidence-based practice required additional primary research undertaken from an implementation science-specific perspective.

A challenge in emergency IPC training that became manifest during the Covid-19 pandemic is inherent to other emerging diseases: early in an outbreak there is often uncertainty about the best IPC practices, which may vary according to predominant disease transmission pathway(s) that are not yet well understood. There is therefore merit in considering evidence according to which disease(s) are being prepared for.

This study aimed to provide an updated evidence summary, in a scoping review design, about IPC training formats and their apparent effectiveness. We collected and summarised evidence about IPC training formats and methods as delivered in FCV settings when there was an active infectious disease threat present (response phase) or infection arrival was fairly imminent (expected within 6 months; readiness phase) [ 4 , 5 ]. We undertook a scoping review of IPC training programmes reported in the peer-reviewed scientific literature to summarise which training formats or methods had been described in FCV settings, and to describe how often such training was associated with success in these settings. Key effectiveness outcomes were knowledge, skills, compliance, case counts and case mortality, while training delivery was summarised according to key features such as format, duration and delivery mode.

The PROSPERO registration number is CRD42023472400. We originally planned to undertake a systematic review but later realised that our research question was better suited to a scoping review format, in which evidence is summarised narratively to create a comprehensive overview rather than to evaluate effectiveness. There were two other notable deviations from protocol: we did not use the Covidence platform, and we developed and applied a customised quality assessment (QA) checklist instead of the originally listed QA instruments. This article is one of several outputs arising from the same protocol.

Training programmes had to take place in FCV settings or be for staff about to be deployed to FCV settings. Fragile or vulnerable settings were defined as being in countries designated as low income or lower-middle income by the World Bank 2022 classification [ 8 ]. Conflict-affected settings were determined using reader judgement for individual studies and had to feature a high threat of armed violence or civil unrest concurrent with the training and care delivery. Participants had to be health care professionals (HCPs), social care staff, or student or trainee HCPs or social care staff working in an FCV setting. If in doubt about whether participants qualified, we deferred to World Health Organization occupational definitions [ 9 ]. Voluntary carers, such as family members or community hygiene champions, were excluded as target participants. Eligible interventions could be described as training or education related to any aspect of IPC.

Intervention

The training programme could be any training or education delivered in a response phase (when there was a concurrently present infectious disease threat) or in the readiness phase [ 5 ], when there was high risk that the infectious threat would become present in the clinical environment within six months, such as soon after Covid-19 was declared a public health emergency of international concern in January 2020.

Comparators were either the same cohort measured at baseline or a contemporaneous cohort in the same setting who did not receive IPC training.

Effectiveness outcomes

Changes in individual knowledge, skills, adherence (compliance or practice), case counts or infection-related mortality were primary effectiveness outcomes. Knowledge, skills and adherence were chosen because preliminary searches suggested they were commonly reported outcomes in the likely literature, and most are immediate benefits that could result as soon as training was completed. We also included case incidence and infection-related mortality as primary outcomes because preliminary literature searches showed these were often the only specific outcomes reported after IPC training. Secondary outcomes (data collected only from articles with at least one primary outcome) were attitudes, acceptability of the training, self-efficacy, confidence, trust in IPC, awareness, index of suspicion, ratings for value or relevance of the training, objectives of the training, lessons learned about training needs, and recommendations about training needs to be addressed in similar subsequent training programmes.

Outcomes could be objectively assessed or self-assessed. We wanted to extract outcomes that were most comparable between studies (not adjusted for heterogeneous covariates) and objectively assessed rather than self-reported, where possible. Hence, where both objectively assessed and self-assessed outcomes were available, the objectively assessed outcomes were extracted and are reported; otherwise, self-reported outcomes were used. Similarly, we extracted and report unadjusted outcomes where available; adjusted results from post-processing (such as regression models) were extracted only if no unadjusted results were reported.
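The extraction preference just described can be sketched as a simple ranking rule. This is an illustration only; the field names and data structure are hypothetical, not taken from the review's extraction forms:

```python
# Illustrative sketch of the outcome-extraction preference order:
# objectively assessed before self-reported, unadjusted before adjusted.
# Field names are hypothetical, for demonstration only.

def pick_outcome(candidates):
    """Pick the preferred outcome record from those a study reports.

    Each candidate is a dict with boolean flags 'objective'
    (objectively assessed vs self-reported) and 'adjusted'
    (post-processed, e.g. via a regression model, vs raw).
    """
    def rank(c):
        # Lower rank = more preferred: objective first, then unadjusted.
        return (0 if c["objective"] else 1, 1 if c["adjusted"] else 0)
    return min(candidates, key=rank)

study = [
    {"name": "self-reported compliance", "objective": False, "adjusted": False},
    {"name": "observed compliance (adjusted)", "objective": True, "adjusted": True},
    {"name": "observed compliance (raw)", "objective": True, "adjusted": False},
]
print(pick_outcome(study)["name"])  # observed compliance (raw)
```

The ordering assumes objectivity takes precedence over adjustment status, consistent with the sequence of preferences stated in the text.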

Inventory and description of training methods

Specific aspects of how training was delivered were key to understanding the potential of each training programme to achieve replicable results elsewhere. We used an iterative process, with an expert working group giving advice, to develop a list of training features such as setting, duration, target participants and programme design (see list below). These categorisations are not presented as definitive; rather, they were pragmatically determined from what information could be gathered from the eligible studies and what directly informs how replicable each education programme was and how generalisable its results might be in other settings or with other target participants. We extracted information from the studies to categorise the training they described according to the features below. Multiple answers were possible for many of these features, and “Unclear” or “Mixture” were possible answers, too.

Where (location) : Off-site without real patients; in house but not while caring for patients; on the job training (during patient care).

Length of the training session(s): such as 1 h on one day, or 6 sessions over 8 weeks, etc.

When (timing with respect to possible threat) : Pre-deployment to clinical environment; in post or as continuing professional development.

Mode (of delivery) : Face to face; blended (a mix of face to face and online) or hybrid (face to face with the opportunity for some participants to join remotely); digital only (e.g., digital resources on a USB stick or delivered via an online platform, either synchronously or asynchronously).

Broad occupational category receiving the training : Clinical frontline staff; trainers who were expected to directly train others; programme overseers or senior managers.

Specific occupations receiving the training : Nurses, doctors/physicians, others.

Learning group size : Individual or group.

Format : Workshops; courses; seminars/webinars; mentoring/shadowing; e-learning; e-resources, other.

Methods : Didactic instruction/lectures/audio-visual presentations; demonstrations/modelling; discussion/debate; case studies or scenarios; role play or clinical practice simulations; exams or formative assessment; hands-on practice/experience; games; field trips or site visits; virtual reality or immersive learning; repeated training; shadowing; other.
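For illustration, the feature list above could be recorded as an extraction schema along these lines. This is a sketch using our own abbreviated labels; the review recorded such fields on spreadsheets, with multiple, "Unclear" or "Mixture" answers allowed for many features:

```python
# Hypothetical encoding of the training-feature extraction schema
# described above; category labels are abbreviated from the text.
EXTRACTION_SCHEMA = {
    "location": ["off-site", "in-house not during care", "on the job"],
    "duration": "free text, e.g. '1 h on one day', '6 sessions over 8 weeks'",
    "timing": ["pre-deployment", "in post / CPD"],
    "mode": ["face to face", "blended or hybrid", "digital only"],
    "occupational_category": ["frontline staff", "trainers of others",
                              "overseers / senior managers"],
    "occupations": ["nurses", "doctors/physicians", "others"],
    "group_size": ["individual", "group"],
    "format": ["workshop", "course", "seminar/webinar", "mentoring/shadowing",
               "e-learning", "e-resources", "other"],
    "methods": ["didactic", "demonstration", "discussion", "case studies",
                "role play/simulation", "formative assessment", "hands-on",
                "games", "site visits", "virtual reality", "repeated training",
                "shadowing", "other"],
}
# Many fields allow multiple answers, plus "Unclear" or "Mixture".
```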

Additional inclusion and exclusion criteria

We included scientific studies with concurrent comparison groups (CCTs or RCTs) where post-training outcomes were reported for both arms, and pre-post studies where both baseline and post-training measurements of a primary effectiveness outcome were reported. Clinical cases, case reports, cross-sectional studies, letters to the editor, editorials, commentaries, perspectives, technical notes and review summaries were excluded unless they reported baseline and post-training eligible effectiveness outcomes. Studies must have been published in 2000 or later. Infectious biological entities could be bacteria, viruses, protozoa or fungi, but not complex multicellular organisms (such as mites or lice).

Studies could be published in any language that members of the team could read or translate to coherent English using Google Translate. Training in infection prevention and control had to be applicable to a clinical or social care environment for humans. Non-residential care settings (such as daily childcare facilities) were excluded. Studies about controlling infection risks from or to animals or risk reduction in non-clinical environments (such as removing mosquito breeding sites) were excluded.

We wanted to focus on IPC training that related to individual action, could result in immediate benefits, and applied in clinical rather than community environments. For this reason, we excluded interventions or outcomes that related to: forms of patient care (e.g., anti-viral treatment) that might hasten the end of the infectious period; vaccination programmes; surveillance; availability of personal protective equipment (PPE) or other resources that reflect institutional will and opportunity as much as any individual action; and testing strategies, protocols or actions to speed up test results or screen patients for infection. Also excluded were training programmes in environmental management outside the clinical/care environment, with an exception for waste generated within the clinic and managed on site, which might include some handling and disposal decisions away from the clinic/care location.

Eligible studies had to report at least one of our primary outcomes so that we could summarise the evidence base about which training methods were linked to evidence of effectiveness. To focus on the response and readiness phases of emergencies, we excluded studies where the primary outcome was only measured more than 12 months after training started (typically quality improvement reports).

MEDLINE, Scopus and LILACS were searched on 9 October 2023 with the following search phrase (shown in MEDLINE syntax; the three concept groups were combined with AND and the syntax was adapted for each database):

(“infection-control”[Title/Abstract] OR “transmission”[Title/Abstract] OR “prevent-infectio*”[Title/Abstract])

AND (“emergency”[Title/Abstract] OR “epidemic”[Title/Abstract] OR “outbreak”[Title/Abstract])

AND (“training”[Title/Abstract] OR “educat*”[Title/Abstract] OR “teach*”[Title/Abstract])

Included studies in a recent and highly relevant systematic review [ 2 ] were also screened. Initially included studies from those search strategy steps were then subjected to forward and backward citation searches to look for additional primary studies.

After deduplication, two authors independently screened all studies found by the search strategy, recording decisions on MS Excel spreadsheets. All studies selected by at least one author underwent full-text review for a final decision about inclusion.

Quality assessment

We assessed quality indicatively, with regard to the usefulness of the studies for informing the development of future IPC training programmes in relevant settings. The focus was on two broad domains: A) how replicable the training programme was, as described; and B) how biased its results were likely to be. Our protocol planned to apply the Cochrane Risk of Bias 1.0 tool (RoB 1.0) for trials and the Newcastle-Ottawa Scale (NOS) for pre-post study designs. However, we realised that neither of these tools captured whether the original research had reported sufficient detail to make the training programme replicable. A further problem is that judgements arising from RoB 1.0 and the NOS would not be strictly comparable, given their different assessment criteria. Other existing quality checklists that we are aware of and that suit trials, cohorts or pre-post designs capture only replicability or only bias in apparent effectiveness (not both), and tend to suit only one study design. Some checklists (e.g., the Cochrane Risk of Bias 2.0 tool [ 10 ] or the Mixed Methods Appraisal Tool [ 11 ]) require more resources to operationalise than we had, or than a scoping review requires. Instead, we devised and applied an indicative quality checklist comprising 14 simple questions, each answered “yes”, “no” or “uncertain” using specific predefined criteria. Our checklist is available as File S 1 . The questions were modified from suggested questions in the USA National Institutes of Health assessment checklist for pre-post designs [ 12 ]. Applying a single quality assessment tool across multiple study designs had the further advantage of facilitating comparability when identifying relative informativeness for future effectiveness evaluation and training programme design.
Each “yes” answer scored 1 point, so the maximum score (for the least biased and most replicable studies) was 14. We interpreted the overall quality assessment results as follows: ≥ 11/14 = most informative; 8–10 = somewhat informative; ≤ 7/14 = least informative. The quality assessment results are reported quantitatively and narratively. Subdomain scores for replicability and other bias (generalisability) are reported separately.
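The scoring and banding rule described above can be expressed compactly. This is a sketch only; the 14 checklist questions themselves are in File S 1:

```python
# Sketch of the quality-assessment scoring used in this review:
# 14 yes/no/uncertain questions, 1 point per "yes" (maximum 14).
# Band thresholds follow the interpretation stated in the text.

def qa_score(answers):
    """Count 'yes' answers across the 14 checklist questions."""
    assert len(answers) == 14, "checklist has exactly 14 questions"
    return sum(1 for a in answers if a == "yes")

def qa_band(score):
    """Map a total score to the informativeness band."""
    if score >= 11:
        return "most informative"
    elif score >= 8:
        return "somewhat informative"
    else:
        return "least informative"

answers = ["yes"] * 9 + ["no"] * 3 + ["uncertain"] * 2
print(qa_band(qa_score(answers)))  # somewhat informative
```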

Data extraction and interpretation (selection and coding)

The following data were extracted: study author, year of publication, study country, study design, sample size in comparator arms, relevant infectious diseases (as identified by the authors), primary outcomes and secondary outcomes. With regard to the training delivered, we also extracted information about any needs assessment undertaken, training objectives, and any statements about lessons learned or what should be addressed in the future design of such programmes or in research. One author extracted data, which were confirmed by a second author. Results are reported quantitatively (counts of studies with any particular training aspect) and narratively for needs assessment, objectives and lessons learned.

To interpret likely usefulness, we prioritise higher scores (for informativeness) but also consider study design, with trials presumed to give less biased results with regard to effectiveness outcomes. We address potential differences that were monitored or observed between knowledge, skills or practices with respect to the training attributes: for instance, whether outcomes were assessed immediately after training (within 1 day) or, ideally, observed and assessed independently at least three weeks later, which would suggest retention of knowledge, skills and/or practices. We also highlight when training applicable to conflict-affected settings was delivered in that same conflict-affected setting or prior to entry to the setting (such as for military personnel deployed overseas).

Figure  1 shows the study selection process. 29 studies were included. Extracted data for each study are in File S 2 . Almost all ( n  = 27) were pre-post designs; 2 were experimental studies [ 13 , 14 ]. Table  1 lists summary information about the included studies. Seven reports described training delivered in single low-income countries and 19 described training in single lower-middle-income countries. Two articles described IPC training for staff in the context of conflict-affected settings, either in the USA prior to military deployment [ 15 ] or in the affected setting during a period of civil unrest (in Haiti in 2010 [ 16 ]). Two studies [ 17 , 18 ] described training using a common core curriculum in multiple African countries (a mix of low and lower middle income). The most represented countries were India (4 studies) and Nigeria (6 studies). Nine studies were about Ebola disease and 14 related to controlling Covid-19; other studies addressed cholera ( n  = 2), antimicrobial resistant organisms ( n  = 3) and tuberculosis ( n  = 1). Clinical environments were most commonly described as hospitals ( n  = 9), while twelve studies described programmes for staff working in multiple types of health care facilities. 21 studies were undertaken in the response phase, two in the readiness phase and six in mixed readiness/response phases. Nurses were the most commonly specified type of health care worker (mentioned in 24 studies). In Table  1 , higher scores for knowledge, attitudes, practices or skills were the better clinical outcomes unless otherwise stated. Some additional outcome information for Patel et al. [ 18 ] and Zafar et al. [ 19 ] is in the original studies but could not be concisely repeated in Table  1 . Most articles reported statistically significant (at p  < 0.05) improvements in outcomes after training. A notable exception is Odusanya et al. [ 20 ], who attributed a lack of improvement after training to very good baseline knowledge, attitudes and practices.

Figure 1. Selection procedure for eligible studies

Outcomes were assessed immediately after training ended in 14 studies; assessment point was unclear in two studies. Other outcome assessments ( n  = 13 studies) took place between 1 week and 6 months after training finished (especially with respect to case counts or mortality). Because almost all studies reported outcome benefits, studies with delayed assessment cannot be said to have achieved greater benefits.

Needs assessment was described in most studies ( n  = 27). For instance, Carlos et al. [ 32 ] stated that “Although briefings for health care workers (HCWs) in Ebola treatment centres have been published, we were unable to locate a course designed to prepare clinicians for imported Ebola virus disease in developing country settings.” Soeters et al. [ 38 ] cited widespread evidence of a high transmission rate to health care workers within Ebola treatment centres to justify the need for IPC training in these settings. Ahmed et al. [ 41 ], Das et al. [ 21 ] and Oji et al. [ 35 ] described expert observers identifying deficiencies in existing IPC practices and developing training based on those observations. Independent observations of training needs were formalised as a cross-sectional survey of dental student IPC knowledge by Etebarian et al. [ 22 ], and by applying a validated IPC checklist in Kabego et al. [ 34 ].

All studies stated specific training objectives and gave at least some information about the specific topics and curriculum. Objectives statements mentioned improvement ( n  = 10 studies), knowledge ( n  = 7), safety ( n  = 6), attitudes ( n  = 3), increasing capacity or skills ( n  = 6) and development ( n  = 1). Examples of other objectives statements were to “teach the basics” [ 41 ] or “to cover the practical essentials” [ 16 ]. Training content and delivery were often highly adapted for local delivery [ 23 , 24 , 25 , 28 , 29 , 32 , 33 , 36 , 38 , 41 ], while in some studies training materials were entirely or mostly derived from published guidance [ 16 , 19 , 34 , 35 , 37 ]. Tsiouris et al. [ 17 ] and Patel et al. [ 18 ] both report that training delivery methods were highly adapted and variable but were developed from the same core course content about Covid-19, in 11 or 22 African countries. Other studies were unclear about how much of their programme was original and how much relied on previously published guidance and recommendations [ 13 , 15 , 20 , 21 , 24 , 26 , 27 , 30 , 31 , 39 , 40 ].

Counts of training locations were: ten off-site; seven on-site but not during patient care; nine were a mix of learning locations; three had unclear locations relative to clinical facility location. Among the 21 studies that described the specific cumulative duration of training sessions, median training duration was 24 h (typically delivered over 3 consecutive days), ranging from about 15 min to 8 full days. Most studies ( n  = 21) described training where it was clear that many or most participants were in post, 3 studies clearly described training being provided prior to deployment, another 5 training programmes had mixed or unclear timing with regard to deployment. Twelve studies described training that was delivered only in person, 9 studies described purely digital delivery, 7 were blended delivery and 1 programme was unclear whether the training was delivered digitally or in person. In terms of IPC roles, all studies included at least some frontline workers. In addition, six studies were explicitly designed to train people who would educate others about IPC, seven studies reported including facility managers or supervisors among the trainees. 23 studies mentioned nurses specifically among the trainees, 17 studies specifically mentioned doctors or physicians. Other professionals mentioned were cleaners, porters, paramedics, midwives, anaesthesiologists, hygienists, housekeeping staff, lab technicians, medical technologists and pharmacists. Almost half ( n  = 14) of studies were group education; purely individual learning was specified in just one study and others ( n  = 14) were unclear or could be either individual or group learning.

Training formats and teaching methods were often described unclearly. Among formats that were described clearly, counts were: workshop ( n  = 10), course (22), seminar or webinar (1), mentoring or shadowing (4), e-learning (13) and inclusion of e-resources (14). Counts of studies using clearly described teaching methods were: didactic (23), demonstrations (17), discussion or debate (8), case studies or scenarios (6), role play or simulations (9), formative assessment (3), hands-on practice (12), site visits (2), repeat or refresher training (5) and shadowing (3). Additional teaching methods described specifically were poster reminders, monitoring (active and passive, as well as observation), reinforcement (updating procedure documents, re-assessing, more training), brainstorming, small group work and other visual aids. Many articles described multiple formats or teaching methods used within the same training programme, hence these categorisations sum to more than the total count of included studies.

Most studies ( n  = 25) provided some commentary that could be interpreted as “lessons learned” about training methods and delivery. At least 6 studies [ 13 , 14 , 20 , 22 , 32 , 39 ] mentioned that the success of such programmes depends as much on improving mindset or attitudes about IPC as on teaching other skills or habits. The merits of capacity building were explicitly reiterated in concluding commentary in seven studies [ 21 , 26 , 29 , 30 , 31 , 35 ]. Other aspects repeatedly endorsed (at least three times) in concluding comments were: the value of IPC champions or leaders [ 21 , 34 , 35 ]; the value of training relevant to the specific job role [ 14 , 18 , 22 , 31 ]; advantages of digital over in-person learning [ 13 , 14 , 19 , 20 , 23 ]; the value of refresher sessions [ 13 , 14 , 17 , 21 , 30 , 35 ]; and the merits of evaluation beyond the immediate end of the training programme to make sure that benefits were sustained [ 21 , 29 , 38 , 39 ]. Regarding lessons learned, Thomas et al. 2022 [ 29 ] and Otu et al. 2021 [ 24 ] (both Nigerian studies) gave specific details about the challenges and benefits of mobile-phone digital training delivery, for instance reliance on assumed e-literacy and uncertainty about consistent access to the Internet or to devices with suitable versions of the Android operating system. Four studies [ 14 , 25 , 29 , 38 ] listed benefits when training was delivered in participants’ native language(s).

Quality assessment scores are shown in Table  2 . Recall that the customised quality assessment addressed two broad domains, replicability and other biases (other threats to generalisability), with results interpreted as the usefulness of the study for informing future design of similar IPC training programmes. Replicability potential was not high overall, with an average score of 3.7/6. For 11 studies there was insufficient easily available information (score < 4 of the 6 replicability items in the QA checklist) to undertake the same intervention again, while replicability was relatively high (≥ 5/6) for 9 studies. The generalisability domain of the checklist addressed other factors that may have biased the apparent effectiveness outcomes of each training programme: 22 studies scored < 5/8 for generalisability, suggesting a high risk of bias in the outcomes reported. Only one study was assessed to be of relatively higher overall quality (quality checklist score ≥ 11/14) and can be considered especially (“most”) useful for informing the design of such IPC training in future: Shreshtha et al. [ 28 ] had a pre-post design and is especially thorough in describing training in intubation and triage protocols in Nepal to prevent Covid-19 transmission. The two controlled trials included in our review [ 13 , 14 ] both scored 10/14 in the quality assessment because they gave unclear information about how many participants were assessed and did not provide specific training or assessment materials. One trial found minimal or no difference between arms in most outcome improvements (Jafree et al. 2022 [ 14 ]), while the other found statistically significant greater improvement in outcomes, especially knowledge, in the active intervention arm (Sharma et al. 2021 [ 13 ]). The number of experimental trials was small ( n  = 2) and they described training programmes of fairly different formats for different diseases.

The evidence available is difficult to interpret because of incomplete reporting and a lack of specific descriptions. Training delivery was often vaguely described, or even explicitly described as highly diverse, while relatively few pathogens were addressed. Only two moderate-sized ( n  ≈ 200 each) experimental trials were found, which is insufficient for making broad conclusions about effectiveness. It seems likely that many possible training methods can successfully improve HCW knowledge, skills, attitudes, practices, etc. We note that definitive evidence in favour of or against specific training methods is unlikely to emerge, given the lack of thorough description of training methods in addition to the lack of robust study designs (very few clinical trials). Lack of specificity about which aspects of training were least or most beneficial may hinder successful development of future training programmes. The lack of controlled trials, and the generally poor description of any training programmes that existed prior to implementation of the programmes described in pre-post studies, mean that we cannot discern whether training was effective because of how it was delivered or because relevant training had never been given previously. It seems clear that there is a huge opportunity for the design of well-run controlled trials of IPC training delivery. A controlled trial could be designed with a pre-specified curriculum for a common and recurring type of pathogen (e.g., influenza-like illness or a specific common antimicrobial-resistant organism) but with 2 or more delivery formats pre-approved by institutional review bodies, and thus ready to be implemented when a relevant crisis arose. Suitable outcomes for the trial design would measure aspects of knowledge, practices, skills, confidence and awareness. Complexity-informed evaluation strategies [ 42 ] are likely to be desirable in fragile, conflict-affected or vulnerable settings, too. Nayahangan et al. 2021 [ 2 ] recommended that medical training be more standardised during viral epidemics. We did not find evidence to show that universally formatted IPC training programmes are optimal in FCV settings. We have, however, provided information that can be used to begin to assess the effectiveness of training programmes that are either universally formatted or more highly locally adapted.

Only two of our studies described training applied in conflict-affected settings; one of these [ 15 ] described training that was delivered prior to worker arrival in the conflict-affected setting. We judge that these two studies are too few and too heterogeneous to pool, so we cannot draw broad conclusions about training delivery and benefits in a conflict-affected area or in a high-resource setting prior to deployment.

Other researchers have systematically described many key issues that affect the effectiveness of IPC training in low-resource or conflict-affected settings. For instance, Qureshi et al. 2022 [ 43 ] undertook a scoping review of national guidelines for occupational IPC training, auditing how up to date such guidelines were. They identified key deficiencies, especially in LMICs, with regard to the most recent best recommended practices in evaluation and adult learning principles. A global situational analysis undertaken in 2017–2018 [ 1 ] concluded that although nearly all countries audited had relevant national training guidelines in IPC, there was far less training of HCWs taking place, less surveillance and lower staffing levels in lower-middle- and low-income countries (World Bank classifications) than in upper-middle- and high-income countries.

Data and analyses have been published that specifically describe the challenges, and potential strategies to meet those challenges, of undertaking IPC in conflict-affected settings [ 44 ] or in low- and middle-income countries dealing with a specific disease [e.g., tuberculosis; 45 ]. These studies are fundamentally qualitative and narrative in design, so while they provide insight, they do not lead to confident conclusions about which, if any, training methods are most likely to be successful. There is a dearth of experimental evidence in lower-middle- and low-income countries. The Covid-19 pandemic especially focused interest on IPC guidelines for respiratory infection prevention. A review by Silva et al. 2021 [ 46 ] of randomised controlled trials that tried to improve adherence to IPC guidelines on preventing respiratory infections in healthcare workplaces included 14 interventions, only one of which was not in a high-income setting [in Iran; 47 ], and all were arguably undertaken in the preparation phase (not response or readiness).

Limitations

Although we included incidence and mortality as primary outcomes, these outcomes are often not immediate benefits of good IPC training and are thus problematic indicators of IPC success: case incidence is highly dependent on local community prevalence of the relevant pathogen(s), while mortality rates often reflect the quality of medical care available in addition to population awareness and the subsequent timing of presentation. Our search strategy was not tested against eligible exemplar studies, nor did it include controlled vocabulary, which might have found additional eligible studies. We did not rigorously determine risk of bias in each of the few trials available. We did not explicitly look for evidence of publication bias [ 48 ] in this evidence group, but we suspect that the near total absence of any information about failed interventions biases what we can say with confidence about truly successful training formats and methods.

A key limitation in our grading of the studies for likely usefulness is that we did not attempt to contact primary study authors to obtain more information or specific training materials. Additional materials are likely to be available from most of the primary study authors and would improve the replicability of their studies and help clarify apparent biases. However, such contact could also be a demanding and not necessarily productive exercise. A broader review than ours could have collected all evidence about any training modality delivered in eligible contexts (readiness or response phase in FCV settings), regardless of whether effectiveness outcomes were reported. A review with similar objectives was published in 2019 [ 7 ], which inventoried implementation strategies for IPC promotion among nurses in Sub-Saharan Africa.

We declined to adopt such a broad inventorying approach because the information obtained would still lack evidence of effectiveness. We found some studies [e.g., 49 ] that provided a thorough description of training delivery but did not evaluate our outcomes and were therefore ineligible for inclusion in our review. A broader review than ours could also have included grey literature and qualitative studies. Qualitative studies especially provide information about effective communication and leadership, acceptability of training delivery methods, incentives, accountability strategies, satisfaction ratings and barriers to learning [ 50 ]. While these outcomes are highly relevant to effective IPC training, they are one step removed from the core outcome that likely matters most in achieving good IPC: consistency of desired practices.

Our conclusions are limited by the mediocre quality of the available evidence. Because existing evidence in favour of or against any specific training approach is far from definitive, there is ample opportunity to design future studies which explicitly and robustly test specific training formats and strategies.
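To make "explicitly and robustly test" concrete, a future trial comparing two training formats would need a pre-specified sample size. The sketch below uses the standard two-proportion formula; the adherence rates (60% vs. 75%) are invented assumptions for illustration, not estimates from this review:

```python
# Hypothetical planning sketch for a future trial comparing two IPC
# training formats on guideline-adherence rates.  The adherence figures
# (60% vs. 75%) are invented assumptions, not estimates from the review.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants needed per arm to detect p1 vs. p2 (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(n_per_arm(0.60, 0.75))  # participants per arm
```

Under these invented assumptions roughly 150 participants per arm would be needed, and a cluster-randomised design (the more realistic option for facility-level training) would need more after inflating for intra-cluster correlation.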

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author upon reasonable request.

References

Tartari E, Tomczyk S, Pires D, Zayed B, Rehse AC, Kariyo P, Stempliuk V, Zingg W, Pittet D, Allegranzi B. Implementation of the infection prevention and control core components at the national level: a global situational analysis. J Hosp Infect. 2021;108:94–103.

Nayahangan LJ, Konge L, Russell L, Andersen S. Training and education of healthcare workers during viral epidemics: a systematic review. BMJ Open. 2021;11(5): e044111.

Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med. 2007;147(4):224–33.

World Health Organization. Framework and toolkit for infection prevention and control in outbreak preparedness, readiness and response at the national level. 2021. p. 84. https://www.who.int/publications/i/item/9789240032729 .

Emergency cycle. 2024. https://www.who.int/europe/emergencies/emergency-cycle . Accessed 9 Jan 2024.

Meinema JG, Buwalda N, van Etten-Jamaludin FS, Visser MR, van Dijk N. Intervention descriptions in medical education: what can be improved? A systematic review and checklist. Acad Med. 2019;94(2):281–90.

Barrera-Cancedda AE, Riman KA, Shinnick JE, Buttenheim AM. Implementation strategies for infection prevention and control promotion for nurses in Sub-Saharan Africa: a systematic review. Implement Sci. 2019;14(1):1–41.

New World Bank country classifications by income level: 2022–2023. 2023. https://blogs.worldbank.org/opendata/new-world-bank-country-classifications-income-level-2022-2023 . Accessed 10 Jan 2024.

Health Workforce-related terminology: terminology work carried out by the WHO Language department at the request of the Health Workforce department. 2021. https://cdn.who.int/media/docs/default-source/health-workforce/hwp/202100608-health-workforce-terminology.pdf . Accessed 9 Jan 2024.

Martimbianco ALC, Sá KMM, Santos GM, Santos EM, Pacheco RL, Riera R. Most Cochrane systematic reviews and protocols did not adhere to the Cochrane’s risk of bias 2.0 tool. Rev Assoc Med Bras. 2023;69(3):469–72.

Pluye P, Hong QN. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annu Rev Public Health. 2014;35(1):29–45.

Study quality assessment tools. 2013. https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools . Accessed 9 Jan 2024.

Sharma SK, Mandal A, Mishra M. Effectiveness of m-learning on knowledge and attitude of nurses about the prevention and control of MDR TB: a quasi-randomized study. Indian J Tuberc. 2021;68(1):3–8.

Jafree SR, Zakar R, Rafiq N, Javed A, Durrani RR, Burhan SK, Hasnain Nadir SM, Ali F, Shahid A, Wrona KJ. WhatsApp-delivered intervention for continued learning for nurses in Pakistan during the COVID-19 pandemic: results of a randomized-controlled trial. Front Public Health. 2022;10:739761.

Crouch HK, Murray CK, Hospenthal DR. Development of a deployment infection control course. Mil Med. 2010;175(12):983–9.

Tauxe RV, Lynch M, Lambert Y, Sobel J, Domerçant JW, Khan A. Rapid development and use of a nationwide training program for cholera management, Haiti, 2010. Emerg Infect Dis. 2011;17(11):2094.

Tsiouris F, Hartsough K, Poimboeuf M, Raether C, Farahani M, Ferreira T, Kamanzi C, Maria J, Nshimirimana M, Mwanza J. Rapid scale-up of COVID-19 training for frontline health workers in 11 African countries. Hum Resour Health. 2022;20(1):43.

Patel LN, Kozikott S, Ilboudo R, Kamateeka M, Lamorde M, Subah M, Tsiouris F, Vorndran A, Lee CT, Community of Practice. Safer primary healthcare facilities are needed to protect healthcare workers and maintain essential services: lessons learned from a multicountry COVID-19 emergency response initiative. BMJ Glob Health. 2021;6(6):e005833.

Zafar N, Jamal Z, Mujeeb KM. Preparedness of the healthcare personnel against the coronavirus disease 2019 (COVID-19) outbreak: an audit cycle. Front Public Health. 2020;8:502.

Odusanya OO, Adeniran A, Bakare OQ, Odugbemi BA, Enikuomehin OA, Jeje OO, Emechebe AC. Building capacity of primary health care workers and clients on COVID-19: results from a web-based training. PLoS One. 2022;17(10):e0274750.

Das A, Garg R, Kumar ES, Singh D, Ojha B, Kharchandy HL, Pathak BK, Srikrishnan P, Singh R, Joshua I. Implementation of infection prevention and control practices in an upcoming COVID-19 hospital in India: an opportunity not missed. PLoS One. 2022;17(5): e0268071.

Etebarian A, Khoramian Tusi S, Momeni Z, Hejazi K. Impact of educational intervention regarding COVID-19 on knowledge, attitude, and practice of students before dental school re-opening. BMC Oral Health. 2023;23(1):1–6.

Otu A, Okuzu O, Effa E, Ebenso B, Ameh S, Nihalani N, Onwusaka O, Tawose T, Olayinka A, Walley J. Training health workers at scale in Nigeria to fight COVID-19 using the InStrat COVID-19 tutorial app: an e-health interventional study. Ther Adv Infect Dis. 2021;8:20499361211040704.

Otu A, Okuzu O, Ebenso B, Effa E, Nihalani N, Olayinka A, Yaya S. Introduction of mobile health tools to support COVID-19 training and surveillance in Ogun State Nigeria. Front Sustain Cities. 2021;3: 638278.

Perera N, Haldane V, Ratnapalan S, Samaraweera S, Karunathilake M, Gunarathna C, Bandara P, Kawirathne P, Wei X. Implementation of a coronavirus disease 2019 infection prevention and control training program in a low-middle income country. Int J Evid Based Healthc. 2022;20(3):228–35.

Rao S, Rohilla KK, Kathrotia R, Naithani M, Varghese A, Bahadur A, Dhar P, Aggarwal P, Gupta M, Kant R. Rapid workforce development to combat the COVID-19 pandemic: experience from a tertiary health care centre in North India. Cureus. 2021;13(6):e15585.

Shehu N, Okwor T, Dooga J, Wele A, Cihambanya L, Okonkon I, Gadanya M, Sebastine J, Okoro B, Okafor O. Train-the-trainers intervention for national capacity building in infection prevention and control for COVID-19 in Nigeria. Heliyon. 2023;9(11).

Shrestha A, Shrestha A, Sonnenberg T, Shrestha R. COVID-19 emergency department protocols: experience of protocol implementation through in-situ simulation. Open Access Emerg Med. 2020;12:293–303.

Thomas MP, Kozikott S, Kamateeka M, Abdu-Aguye R, Agogo E, Bello BG, Brudney K, Manzi O, Patel LN, Barrera-Cancedda AE. Development of a simple and effective online training for health workers: results from a pilot in Nigeria. BMC Public Health. 2022;22(1):1–10.

Bemah P, Baller A, Cooper C, Massaquoi M, Skrip L, Rude JM, Twyman A, Moses P, Seifeldin R, Udhayashankar K. Strengthening healthcare workforce capacity during and post Ebola outbreaks in Liberia: an innovative and effective approach to epidemic preparedness and response. Pan Afr Med J. 2019;33(Suppl 2):9.

Bazeyo W, Bagonza J, Halage A, Okure G, Mugagga M, Musoke R, Tumwebaze M, Tusiime S, Ssendagire S, Nabukenya I. Ebola a reality of modern public health; need for surveillance, preparedness and response training for health workers and other multidisciplinary teams: a case for Uganda. Pan Afr Med J. 2015;20:20.

Carlos C, Capistrano R, Tobora CF, delos Reyes MR, Lupisan S, Corpuz A, Aumentado C, Suy LL, Hall J, Donald J. Hospital preparedness for Ebola virus disease: a training course in the Philippines. Western Pac Surveill Response J. 2015;6(1):33.

Jones-Konneh TEC, Murakami A, Sasaki H, Egawa S. Intensive education of health care workers improves the outcome of Ebola virus disease: lessons learned from the 2014 outbreak in Sierra Leone. Tohoku J Exp Med. 2017;243(2):101–5.

Kabego L, Kourouma M, Ousman K, Baller A, Milambo JP, Kombe J, Houndjo B, Boni FE, Musafiri C, Molembo S. Impact of multimodal strategies including a pay for performance strategy in the improvement of infection prevention and control practices in healthcare facilities during an Ebola virus disease outbreak. BMC Infect Dis. 2023;23(1):1–7.

Oji MO, Haile M, Baller A, Trembley N, Mahmoud N, Gasasira A, Ladele V, Cooper C, Kateh FN, Nyenswah T. Implementing infection prevention and control capacity building strategies within the context of Ebola outbreak in a "Hard-to-Reach" area of Liberia. Pan Afr Med J. 2018;31(1).

Otu A, Ebenso B, Okuzu O, Osifo-Dawodu E. Using a mHealth tutorial application to change knowledge and attitude of frontline health workers to Ebola virus disease in Nigeria: a before-and-after study. Hum Resour Health. 2016;14(1):1–9.

Ousman K, Kabego L, Talisuna A, Diaz J, Mbuyi J, Houndjo B, Ngandu JP, Omba G, Aruna A, Mossoko M. The impact of infection prevention and control (IPC) bundle implementation on IPC compliance during the Ebola virus outbreak in Mbandaka/Democratic Republic of the Congo: a before and after design. BMJ Open. 2019;9(9):e029717.

Soeters HM, Koivogui L, de Beer L, Johnson CY, Diaby D, Ouedraogo A, Touré F, Bangoura FO, Chang MA, Chea N. Infection prevention and control training and capacity building during the Ebola epidemic in Guinea. PLoS One. 2018;13(2): e0193291.

El-Sokkary RH, Negm EM, Othman HA, Tawfeek MM, Metwally WS. Stewardship actions for device associated infections: an intervention study in the emergency intensive care unit. J Infect Public Health. 2020;13(12):1927–31.

Wassef M, Mukhtar A, Nabil A, Ezzelarab M, Ghaith D. Care bundle approach to reduce surgical site infections in acute surgical intensive care unit, Cairo, Egypt. Infect Drug Resist. 2020;13:229–36.

Ahmed S, Bardhan PK, Iqbal A, Mazumder RN, Khan AI, Islam MS, Siddique AK, Cravioto A. The 2008 cholera epidemic in Zimbabwe: experience of the icddr,b team in the field. J Health Popul Nutr. 2011;29(5):541–5.

Carroll Á, Collins C, McKenzie J, Stokes D, Darley A. Application of complexity theory in health and social care research: a scoping review. BMJ Open. 2023;13(3): e069180.

Qureshi MO, Chughtai AA, Seale H. Recommendations related to occupational infection prevention and control training to protect healthcare workers from infectious diseases: a scoping review of infection prevention and control guidelines. BMC Health Serv Res. 2022;22(1):272.

Lowe H, Woodd S, Lange IL, Janjanin S, Barnett J, Graham W. Challenges and opportunities for infection prevention and control in hospitals in conflict-affected settings: a qualitative study. Confl Heal. 2021;15:1–10.

Tan C, Kallon II, Colvin CJ, Grant AD. Barriers and facilitators of tuberculosis infection prevention and control in low-and middle-income countries from the perspective of healthcare workers: a systematic review. PLoS One. 2020;15(10): e0241039.

Silva MT, Galvao TF, Chapman E, da Silva EN, Barreto JOM. Dissemination interventions to improve healthcare workers’ adherence with infection prevention and control guidelines: a systematic review and meta-analysis. Implement Sci. 2021;16(1):1–15.

Jeihooni AK, Kashfi SH, Bahmandost M, Harsini PA. Promoting preventive behaviors of nosocomial infections in nurses: the effect of an educational program based on health belief model. Invest Educ Enferm. 2018;36(1):e09.

Song F, Hooper L, Loke YK. Publication bias: what is it? How do we measure it? How do we avoid it? Open Access J Clin Trials. 2013;5:71.

Kessy SJ, Gon G, Alimi Y, Bakare WA, Gallagher K, Hornsey E, Sithole L, Onwekwe EVC, Okwor T, Sekoni A. Training a continent: a process evaluation of virtual training on infection prevention and control in Africa during COVID-19. Glob Health Sci Pract. 2023;11(2):e2200051.

Tomczyk S, Storr J, Kilpatrick C, Allegranzi B. Infection prevention and control (IPC) implementation in low-resource settings: a qualitative analysis. Antimicrob Resist Infect Control. 2021;10(1):1–11.

Acknowledgements

We thank members of the WHO Expert Working Group for comments and guidance.

Funding

This work was primarily funded by a grant from the World Health Organization (WHO) based on a grant from the United States Centers for Disease Control and Prevention (US CDC). JB and ICS were also supported by the UK NIHR Health Protection Research Unit (NIHR HPRU) in Emergency Preparedness and Response at King’s College London, in partnership with the UK Health Security Agency (UKHSA) and in collaboration with the University of East Anglia. EH is affiliated with the UK Public Health Rapid Support Team, which is funded by UK Aid from the Department of Health and Social Care and is jointly run by the UK Health Security Agency and the London School of Hygiene & Tropical Medicine. The views expressed are those of the author(s) and not necessarily those of the WHO, NHS, NIHR, UEA, UK Department of Health, UKHSA or US CDC.

Author information

Authors and Affiliations

Norwich Medical School, University of East Anglia, Norwich, UK

Julii Brainard

UCL Medical School, University College London, London, UK

Isabel Catalina Swindells

OL4All, Oxford, UK

Joanna Wild

Department of Veterinary Medicine, University of Cambridge, Cambridge, UK

Charlotte Christiane Hammer

UK Public Health Rapid Support Team, UK Health Security Agency and London School of Hygiene & Tropical Medicine, London, UK

Emilio Hornsey

Country Readiness Strengthening, World Health Organization, Geneva, Switzerland

Hibak Osman Mahamed & Victoria Willet

Contributions

Analysis plan: JB, CCH, JW, VW, HOM. Comments on draft manuscript: All. Conception: EH, JB, VW, HOM. Data acquisition and extraction: JB, ICS, JW. Data curation: JB. Data summary: JB. Funding: JB, CCH, VW, HOM. Interpretation: JB, CCH, VW, HOM. Research governance: JB. Screening: JB, ICS, CCH. Searches: JB, ICS. Writing first draft, assembling revisions: JB.

Corresponding author

Correspondence to Julii Brainard .

Ethics declarations

Ethics approval and consent to participate

This research was exempt from needing ethics approval because the (anonymised and aggregated) information we analyse and describe was already in the public domain.

Consent for publication

Not applicable.

Competing interests

JW runs an educational consultancy that advises the WHO and other healthcare training delivery organisations. VW and HM work for the World Health Organization, which commissioned this research. All other authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Brainard, J., Swindells, I.C., Wild, J. et al. Emergency infection prevention and control training in fragile, conflict-affected or vulnerable settings: a scoping review. BMC Health Serv Res 24, 937 (2024). https://doi.org/10.1186/s12913-024-11408-y

Download citation

Received: 27 May 2024

Accepted: 06 August 2024

Published: 16 August 2024

DOI: https://doi.org/10.1186/s12913-024-11408-y


Keywords: Infection control

BMC Health Services Research

ISSN: 1472-6963

Optica Publishing Group

Applied Optics

  • pp. 6456-6467
  • https://doi.org/10.1364/AO.528431

Multi-plane imaging technology with constant imaging quality

Author Affiliations

Zhongsheng Zhai, 1 Xiatian Yu, 1 Zhen Zeng, 1 Yi Zhang, 1 Qinghua Lv, 2, * Da Liu, 1 and Jun Tu 1

1 Hubei Key Laboratory of Modern Manufacturing Quantity Engineering, School of Mechanical Engineering, Hubei University of Technology, Wuhan, Hubei 430068, China

2 School of Science, Hubei University of Technology, Wuhan, Hubei 430068, China

* Corresponding author: [email protected]

  • Imaging Systems, Image Processing, and Displays
  • Imaging systems
  • Imaging techniques
  • Liquid lenses
  • Real time imaging
  • Spatial light modulators
  • Spatial resolution
  • Original Manuscript: April 25, 2024
  • Revised Manuscript: July 23, 2024
  • Manuscript Accepted: July 26, 2024
  • Published: August 16, 2024

To realize three-dimensional microscopic imaging with simultaneously high temporal and spatial resolution, a multi-plane imaging method with constant axial imaging quality is proposed. The optical theory that ensures different axial sections have consistent lateral resolution is analyzed. In the system, a spatial light modulator with programmable wavefront-control capability is placed at the image-space focal plane of the front group of an infinity-corrected microscope objective and loaded with a digital multiplexing lens having multiple foci and multiple diffraction angles, forming a new combined imaging system. The system can clearly image any axial section, or multiple target planes, within a certain imaging range without compensating the imaging aberration of each axial section, so that every axial section has the same imaging quality. Using the USAF 1951 resolution chart, it is verified that different axial object planes have a consistent lateral resolution of up to 57.0 lp/mm. For samples of different thicknesses, multi-plane layer-by-layer imaging and simultaneous multi-plane imaging experiments were performed using phase grayscale images of a single-focus lens, a multi-focus Fresnel lens, and a digital multiplexing lens, respectively. Experimental results show that this scheme can achieve a degree of simultaneous multi-plane imaging with an axial spacing of up to 0.2 mm, which is potentially useful in research areas where samples should not be moved or where relative motion is undesirable.

© 2024 Optica Publishing Group. All rights, including for text and data mining (TDM), Artificial Intelligence (AI) training, and similar technologies, are reserved.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


IMAGES

  1. Control Group Vs Experimental Group In Science

    experimental group and controlled

  2. PPT

    experimental group and controlled

  3. PPT

    experimental group and controlled

  4. Clinical Research, control versus experimental group 21790126 Vector

    experimental group and controlled

  5. PPT

    experimental group and controlled

  6. Control Group vs. Experimental Group: 5 Key Differences, Pros & Cons

    experimental group and controlled

COMMENTS

  1. Control Group Vs Experimental Group In Science

    In a controlled experiment, scientists compare a control group, and an experimental group is identical in all respects except for one difference - experimental manipulation.. Differences. Unlike the experimental group, the control group is not exposed to the independent variable under investigation. So, it provides a baseline against which any changes in the experimental group can be compared.

  2. The Difference Between Control Group and Experimental Group

    The control group and experimental group are compared against each other in an experiment. The only difference between the two groups is that the independent variable is changed in the experimental group. The independent variable is "controlled", or held constant, in the control group. A single experiment may include multiple experimental ...

  3. Control Groups and Treatment Groups

    A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn't receive the experimental treatment.. However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group's outcomes before and after a treatment (instead of comparing outcomes between different groups).

  4. Experimental & Control Group

    In this lesson, discover what is an experimental group, compare the difference between an experimental group and a control group, and examine two examples of experimental groups. Updated: 11/21/2023

  5. Controlled experiments (article)

    There are two groups in the experiment, and they are identical except that one receives a treatment (water) while the other does not. The group that receives the treatment in an experiment (here, the watered pot) is called the experimental group, while the group that does not receive the treatment (here, the dry pot) is called the control group.The control group provides a baseline that lets ...

  6. Understanding Experimental Groups

    An experimental group in a scientific experiment is the group on which the experimental procedure is performed. The independent variable is changed for the group and the response or change in the dependent variable is recorded. In contrast, the group that does not receive the treatment or in which the independent variable is held constant is ...

  7. The Experimental Group in Psychology Experiments

    In this experiment, the group of participants listening to no music while working out is the control group. They serve as a baseline with which to compare the performance of the other two groups. The other two groups in the experiment are the experimental groups. They each receive some level of the independent variable, which in this case is ...

  8. What Is a Controlled Experiment?

    In an experiment, the control is a standard or baseline group not exposed to the experimental treatment or manipulation.It serves as a comparison group to the experimental group, which does receive the treatment or manipulation. The control group helps to account for other variables that might influence the outcome, allowing researchers to attribute differences in results more confidently to ...

  9. Experimental Design: Types, Examples & Methods

    Three types of experimental designs are commonly used: 1. Independent Measures. Independent measures design, also known as between-groups, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.

  10. What Is a Controlled Experiment?

    Published on April 19, 2021 by Pritha Bhandari . Revised on June 22, 2023. In experiments, researchers manipulate independent variables to test their effects on dependent variables. In a controlled experiment, all variables other than the independent variable are controlled or held constant so they don't influence the dependent variable.

  11. Control Groups & Treatment Groups

    A true experiment (aka a controlled experiment) always includes at least one control group that doesn't receive the experimental treatment.. However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group's outcomes before and after a treatment (instead of comparing outcomes between different groups).

  12. Control Group Definition and Examples

    A control group is not the same thing as a control variable. A control variable or controlled variable is any factor that is held constant during an experiment. Examples of common control variables include temperature, duration, and sample size. The control variables are the same for both the control and experimental groups.

  13. What Is a Control Group?

    Positive control groups: In this case, researchers already know that a treatment is effective but want to learn more about the impact of variations of the treatment.In this case, the control group receives the treatment that is known to work, while the experimental group receives the variation so that researchers can learn more about how it performs and compares to the control.

  14. What are Control Groups?

    A control group is typically thought of as the baseline in an experiment. In an experiment, clinical trial, or other sort of controlled study, there are at least two groups whose results are compared against each other. The experimental group receives some sort of treatment, and their results are compared against those of the control group ...

  15. What is the difference between a control group and an experimental group?

    A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn't receive the experimental treatment. However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group's outcomes before and after a treatment (instead of ...

  16. Experimental Group

    Experimental Group Definition. In a comparative experiment, the experimental group (aka the treatment group) is the group being tested for a reaction to a change in the variable. There may be experimental groups in a study, each testing a different level or amount of the variable. The other type of group, the control group, can show the effects ...

  17. Experimental Group

    The experimental groups and the control group were raised under the same environment. After a period of time, various activity indexes of the experimental groups and the controlled group were evaluated. If there were differences, it was considered that drug addiction had different effects. If there were differences between the two experimental ...

  18. Control group

    control group, the standard to which comparisons are made in an experiment. Many experiments are designed to include a control group and one or more experimental groups; in fact, some scholars reserve the term experiment for study designs that include a control group. Ideally, the control group and the experimental groups are identical in every ...

  19. Experimental Group

    Conclusion. Experimental treatment studies involve different groups, one of which serves as a control group to provide a baseline for estimating the treatment effect. The treatment therefore defines the groups and acts as the independent variable, which is manipulated, making the investigation an experiment.

  20. Randomized Controlled Trial

    Definition. A study design that randomly assigns participants into an experimental group or a control group. As the study is conducted, the only expected difference between the control and experimental groups in a randomized controlled trial (RCT) is the outcome variable being studied.

  21. Controlled Experiments: Definition and Examples

    In controlled experiments, researchers use random assignment (i.e. participants are randomly assigned to be in the experimental group or the control group) in order to minimize potential confounding variables in the study. For example, imagine a study of a new drug in which all of the female participants were assigned to the experimental group and all of the male participants were assigned to ...
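
    The random assignment described above can be sketched in a few lines. The participant labels and the even split into two groups are assumptions for the example, not part of any particular study:

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them evenly into control and experimental groups."""
    rng = random.Random(seed)       # seeded for reproducibility; omit seed in practice
    pool = list(participants)
    rng.shuffle(pool)               # the randomization step that breaks confounds
    half = len(pool) // 2
    return {"control": pool[:half], "experimental": pool[half:]}

groups = randomly_assign(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)
print(groups)
```

    Because group membership is decided by the shuffle rather than by any participant characteristic (such as sex in the example above), traits are expected to balance out across groups on average.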

  22. Randomized Control Trial (RCT)

    A randomized control trial (RCT) is a type of study design that involves randomly assigning participants to either an experimental group or a control group to measure the effectiveness of an intervention or treatment. Randomized Controlled Trials (RCTs) are considered the "gold standard" in medical and health research due to their rigorous ...

  24. Experimental vs. Control Group Explained

    By comparing the results from the experimental group against the control group, researchers can determine the effectiveness of the intervention in a more precise manner. The purpose of control groups is to minimize biases and ensure valid conclusions. They help in identifying whether observed changes in the experimental group are genuinely ...

  25. Experimental study on different phytoremediation of heavy ...

    The conductivity of the control group, the black locust group, the slash pine group, and the Chinese white poplar group was 213 mS s⁻¹, 241 mS s⁻¹, 226 mS s⁻¹, and 235 mS s⁻¹, respectively; the latter three represent increases of 13.1%, 6.1%, and 10.3% over the control group.
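
    The percentage increases quoted in the snippet can be checked directly from the conductivity values; the dictionary layout below is just one way to organize the comparison:

```python
control = 213  # conductivity of the control group, as quoted in the snippet
readings = {
    "black locust": 241,
    "slash pine": 226,
    "Chinese white poplar": 235,
}

for name, value in readings.items():
    pct = (value - control) / control * 100  # percent increase over control
    print(f"{name}: {pct:.1f}% increase over control")
# black locust: 13.1%, slash pine: 6.1%, Chinese white poplar: 10.3%
```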

  26. A study on selected responses and immune structures of broiler chickens

    Experimental E. coli infection of chickens by the intratracheal route is associated with remarkable inflammatory responses, as shown by changes in biochemical and hematological parameters. ... at the age of 3 days for six consecutive days has been associated with decreased serum levels of IFN-γ compared to the control group in the early stages of drug ...

  27. Emergency infection prevention and control training in fragile

    Figure 1 shows the study selection process. 29 studies were included; extracted data for each study are in File S2. Almost all (n = 27) used a pre-post design; 2 were experimental studies [13, 14]. Table 1 lists summary information about the included studies. Seven reports described training delivered in single low-income countries; 19 studies described training in single lower middle income ...
