Random Assignment in Experiments | Introduction & Examples

Published on March 8, 2021 by Pritha Bhandari. Revised on June 22, 2023.

In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomization.

With simple random assignment, every member of the sample has a known or equal chance of being placed in a control group or an experimental group. Studies that use simple random assignment are also called completely randomized designs.

Random assignment is a key part of experimental design. It helps you ensure that all groups are comparable at the start of a study: any differences between them are due to random factors, not research biases like sampling bias or selection bias.

Table of contents

  • Why does random assignment matter?
  • Random sampling vs random assignment
  • How do you use random assignment?
  • When is random assignment not used?
  • Other interesting articles
  • Frequently asked questions about random assignment

Random assignment is an important part of control in experimental research, because it helps strengthen the internal validity of an experiment and avoid biases.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. To do so, they often use different levels of an independent variable for different groups of participants.

This is called a between-groups or independent measures design.

For example, suppose you're testing the effect of a medication at different dosages. You use three groups of participants that are each given a different level of the independent variable:

  • a control group that’s given a placebo (no dosage, to control for a placebo effect),
  • an experimental group that’s given a low dosage,
  • a second experimental group that’s given a high dosage.

Random assignment helps you make sure that the treatment groups don’t differ in systematic ways at the start of the experiment, as such differences can seriously affect (and even invalidate) your work.

If you don’t use random assignment, you may not be able to rule out alternative explanations for your results. For example, suppose that:

  • participants recruited from cafes are placed in the control group,
  • participants recruited from local community centers are placed in the low dosage experimental group,
  • participants recruited from gyms are placed in the high dosage group.

With this type of assignment, it’s hard to tell whether the participant characteristics are the same across all groups at the start of the study. Gym-users may tend to engage in more healthy behaviors than people who frequent cafes or community centers, and this would introduce a healthy user bias in your study.

Although random assignment helps even out baseline differences between groups, it doesn’t always make them completely equivalent. There may still be extraneous variables that differ between groups, and there will always be some group differences that arise from chance.

Most of the time, the random variation between groups is low, and, therefore, it’s acceptable for further analysis. This is especially true when you have a large sample. In general, you should always use random assignment in experiments when it is ethically possible and makes sense for your study topic.


Random sampling and random assignment are both important concepts in research, but it’s important to understand the difference between them.

Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups.

While random sampling is used in many types of studies, random assignment is only used in between-subjects experimental designs.

Some studies use both random sampling and random assignment, while others use only one or the other.

Random sample vs random assignment

Random sampling enhances the external validity or generalizability of your results, because it helps ensure that your sample is unbiased and representative of the whole population. This allows you to make stronger statistical inferences.

Suppose, for example, that you're studying a company's 8,000 employees. You use a simple random sample to collect data. Because you have access to the whole population (all employees), you can assign all 8,000 employees a number and use a random number generator to select 300 employees. These 300 employees are your full sample.

Random assignment enhances the internal validity of the study, because it ensures that there are no systematic differences between the participants in each group. This helps you conclude that the outcomes can be attributed to the independent variable. For example, to test a team-building intervention, you could divide your sample into two groups:

  • a control group that receives no intervention.
  • an experimental group that has a remote team-building intervention every week for a month.

You use random assignment to place participants into the control or experimental group. To do so, you take your list of participants and assign each participant a number. Again, you use a random number generator to place each participant in one of the two groups.

To use simple random assignment, you start by giving every member of the sample a unique number. Then, you can use computer programs or manual methods to randomly assign each participant to a group.

  • Random number generator: Use a computer program to generate random numbers from the list for each group.
  • Lottery method: Place all numbers individually in a hat or a bucket, and draw numbers at random for each group.
  • Flip a coin: When you have only two groups, flip a coin for each number on the list to decide whether that participant joins the control or the experimental group.
  • Roll a die: When you have three groups, roll a die for each number on the list to decide which group that participant joins. For example, rolling a 1 or 2 places them in the control group, a 3 or 4 in the first experimental group, and a 5 or 6 in the second experimental group.
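As a minimal sketch of this procedure (the sample size of 12 and the even two-way split are assumptions for illustration), simple random assignment with a computer might look like:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Step 1: give every member of the sample a unique number (12 assumed here)
participants = list(range(1, 13))

# Step 2: randomize the order, then split evenly into two groups
random.shuffle(participants)
half = len(participants) // 2
control, experimental = participants[:half], participants[half:]

print(len(control), len(experimental))  # 6 6
```

Shuffling and then splitting guarantees equal group sizes, whereas drawing a random group label per participant would not.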

This type of random assignment is the most powerful method of placing participants in conditions, because each individual has an equal chance of being placed in any one of your treatment groups.

Random assignment in block designs

In more complicated experimental designs, random assignment is only used after participants are grouped into blocks based on some characteristic (e.g., test score or demographic variable). These groupings mean that you need a larger sample to achieve high statistical power.

For example, a randomized block design involves placing participants into blocks based on a shared characteristic (e.g., college students versus graduates), and then using random assignment within each block to assign participants to every treatment condition. This helps you assess whether the characteristic affects the outcomes of your treatment.
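A hedged sketch of within-block random assignment, with hypothetical block names, participant IDs, and two conditions:

```python
import random

random.seed(1)  # reproducible for the sketch

# Blocks formed from a shared characteristic (labels are hypothetical)
blocks = {
    "students":  ["p1", "p2", "p3", "p4"],
    "graduates": ["p5", "p6", "p7", "p8"],
}
conditions = ["control", "treatment"]

# Randomly assign within each block so every condition appears in every block
assignment = {}
for block, members in blocks.items():
    for i, person in enumerate(random.sample(members, k=len(members))):
        assignment[person] = conditions[i % len(conditions)]
```

Because assignment happens separately inside each block, both conditions are balanced across students and graduates.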

In an experimental matched design, you use blocking and then match up individual participants from each block based on specific characteristics. Within each matched pair or group, you randomly assign each participant to one of the conditions in the experiment and compare their outcomes.

Sometimes, it’s not relevant or ethical to use simple random assignment, so groups are assigned in a different way.

When comparing different groups

Sometimes, differences between participants are the main focus of a study, for example, when comparing men and women or people with and without health conditions. Participants are not randomly assigned to different groups, but instead assigned based on their characteristics.

In this type of study, the characteristic of interest (e.g., gender) is an independent variable, and the groups differ based on the different levels (e.g., men, women, etc.). All participants are tested the same way, and then their group-level outcomes are compared.

When it’s not ethically permissible

When studying unhealthy or dangerous behaviors, it’s often not ethically permissible to use random assignment. For example, if you’re studying heavy drinkers and social drinkers, it’s unethical to randomly assign participants to one of the two groups and ask them to drink large amounts of alcohol for your experiment.

When you can’t assign participants to groups, you can also conduct a quasi-experimental study. In a quasi-experiment, you study the outcomes of pre-existing groups who receive treatments that you may not have any control over (e.g., heavy drinkers and social drinkers). These groups aren’t randomly assigned, but may be considered comparable when some other variables (e.g., age or socioeconomic status) are controlled for.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Random selection, or random sampling, is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment, assign a unique number to every member of your study’s sample.

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.

Cite this Scribbr article


Bhandari, P. (2023, June 22). Random Assignment in Experiments | Introduction & Examples. Scribbr. Retrieved June 18, 2024, from https://www.scribbr.com/methodology/random-assignment/


Randomized Block Design

A randomized block design is an experimental design where the experimental units are in groups called blocks. The treatments are randomly allocated to the experimental units inside each block. When all treatments appear at least once in each block, we have a completely randomized block design. Otherwise, we have an incomplete randomized block design.

This kind of design is used to minimize the effects of systematic error. If the experimenter focuses exclusively on the differences between treatments, the effects due to variations between the different blocks should be eliminated.

See experimental design .

A farmer possesses five plots of land where he wishes to cultivate corn. He wants to run an experiment since he has two kinds of corn and two types of fertilizer. Moreover, he knows that his plots are quite heterogeneous regarding sunshine, and therefore a systematic error could arise if sunshine does indeed facilitate corn cultivation.

The farmer divides the land into...
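On the assumption that each plot serves as one block receiving all four variety-fertilizer combinations (the treatment names below are invented for illustration), the randomized layout could be generated as:

```python
import random

random.seed(3)  # reproducible for the sketch

# 2 corn varieties x 2 fertilizers = 4 treatment combinations
treatments = [(corn, fert) for corn in ("corn A", "corn B")
                           for fert in ("fert 1", "fert 2")]

# Each plot (block) gets all four treatments in an independent random order,
# so sunshine differences between plots cannot systematically favor any one treatment
plots = {f"plot {i}": random.sample(treatments, k=len(treatments))
         for i in range(1, 6)}
```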


Copyright information

© 2008 Springer-Verlag


(2008). Randomized Block Design. In: The Concise Encyclopedia of Statistics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-32833-1_344


Teach yourself statistics

Experimental Design for ANOVA

There is a close relationship between experimental design and statistical analysis. The way that an experiment is designed determines the types of analyses that can be appropriately conducted.

In this lesson, we review aspects of experimental design that a researcher must understand in order to properly interpret experimental data with analysis of variance.

What Is an Experiment?

An experiment is a procedure carried out to investigate cause-and-effect relationships. For example, the experimenter may manipulate one or more variables (independent variables) to assess the effect on another variable (the dependent variable).

Conclusions are reached on the basis of data. If the dependent variable is unaffected by changes in independent variables, we conclude that there is no causal relationship between the dependent variable and the independent variables. On the other hand, if the dependent variable is affected, we conclude that a causal relationship exists.

Experimenter Control

One of the features that distinguish a true experiment from other types of studies is experimenter control of the independent variable(s).

In a true experiment, an experimenter controls the level of the independent variable administered to each subject. For example, dosage level could be an independent variable in a true experiment, because an experimenter can manipulate the dosage administered to any subject.

What is a Quasi-Experiment?

A quasi-experiment is a study that lacks a critical feature of a true experiment. Quasi-experiments can provide insights into cause-and-effect relationships; but evidence from a quasi-experiment is not as persuasive as evidence from a true experiment. True experiments are the gold standard for causal analysis.

A study that used gender or IQ as an independent variable would be an example of a quasi-experiment, because the study lacks experimenter control over the independent variable; that is, an experimenter cannot manipulate the gender or IQ of a subject.

As we discuss experimental design in the context of a tutorial on analysis of variance, it is important to point out that experimenter control is a requirement for a true experiment; but it is not a requirement for analysis of variance. Analysis of variance can be used with true experiments and with quasi-experiments that lack only experimenter control over the independent variable.

Note: Henceforth in this tutorial, when we refer to an experiment, we will be referring to a true experiment or to a quasi-experiment that is almost a true experiment, in the sense that it lacks only experimenter control over the independent variable.

What Is Experimental Design?

The term experimental design refers to a plan for conducting an experiment in such a way that research results will be valid and easy to interpret. This plan includes three interrelated activities:

  • Write statistical hypotheses.
  • Collect data.
  • Analyze data.

Let's look in a little more detail at these three activities.

Statistical Hypotheses

A statistical hypothesis is an assumption about the value of a population parameter. There are two types of statistical hypotheses:

Null hypothesis:

H0: μi = μj

Here, μi is the population mean for group i, and μj is the population mean for group j. This hypothesis makes the assumption that population means in groups i and j are equal.

Alternative hypothesis:

H1: μi ≠ μj

This hypothesis makes the assumption that population means in groups i and j are not equal.

The null hypothesis and the alternative hypothesis are written to be mutually exclusive. If one is true, the other is not.

Experiments rely on sample data to test the null hypothesis. If experimental results, based on sample statistics , are consistent with the null hypothesis, the null hypothesis cannot be rejected; otherwise, the null hypothesis is rejected in favor of the alternative hypothesis.

Data Collection

The data collection phase of experimental design is all about methodology - how to run the experiment to produce valid, relevant statistics that can be used to test a null hypothesis.

Identify Variables

Every experiment exists to examine a cause-and-effect relationship. With respect to the relationship under investigation, an experimental design needs to account for three types of variables:

  • Dependent variable. The dependent variable is the outcome being measured, the effect in a cause-and-effect relationship.
  • Independent variables. An independent variable is a variable that is thought to be a possible cause in a cause-and-effect relationship.
  • Extraneous variables. An extraneous variable is any other variable that could affect the dependent variable, but is not explicitly included in the experiment.

Note: The independent variables that are explicitly included in an experiment are also called factors .

Define Treatment Groups

In an experiment, treatment groups are built around factors, each group defined by a unique combination of factor levels.

For example, suppose that a drug company wants to test a new cholesterol medication. The dependent variable is total cholesterol level. One independent variable is dosage. And, since some drugs affect men and women differently, the researchers include a second independent variable: gender.

This experiment has two factors - dosage and gender. The dosage factor has three levels (0 mg, 50 mg, and 100 mg), and the gender factor has two levels (male and female). Given this combination of factors and levels, we can define six unique treatment groups, as shown below:

Gender    Dose: 0 mg    Dose: 50 mg    Dose: 100 mg
Male      Group 1       Group 2        Group 3
Female    Group 4       Group 5        Group 6

Note: The experiment described above is an example of a quasi-experiment, because the gender factor cannot be manipulated by the experimenter.
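The treatment groups are just the Cartesian product of the factor levels, which a short sketch makes explicit (labels chosen to match the table above):

```python
from itertools import product

dosages = ["0 mg", "50 mg", "100 mg"]   # three levels of the dosage factor
genders = ["male", "female"]            # two levels of the gender factor

# Every unique combination of factor levels defines one treatment group
groups = list(product(genders, dosages))
print(len(groups))  # 6
```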

Select Factor Levels

A factor in an experiment can be described by the way in which factor levels are chosen for inclusion in the experiment:

  • Fixed factor. The experiment includes all factor levels about which inferences are to be made.
  • Random factor. The experiment includes a random sample of levels from a much bigger population of factor levels.

Experiments can be described by the presence or absence of fixed or random factors:

  • Fixed-effects model. All of the factors in the experiment are fixed.
  • Random-effects model. All of the factors in the experiment are random.
  • Mixed model. At least one factor in the experiment is fixed, and at least one factor is random.

The use of fixed factors versus random factors has implications for how experimental results are interpreted. With a fixed factor, results apply only to factor levels that are explicitly included in the experiment. With a random factor, results apply to every factor level from the population.

For example, consider the cholesterol experiment described above. Suppose the experimenter only wanted to test the effect of three particular dosage levels: 0 mg, 50 mg, and 100 mg. He would include those dosage levels in the experiment, and any research conclusions would apply only to those particular dosage levels. This would be an example of a fixed-effects model.

On the other hand, suppose the experimenter wanted to test the effect of any dosage level. Since it is not practical to test every dosage level, the experimenter might choose three dosage levels at random from the population of possible dosage levels. Any research conclusions would apply not only to the selected dosage levels, but also to other dosage levels that were not included explicitly in the experiment. This would be an example of a random-effects model.

Select Experimental Units

The experimental unit is the entity that provides values for the dependent variable. Depending on the needs of the study, an experimental unit may be a person, animal, plant, product - anything. For example, in the cholesterol study described above, researchers measured cholesterol level (the dependent variable) of people; so the experimental units were people.

Note: When the experimental units are people, they are often referred to as subjects . Some researchers prefer the term participant , because subject has a connotation that the person is subservient.

If time and money were no object, you would include the entire population of experimental units in your experiment. In the real world, where there is never enough time or money, you will usually select a sample of experimental units from the population.

Ultimately, you want to use sample data to make inferences about population parameters. With that in mind, it is best practice to draw a random sample of experimental units from the population. This provides a defensible, statistical basis for generalizing from sample findings to the larger population.

Finally, it is important to consider sample size. The larger the sample, the greater the statistical power, and the more confidence you can have in your results.

Assign Experimental Units to Treatments

Having selected a sample of experimental units, we need to assign each unit to one or more treatment groups. Here are two ways that you might assign experimental units to groups:

  • Independent groups design. Each experimental unit is randomly assigned to one, and only one, treatment group. This is also known as a between-subjects design.
  • Repeated measures design. Experimental units are assigned to more than one treatment group. This is also known as a within-subjects design.

Control for Extraneous Variables

Extraneous variables can mask effects of independent variables. Therefore, a good experimental design controls potential effects of extraneous variables. Here are a few strategies for controlling extraneous variables:

  • Randomization. Assign subjects randomly to treatment groups. This tends to distribute effects of extraneous variables evenly across groups.
  • Repeated measures design. To control for individual differences between subjects (age, attitude, religion, etc.), assign each subject to multiple treatments. This strategy is called using subjects as their own control.
  • Counterbalancing. In repeated measures designs, randomize or reverse the order of treatments among subjects to control for order effects (e.g., fatigue, practice).
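As a small illustration of counterbalancing (subject and treatment labels are hypothetical), each subject can receive the treatments in an independently randomized order:

```python
import random

random.seed(7)  # reproducible for the sketch

subjects = ["s1", "s2", "s3", "s4"]
treatments = ["A", "B", "C"]

# Each subject gets every treatment, but in a randomized order,
# so order effects (fatigue, practice) average out across subjects
orders = {s: random.sample(treatments, k=len(treatments)) for s in subjects}
```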

As we describe specific experimental designs in upcoming lessons, we will point out the strategies that are used with each design to control the confounding effects of extraneous variables.

Data Analysis

Researchers follow a formal process to determine whether to reject a null hypothesis, based on sample data. This process, called hypothesis testing, consists of five steps:

  • Formulate hypotheses. This involves stating the null and alternative hypotheses. Because the hypotheses are mutually exclusive, if one is true, the other must be false.
  • Choose the test statistic. This involves specifying the statistic that will be used to assess the validity of the null hypothesis. Typically, in analysis of variance studies, researchers compute an F ratio to test hypotheses.
  • Compute a P-value, based on sample data. Suppose the observed test statistic is equal to S . The P-value is the probability that the experiment would yield a test statistic as extreme as S , assuming the null hypothesis is true.
  • Choose a significance level. The significance level, denoted by α, is the probability of rejecting the null hypothesis when it is really true. Researchers often choose a significance level of 0.05 or 0.01.
  • Test the null hypothesis. If the P-value is smaller than the significance level, we reject the null hypothesis; if it is larger, we fail to reject.
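To make the test statistic concrete, here is a minimal one-way ANOVA F ratio computed by hand (the three small groups are invented example data):

```python
def f_ratio(groups):
    k = len(groups)                      # number of treatment groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares (variation due to treatments)
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-groups sum of squares (error variation)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

print(f_ratio([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 3.0
```

The resulting F is compared against the F distribution with (k - 1, n - k) degrees of freedom to obtain the P-value.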

A good experimental design includes a precise plan for data analysis. Before the first data point is collected, a researcher should know how experimental data will be processed to accept or reject the null hypotheses.

Test Your Understanding

In a well-designed experiment, which of the following statements is true?

I. The null hypothesis and the alternative hypothesis are mutually exclusive.
II. The null hypothesis is subjected to statistical test.
III. The alternative hypothesis is subjected to statistical test.

(A) I only (B) II only (C) III only (D) I and II (E) I and III

The correct answer is (D). The null hypothesis and the alternative hypothesis are mutually exclusive; if one is true, the other must be false. Only the null hypothesis is subjected to statistical test. When the null hypothesis is rejected, the alternative hypothesis is accepted; the alternative hypothesis is never tested explicitly.

In a true experiment, each subject is assigned to only one treatment group. What type of design is this?

(A) Independent groups design (B) Repeated measures design (C) Within-subjects design (D) None of the above (E) All of the above

The correct answer is (A). In an independent groups design, each experimental unit is assigned to exactly one treatment group. In a repeated measures design (also called a within-subjects design), each experimental unit is assigned to more than one treatment group.

In a true experiment, which of the following does the experimenter control?

(A) How to manipulate independent variables. (B) How to assign subjects to treatment conditions. (C) How to control for extraneous variables. (D) None of the above (E) All of the above

The correct answer is (E). The experimenter chooses factors and factor levels for the experiment, assigns experimental units to treatment groups (often through a random process), and implements strategies (randomization, counterbalancing, etc.) to control the influence of extraneous variables.


Statistics By Jim

Making statistics intuitive

Experimental Design: Definition and Types

By Jim Frost

What is Experimental Design?

An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments allows your data collection efforts to have a reasonable chance of detecting effects and testing hypotheses that answer your research questions.

An experiment is a data collection procedure that occurs in controlled conditions to identify and understand causal relationships between variables. Researchers can use many potential designs. The ultimate choice depends on their research question, resources, goals, and constraints. In some fields of study, researchers refer to experimental design as the design of experiments (DOE). Both terms are synonymous.


Ultimately, the design of experiments helps ensure that your procedures and data will evaluate your research question effectively. Without an experimental design, you might waste your efforts in a process that, for many potential reasons, can’t answer your research question. In short, it helps you trust your results.

Learn more about Independent and Dependent Variables .

Design of Experiments: Goals & Settings

Experiments occur in many settings, including psychology, the social sciences, medicine, physics, engineering, and the industrial and service sectors. Typically, experimental goals are to discover a previously unknown effect, confirm a known effect, or test a hypothesis.

Effects represent causal relationships between variables. For example, in a medical experiment, does the new medicine cause an improvement in health outcomes? If so, the medicine has a causal effect on the outcome.

An experimental design’s focus depends on the subject area and can include the following goals:

  • Understanding the relationships between variables.
  • Identifying the variables that have the largest impact on the outcomes.
  • Finding the input variable settings that produce an optimal result.

For example, psychologists have conducted experiments to understand how conformity affects decision-making. Sociologists have performed experiments to determine whether ethnicity affects the public reaction to staged bike thefts. These experiments map out the causal relationships between variables, and their primary goal is to understand the role of various factors.

Conversely, in a manufacturing environment, the researchers might use an experimental design to find the factors that most effectively improve their product’s strength, identify the optimal manufacturing settings, and do all that while accounting for various constraints. In short, a manufacturer’s goal is often to use experiments to improve their products cost-effectively.

In a medical experiment, the goal might be to quantify the medicine’s effect and find the optimum dosage.

Developing an Experimental Design

Developing an experimental design involves planning that maximizes the potential to collect data that is both trustworthy and able to detect causal relationships. Specifically, these studies aim to see effects when they exist in the population the researchers are studying, preferentially favor causal effects, isolate each factor’s true effect from potential confounders, and produce conclusions that you can generalize to the real world.

To accomplish these goals, experimental designs carefully manage data validity and reliability , and internal and external experimental validity. When your experiment is valid and reliable, you can expect your procedures and data to produce trustworthy results.

An excellent experimental design involves the following:

  • Lots of preplanning.
  • Developing experimental treatments.
  • Determining how to assign subjects to treatment groups.

The remainder of this article focuses on how experimental designs incorporate these essential items to accomplish their research goals.

Learn more about Data Reliability vs. Validity and Internal and External Experimental Validity.

Preplanning, Defining, and Operationalizing for Design of Experiments

A literature review is crucial for the design of experiments.

This phase of the design of experiments helps you identify critical variables, know how to measure them while ensuring reliability and validity, and understand the relationships between them. The review can also help you find ways to reduce sources of variability, which increases your ability to detect treatment effects. Notably, the literature review allows you to learn how similar studies designed their experiments and the challenges they faced.

Operationalizing a study involves taking your research question, using the background information you gathered, and formulating an actionable plan.

This process should produce a specific and testable hypothesis using data that you can reasonably collect given the resources available to the experiment. For example, in a bone density study:

  • Null hypothesis: The jumping exercise intervention does not affect bone density.
  • Alternative hypothesis: The jumping exercise intervention affects bone density.

To learn more about this early phase, read Five Steps for Conducting Scientific Studies with Statistical Analyses.

Formulating Treatments in Experimental Designs

In an experimental design, treatments are variables that the researchers control. They are the primary independent variables of interest. Researchers administer the treatment to the subjects or items in the experiment and want to know whether it causes changes in the outcome.

As the name implies, a treatment can be medical in nature, such as a new medicine or vaccine. But it’s a general term that applies to other things such as training programs, manufacturing settings, teaching methods, and types of fertilizers. I helped run an experiment where the treatment was a jumping exercise intervention that we hoped would increase bone density. All these treatment examples are things that potentially influence a measurable outcome.

Even when you know your treatment generally, you must carefully consider the amount. How large of a dose? If you’re comparing three different temperatures in a manufacturing process, how far apart are they? For my bone mineral density study, we had to determine how frequently the exercise sessions would occur and how long each lasted.

How you define the treatments in the design of experiments can affect your findings and the generalizability of your results.

Assigning Subjects to Experimental Groups

A crucial decision for all experimental designs is determining how researchers assign subjects to the experimental conditions: the treatment and control groups. The control group often, but not always, receives no treatment. It serves as a basis for comparison by showing outcomes for subjects who don’t receive the treatment. Learn more about Control Groups.

How your experimental design assigns subjects to the groups affects how confident you can be that the findings represent true causal effects rather than mere correlation caused by confounders. Indeed, the assignment method influences how you control for confounding variables. This is the difference between correlation and causation .

Imagine a study finds that vitamin consumption correlates with better health outcomes. As a researcher, you want to be able to say that vitamin consumption causes the improvements. However, with the wrong experimental design, you might only be able to say there is an association. A confounder, and not the vitamins, might actually cause the health benefits.

Let’s explore some of the ways to assign subjects in design of experiments.

Completely Randomized Designs

A completely randomized experimental design randomly assigns all subjects to the treatment and control groups. You simply take each participant and use a random process to determine their group assignment. You can flip coins, roll a die, or use a computer. Randomized experiments must be prospective studies because they need to be able to control group assignment.
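The coin-flip procedure described above can be sketched in code. This is an illustrative Python sketch, not part of the original article; the participant labels and the fixed seed are hypothetical choices for reproducibility.

```python
import random

def coin_flip_assignment(subjects, seed=42):
    """Assign each subject to the control or treatment group with an
    independent fair coin flip (simple random assignment)."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    return {s: rng.choice(["control", "treatment"]) for s in subjects}

# Hypothetical participant IDs P1..P10
groups = coin_flip_assignment([f"P{i}" for i in range(1, 11)])
```

Note that a plain coin flip can produce unequal group sizes; a shuffle-and-split approach (shown later) guarantees equal groups.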

Random assignment in the design of experiments helps ensure that the groups are roughly equivalent at the beginning of the study. This equivalence at the start increases your confidence that any differences you see at the end were caused by the treatments. The randomization tends to equalize confounders between the experimental groups and, thereby, cancels out their effects, leaving only the treatment effects.

For example, in a vitamin study, the researchers can randomly assign participants to either the control or vitamin group. Because the groups are approximately equal when the experiment starts, if the health outcomes are different at the end of the study, the researchers can be confident that the vitamins caused those improvements.

Statisticians consider randomized experimental designs to be the best for identifying causal relationships.

If you can’t randomly assign subjects but want to draw causal conclusions about an intervention, consider using a quasi-experimental design.

Learn more about Randomized Controlled Trials and Random Assignment in Experiments .

Randomized Block Designs

Nuisance factors are variables that can affect the outcome, but they are not the researcher’s primary interest. Unfortunately, they can hide or distort the treatment results. When experimenters know about specific nuisance factors, they can use a randomized block design to minimize their impact.

This experimental design takes subjects with a shared “nuisance” characteristic and groups them into blocks. The participants in each block are then randomly assigned to the experimental groups. This process allows the experiment to control for known nuisance factors.

Blocking in the design of experiments reduces the impact of nuisance factors on experimental error. The analysis assesses the effects of the treatment within each block, which removes the variability between blocks. The result is that blocked experimental designs can reduce the impact of nuisance variables, increasing the ability to detect treatment effects accurately.

Suppose you’re testing various teaching methods. Because grade level likely affects educational outcomes, you might use grade level as a blocking factor. To use a randomized block design for this scenario, divide the participants by grade level and then randomly assign the members of each grade level to the experimental groups.
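The grade-level blocking scenario can be sketched as follows. This is an illustrative Python sketch, not part of the original article; the student names, grades, and teaching-method labels are hypothetical.

```python
import random
from collections import defaultdict

def randomized_block(subjects, block_of, treatments, seed=1):
    """Group subjects into blocks (e.g., by grade level), then randomly
    assign treatments within each block so every block sees all treatments."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for s in subjects:
        blocks[block_of(s)].append(s)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)  # randomize order within the block
        for i, s in enumerate(members):
            assignment[s] = treatments[i % len(treatments)]  # cycle treatments
    return assignment

# Hypothetical students: four per grade level (grades 9 and 10)
students = [(f"s{i}", grade) for grade in (9, 10)
            for i in range(grade * 10, grade * 10 + 4)]
plan = randomized_block(students, block_of=lambda s: s[1],
                        treatments=["method_A", "method_B"])
```

Because assignment happens inside each block, both teaching methods appear equally often in every grade level, which is exactly the control over the nuisance factor that blocking provides.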

A standard guideline for an experimental design is to “Block what you can, randomize what you cannot.” Use blocking for a few primary nuisance factors. Then use random assignment to distribute the unblocked nuisance factors equally between the experimental conditions.

You can also use covariates to control nuisance factors. Learn about Covariates: Definition and Uses.

Observational Studies

In some experimental designs, randomly assigning subjects to the experimental conditions is impossible or unethical. The researchers simply can’t assign participants to the experimental groups. However, they can observe them in their natural groupings, measure the essential variables, and look for correlations. These observational studies are also known as quasi-experimental designs. Retrospective studies must be observational in nature because they look back at past events.

Imagine you’re studying the effects of depression on an activity. Clearly, you can’t randomly assign participants to the depression and control groups. But you can observe participants with and without depression and see how their task performance differs.

Observational studies let you perform research when you can’t control the treatment. However, quasi-experimental designs increase the problem of confounding variables. For this design of experiments, correlation does not necessarily imply causation. While special procedures can help control confounders in an observational study, you’re ultimately less confident that the results represent causal findings.

Learn more about Observational Studies.

For a good comparison, learn about the differences and tradeoffs between Observational Studies and Randomized Experiments.

Between-Subjects vs. Within-Subjects Experimental Designs

When you think of the design of experiments, you probably picture a treatment and control group. Researchers assign participants to only one of these groups, so each group contains entirely different subjects than the other groups. Analysts compare the groups at the end of the experiment. Statisticians refer to this method as a between-subjects, or independent measures, experimental design.

In a between-subjects design, you can have more than one treatment group, but each subject is exposed to only one condition: the control group or one of the treatment groups.

A potential downside to this approach is that differences between groups at the beginning can affect the results at the end. As you’ve read earlier, random assignment can reduce those differences, but it is imperfect. There will always be some variability between the groups.

In a within-subjects experimental design, also known as repeated measures, subjects experience all treatment conditions and are measured for each. Each subject acts as their own control, which reduces variability and increases the statistical power to detect effects.

In this experimental design, you minimize pre-existing differences between the experimental conditions because they all contain the same subjects. However, the order of treatments can affect the results. Beware of practice and fatigue effects. Learn more about Repeated Measures Designs .

| Between-Subjects Design | Within-Subjects Design |
|---|---|
| Assigned to one experimental condition | Participates in all experimental conditions |
| Requires more subjects | Requires fewer subjects |
| Differences between subjects in the groups can affect the results | Uses the same subjects in all conditions |
| No treatment order effects | Order of treatments can affect results |

Design of Experiments Examples

For example, a bone density study has three experimental groups—a control group, a stretching exercise group, and a jumping exercise group.

In a between-subjects experimental design, scientists randomly assign each participant to one of the three groups.

In a within-subjects design, all subjects experience the three conditions sequentially while the researchers measure bone density repeatedly. The procedure can switch the order of treatments for the participants to help reduce order effects.

Matched Pairs Experimental Design

A matched pairs experimental design is a between-subjects study that uses pairs of similar subjects. Researchers use this approach to reduce pre-existing differences between experimental groups. It’s yet another design of experiments method for reducing sources of variability.

Researchers identify variables likely to affect the outcome, such as demographics. When they pick a subject with a set of characteristics, they try to locate another participant with similar attributes to create a matched pair. Scientists randomly assign one member of a pair to the treatment group and the other to the control group.
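The pairing-and-assignment process can be sketched as below. This is an illustrative Python sketch, not part of the original article; it matches on a single hypothetical variable (age) by sorting and pairing neighbors, whereas real studies often match on several demographics at once.

```python
import random

def matched_pairs(subjects, match_key, seed=7):
    """Sort subjects by the matching variable, pair neighbors, and randomly
    send one member of each pair to treatment and the other to control."""
    rng = random.Random(seed)
    ordered = sorted(subjects, key=match_key)
    treatment, control = [], []
    for i in range(0, len(ordered) - 1, 2):
        a, b = ordered[i], ordered[i + 1]  # two subjects with similar values
        if rng.random() < 0.5:
            a, b = b, a                    # randomize who gets the treatment
        treatment.append(a)
        control.append(b)
    return treatment, control

# Hypothetical participants as (id, age) pairs
people = [("p1", 23), ("p2", 24), ("p3", 31), ("p4", 30),
          ("p5", 45), ("p6", 44), ("p7", 52), ("p8", 50)]
treat, ctrl = matched_pairs(people, match_key=lambda p: p[1])
```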

On the plus side, this process creates two similar groups, and it doesn’t create treatment order effects. While a matched pairs design does not produce the perfectly matched groups of a within-subjects design (which uses the same subjects in all conditions), it reduces variability between groups relative to an ordinary between-subjects study.

On the downside, finding matched pairs is very time-consuming. Additionally, if one member of a matched pair drops out, the other subject must leave the study too.

Learn more about Matched Pairs Design: Uses & Examples.

Another consideration is whether you’ll use a cross-sectional design (one point in time) or use a longitudinal study to track changes over time.

A case study is a research method that often serves as a precursor to a more rigorous experimental design by identifying research questions, variables, and hypotheses to test. Learn more about What is a Case Study? Definition & Examples.

In conclusion, the design of experiments is extremely sensitive to subject area concerns and the time and resources available to the researchers. Developing a suitable experimental design requires balancing a multitude of considerations. A successful design is necessary to obtain trustworthy answers to your research question and to have a reasonable chance of detecting treatment effects when they exist.


Experimental Designs

Overview

Questions
  • What are my experimental units?
  • How will treatments be assigned?
  • What are some types of experimental design?

Objectives
  • Identify different sources of error to increase accuracy and minimize sources of error
  • Select an appropriate experimental design for a specific study

A designed experiment is a strategic attempt to answer a research question or problem. Experiments give us a way to compare the effects of two or more treatments of interest. When well designed, experiments minimize any bias in this comparison. When we control experiments, that control gives us the ability to make stronger inferences about the differences we see; specifically, experiments allow us to make inferences about causation. Sample research questions are given below.

  • Does the drug affect condition X in humans?
  • Does the diet affect phenotype Y in mice?

The research question suggests how an experiment might be carried out to find an answer. In the questions above, the treatments are either a drug or a diet. The experimental units are those things to which we apply treatments.

Discussion: What are the experimental units in each of the research questions above?

Solution
1. Research question: Does the drug affect condition X in humans? Experimental unit: an individual person on the drug or a placebo.
2. Research question: Does the diet affect phenotype Y in mice? Experimental unit: a cage of mice on the treatment or control diet.

Defining the experimental unit is not necessarily straightforward. An experimental unit should be able to receive any treatment. In the second example, all mice in a cage must receive either the treatment or the control diet, so the cage is the experimental unit. Each individual mouse is a measurement unit, since we measure the response of each mouse to the diet; we don’t measure how the entire cage of mice responds to the diet as a whole.

Well-designed experiments are characterized by three features: randomization, replication, and control. These features help to minimize the impact of experimental error and factors not under study.

Randomization

In a randomized experiment, the investigators randomly assign subjects to treatment and control groups in order to minimize bias and moderate experimental error. A random number table or generator can be used to assign random numbers to experimental units (the unit or subject tested upon) so that any experimental unit has equal chances of being assigned to treatment or control. The random number then determines to which group an experimental unit belongs. For example, odd-numbered experimental units could go in the treatment group, and even-numbered experimental units in the control group.

Here is an example of randomization using a random number generator. If the random number is even, the sample is assigned to the control group. If odd, the sample is assigned to the treatment group.

This might produce unequal numbers in the treatment and control groups. Equal numbers aren’t strictly necessary; however, sensitivity (the true positive rate, or the ability to detect an effect when it truly exists) is maximized when group sizes are equal.

To randomly assign samples to groups with equal numbers, shuffle the sample IDs and then split the shuffled list in half.
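The shuffle-and-split procedure can be sketched in code. This is an illustrative Python sketch, not part of the original lesson (whose data work is in R); the sample IDs are hypothetical.

```python
import random

def equal_groups(sample_ids, seed=3):
    """Shuffle the sample IDs, then split the list in half so the
    treatment and control groups end up the same size."""
    rng = random.Random(seed)
    ids = list(sample_ids)
    rng.shuffle(ids)  # random order breaks any link to ID, batch, or handler
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# Hypothetical samples A through T
groups = equal_groups([f"sample_{c}" for c in "ABCDEFGHIJKLMNOPQRST"])
```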

Discussion: Why not assign treatment and control groups to samples in alphabetical order? Did we really need a random number generator to obtain randomized equal groups?

Solution
1. One technician processed samples A through M, and a different technician processed samples N through Z.
2. Samples A through M were processed on a Monday, and samples N through Z on a Tuesday.
3. Samples A through M were from one strain, and samples N through Z from a different strain.
4. Samples with consecutive IDs were all sibling groups. For example, samples A, B, and C were all siblings, and all were assigned to the same treatment.

All of these cases would introduce an effect (from the technician, the day of the week, the strain, or sibling relationships) that would confound the results and lead to misinterpretation.

Replication

Replication can characterize variation, or experimental error (“noise”), in an experiment. Experimental error can be classified into three general types: systematic, biological, and random. Systematic and biological errors are consistent: if you repeat an experiment, you’ll get the same error. Random error is inconsistent: it is unpredictable and has no pattern.

  • Systematic error can be characterized with technical replicates, which measure the same sample multiple times and estimate the variation caused by equipment or protocols.
  • Biological error can be characterized with biological replicates, which measure different biological samples in parallel to estimate the variation caused by the unique biology of the samples.
  • Random error cannot be characterized by replication because it is not a consistent source of error.

The greater the number of replicates, the greater the precision (the closeness of two or more measurements to each other). A sample size large enough to provide high precision is necessary for reproducible results.


Exercise 1: Which kind of error?
A study used to determine the effect of a drug on weight loss could have the following sources of experimental error. Classify each source as biological, systematic, or random error.
1. A scale is broken and provides inconsistent readings.
2. A scale is calibrated wrongly and consistently measures mice 1 gram heavier.
3. A mouse has an unusually high weight compared to its experimental group (i.e., it is an outlier).
4. Strong atmospheric low pressure and accompanying storms affect instrument readings, animal behavior, and indoor relative humidity.

Solution to Exercise 1
1. Random, because the broken scale provides inconsistent readings.
2. Systematic.
3. Biological.
4. Random or systematic; you argue which and explain why.

These three sources of error can be mitigated by good experimental design. Systematic and biological error can be mitigated through adequate numbers of technical and biological replicates, respectively. Random error can also be mitigated by experimental design, but replicates are not effective against it: by definition, random error is unpredictable or unknowable. For example, an atmospheric low-pressure system or a strong storm could affect equipment measurements, animal behavior, and indoor relative humidity, introducing random error. We could assume that all random error will balance itself out, and that with a completely randomized design each sample is equally subject to random error. A more precise way to mitigate random error is through blocking. Here are some ways to do that, presented in increasing order of complexity.

Local control

Local control refers to refinements in experimental design to control the impact of factors not addressed by replication or randomization (random error). Local control should not be confused with the control group, the group that does not receive treatment.

Completely randomized design

The completely randomized design is simple and common in controlled experiments. In a completely randomized design, each experimental unit (e.g. mouse) has an equal probability of assignment to any treatment. The following example demonstrates a completely randomized design for 4 treatment groups and 5 replicates of each treatment group, for a total of 20 experimental units.

By assigning 5 experimental units to each treatment group, the numbers in each group are equal. A completely randomized design will work with unequal numbers, though.
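The 4-treatment, 5-replicate layout can be sketched as below. This is an illustrative Python sketch, not part of the original lesson; the treatment labels A–D are hypothetical.

```python
import random

def completely_randomized(treatments, reps, seed=5):
    """Lay out `reps` copies of each treatment, then shuffle so every
    experimental unit has an equal chance of receiving any treatment."""
    layout = [t for t in treatments for _ in range(reps)]
    rng = random.Random(seed)
    rng.shuffle(layout)
    return layout  # layout[i] is the treatment for experimental unit i

# 4 treatment groups x 5 replicates = 20 experimental units
layout = completely_randomized(["A", "B", "C", "D"], reps=5)
```

Building the list with a fixed number of replicates per treatment and then shuffling guarantees equal group sizes while keeping the assignment random.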

In a completely randomized design, any difference between experimental units under the same treatment is considered (biological, systematic, and/or random) experimental error. A completely randomized design is appropriate only for experiments with homogeneous experimental units (e.g., mice should be of same sex, strain, age, etc.) where environmental effects, such as light or temperature, are relatively easy to control.

Randomized complete block design

As an example of local control, if a rack of many mouse cages is heterogeneous with respect to light exposure, the rack can be divided into smaller blocks such that the cages within each block are more homogeneous (receive equal light exposure). Each block then receives all treatments, rather than only one or a few. This homogeneity within blocks ensures an unbiased comparison of treatment means; otherwise it would be difficult to attribute a mean difference solely to the treatments when differences in cage light exposure persist. This type of local control not only increases the accuracy of the experiment but also helps in arriving at valid conclusions.

The randomized complete block design is a popular experimental design suited for studies of the effect of a single factor on a response of interest, where the study also includes variability from another factor that is not of particular interest, often referred to as a nuisance factor. The primary distinguishing feature of the randomized complete block design is that the blocks are of equal size and contain all of the treatments, which controls the effects of variables that are not of interest. For example, a block may refer to an area that receives a certain amount of light; within one area (or block) the light doesn’t differ much, but across areas (blocks) it may differ greatly. Blocking reduces (biological and systematic) experimental error by eliminating known sources of variation among experimental units.

If certain operations, such as data collection, cannot be completed for the whole experiment in one day, the task should be completed for all experimental units of the same block on the same day. This way the variation among days becomes part of block variation and is, thus, excluded from the experimental error. If more than one person takes measurements in a trial, the same person should be assigned to take measurements for the entire block. This way the variation among people (i.e. technicians) would become part of block variation instead of experimental error.

For example, if a rack of mouse cages (e.g., 6 rows by 3 columns) are to be used in an experiment possibly affected by light exposure, researchers may choose to use cages from several of the rows and columns so as to ensure that the effect of light exposure is minimized; the assumption being that cage position (top-to-bottom) in the rack corresponds to varying amounts of light exposure.


In this example, there are three different treatments (A, B, and C). The number of rows (or blocks) equals the number of replicates. Since we are interested in how light exposure differs from top to bottom, the blocks should correspond to rows in the rack, as each row is believed to receive a different amount of light. It is not necessary to have enough replicates to account for every possible ordering of the treatments, nor is there any need for more replicates than that. In this example, the six replicates happen to cover all possible orderings of the three treatment groups.
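The row-by-row layout can be sketched as below. This is an illustrative Python sketch, not part of the original lesson; it randomizes the order of treatments A, B, and C independently within each of six blocks (rows).

```python
import random

def randomized_complete_block(n_blocks, treatments, seed=11):
    """Randomized complete block design: each block (e.g., a row in the
    cage rack) receives every treatment exactly once, in an
    independently randomized order."""
    rng = random.Random(seed)
    design = []
    for _ in range(n_blocks):
        order = list(treatments)
        rng.shuffle(order)  # randomize treatment order within this block
        design.append(order)
    return design

# 6 rows (blocks) of a rack, 3 treatments per row
design = randomized_complete_block(6, ["A", "B", "C"])
```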

The randomized block design controls a source of random variation (a random effect) which might otherwise confound the effect of a treatment, and is of no interest. This design will have one or more treatments (fixed effects) which are of interest. The design is used to increase power by controlling variation from random effects, such as shelf height or illumination. It is also useful for breaking the experiment up into smaller, more convenient mini-experiments.

Latin Square Design

Latin square designs are unique in that they allow for (and require) two blocking factors. These designs are used to simultaneously control (or eliminate) two sources of nuisance variability while addressing the effect of (or variability caused by) one factor of interest. For a Latin square design to be created, each of the two blocking factors must have the same number of levels, and that number of levels must also be equal to the number of treatment (or factor of interest) levels.

For example, a Latin square design can be used if there was a study on the effect of five treatments that was done on five different days by five different technicians.


The blocks in this example would be technician (column) and day (row). The five different treatments (the factor of interest) are denoted by the letters A-E. We can remove the variation from our measured response to treatment in both directions if we consider both rows (day) and columns (technician) as factors in our design.

The Latin square design gets its name from the fact that we can write it as a square with Latin letters corresponding to the treatments. The treatment factor levels are the Latin letters in the square, and the number of rows and columns must equal the number of treatment levels. So, if we have five treatments, we need five rows and five columns to create a Latin square. This gives us a design in which each treatment appears exactly once in each row and in each column.
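A standard Latin square can be built by cyclic rotation. This is an illustrative Python sketch, not part of the original lesson; in practice, researchers would also randomize the order of rows and columns, which the sketch omits.

```python
def latin_square(treatments):
    """Standard Latin square built by cyclic rotation: each treatment
    appears exactly once in every row and every column."""
    n = len(treatments)
    # Row r is the treatment list rotated left by r positions
    return [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

# 5 treatments -> a 5x5 square (rows = days, columns = technicians)
square = latin_square(["A", "B", "C", "D", "E"])
```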

Exercise 2: True or false?
1. A completely randomized design can have different numbers in each treatment group.
2. Completely randomized designs tolerate environmental changes, such as lighting differences, over time or space.
3. A randomized block design ensures that the environment is the same for each experimental unit.
4. A randomized block design can be used when experimental units are heterogeneous in age or weight.

Solution to Exercise 2
1. True. Numbers in each treatment group can differ, though sensitivity (the true positive rate) could suffer.
2. False. A completely randomized design is appropriate only when environmental effects, such as light or temperature, are relatively easy to control.
3. False. Blocking makes the experimental units within each block more homogeneous; it does not make the environment identical for every unit.
4. True. Heterogeneous experimental units can be grouped into more homogeneous blocks, for example by age or weight.
Exercise 3: Random assignment to diet
Use this subset of data containing 20 males and 20 females and their baseline body weights to randomize the animals to two different diets: high fat and regular chow.

subset <- dat[c(51:70, 475:494), c("Sample", "Sex", "BW.3")]

1. Perform a complete randomization.
2. Perform a balanced randomization.
3. Check the sex ratio and the difference in body weights.
4. Share the mean body weight for each group on the course etherpad.

Solution to Exercise 3
This requires generation of random numbers. If diets are assigned in order, sample ID will be confounded with body weight whenever consecutive ID numbers were handled by the same person or in the same way.
Key Points
  • Good experimental design minimizes error in a study.
  • Well-designed experiments are randomized, have adequate replicates, and feature local control of environmental variables.
  • There are three types of error in experiments: systematic, biological, and random.

Experimental Design: Types, Examples & Methods

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.

The researcher must decide how they will allocate their sample to the different experimental groups.  For example, if there are 10 participants, will all 10 participate in both conditions (e.g., repeated measures), or will the participants be split in half and take part in only one condition each?

Three types of experimental designs are commonly used:

1. Independent Measures

Independent measures design, also known as between-groups, is an experimental design where different participants are used in each condition of the independent variable.  This means that each condition of the experiment includes a different group of participants.

This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to either group.

Independent measures involve using two separate groups of participants, one in each condition. For example:


  • Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
  • Pro : Avoids order effects (such as practice or fatigue) as people participate in one condition only.  If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition or become wise to the requirements of the experiment!
  • Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background.  These differences are known as participant variables (i.e., a type of extraneous variable).
  • Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).

2. Repeated Measures Design

Repeated Measures design is an experimental design where the same participants participate in each independent variable condition.  This means that each experiment condition includes the same group of participants.

Repeated Measures design is also known as within-groups or within-subjects design.

  • Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior.  Performance in the second condition may be better because the participants know what to do (i.e., practice effect).  Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
  • Pro : Fewer people are needed as they participate in all conditions (i.e., saves time).
  • Control : To combat order effects, the researcher counterbalances the order of the conditions across participants, alternating the order in which participants perform the different conditions of the experiment.

Counterbalancing

Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”

We expect the participants to learn better in “no noise” because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups, with the conditions presented in a different order to each: group 1 does ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This balances out order effects.

Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.
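A minimal Python sketch of this counterbalancing scheme (the participant labels are hypothetical):

```python
import random

def counterbalance(participants, conditions=("A", "B"), seed=None):
    """Randomly split the sample into two halves: one runs A then B, the other B then A."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    order_ab = shuffled[:half]     # these participants do 'A' then 'B'
    order_ba = shuffled[half:]     # these participants do 'B' then 'A'
    return {conditions: order_ab, conditions[::-1]: order_ba}
```

With equal numbers in each order, any practice or fatigue effect falls on condition A for half the sample and on condition B for the other half, so it cancels out in the group comparison.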

3. Matched Pairs Design

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then placed into the experimental group and the other member into the control group .

One member of each matched pair must be randomly assigned to the experimental group and the other to the control group.

  • Con : If one participant drops out, you lose the data of both members of the pair.
  • Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Con : Very time-consuming trying to find closely matched pairs.
  • Pro : It avoids order effects, so counterbalancing is not necessary.
  • Con : Impossible to match people exactly unless they are identical twins!
  • Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all these problems.

Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:

1. Independent measures / between-groups : Different participants are used in each condition of the independent variable.

2. Repeated measures /within groups : The same participants take part in each condition of the independent variable.

3. Matched pairs : Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1. To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.

The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2. To assess the difference in reading comprehension between 7- and 9-year-olds, a researcher recruited a group of each age from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.

3. To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.

At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4. To assess the effect of organization on recall, a researcher randomly assigned student volunteers to two conditions.

Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

The variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.

Experimental Design – Types, Methods, Guide

Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.
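A Python sketch of this blocking-then-randomizing procedure (the block labels and treatment names are placeholders):

```python
import random
from collections import defaultdict

def randomized_block(participants, block_of, treatments=("T1", "T2"), seed=None):
    """Group participants into blocks by a shared characteristic, then randomize
    treatment assignment *within* each block so treatments stay balanced per block."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for p in participants:
        blocks[block_of[p]].append(p)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        for i, p in enumerate(members):          # deal treatments round-robin
            assignment[p] = treatments[i % len(treatments)]
    return assignment
```

Because each block contributes equally to every treatment, the blocking characteristic (age, gender, etc.) cannot be confounded with the treatment effect.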

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.
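The set of conditions in a factorial design is simply the Cartesian product of the factor levels. A sketch using Python's itertools, with hypothetical factors:

```python
from itertools import product

# A 2x3 factorial design: every combination of the two factors is one condition.
dosage = ["low", "high"]
schedule = ["daily", "weekly", "monthly"]

conditions = list(product(dosage, schedule))
# 2 levels x 3 levels = 6 conditions, e.g. ("low", "daily"), ("high", "monthly"), ...
```

Participants would then be randomly assigned to one of these six combinations.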

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.
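Randomizing the order of conditions separately for each participant can be sketched as follows (the condition names are illustrative):

```python
import random

def random_condition_orders(participants, conditions, seed=None):
    """Give each participant their own independently shuffled order of the conditions."""
    rng = random.Random(seed)
    orders = {}
    for p in participants:
        order = list(conditions)
        rng.shuffle(order)
        orders[p] = order
    return orders
```

Across many participants, each condition should appear in each serial position about equally often, diluting order effects.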

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, one hard-to-change factor is applied to whole plots (larger experimental units), while a second factor is randomized to subplots within each whole plot; a randomized block structure is used to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
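Python's standard statistics module covers these summaries directly; the scores below are invented for illustration:

```python
import statistics

scores = [12, 15, 15, 18, 20, 22, 25]   # hypothetical dependent-variable scores

summary = {
    "mean": statistics.mean(scores),
    "median": statistics.median(scores),
    "mode": statistics.mode(scores),
    "range": max(scores) - min(scores),
    "stdev": statistics.stdev(scores),   # sample standard deviation
}
```

These summaries are typically reported per condition before any inferential tests are run.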

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
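The one-way ANOVA F statistic can be computed from first principles, as this sketch shows (it omits the p-value lookup that a full analysis would add):

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-groups variance over within-groups variance."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k = len(groups)                  # number of groups
    n = len(all_scores)              # total observations
    # Between-groups sum of squares: how far each group mean is from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of scores around their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

A large F means the group means differ by more than the within-group noise would predict; in practice the statistic is compared against the F distribution with (k − 1, n − k) degrees of freedom.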

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
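For simple (one-predictor) linear regression, the least-squares slope and intercept follow directly from the means; a minimal sketch with made-up data:

```python
def linear_regression(xs, ys):
    """Ordinary least-squares slope and intercept for simple linear regression."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of x and y divided by the variance of x
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = linear_regression([1, 2, 3, 4], [3, 5, 7, 9])
```

The sign and magnitude of the slope give the direction and strength of the relationship between the two variables.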

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture : Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology : Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering : Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education : Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing : Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question : Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment : Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment : Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, it is retained; if they do not, it is rejected or revised.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication : Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision : Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability : If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality : Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias : Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time : Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias : Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility : Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


Completely Randomized Design: The One-Factor Approach

David Costello

Completely Randomized Design (CRD) is a research methodology in which experimental units are randomly assigned to treatments without any systematic bias. CRD gained prominence in the early 20th century, largely attributed to the pioneering work of statistician Ronald A. Fisher . His method addressed the inherent variability in experimental units by randomly assigning treatments, thus countering potential biases. Today, CRD serves as an indispensable tool in various domains, including agriculture, medicine, industrial engineering, and quality control analysis.

CRD is particularly favored in situations with limited control over external variables. By leveraging its inherent randomness, CRD neutralizes potentially confounding factors. As a result, each experimental unit has an equal likelihood of receiving any specific treatment, ensuring a level playing field. Such random allocation is pivotal in eliminating systematic bias and bolstering the validity of experimental conclusions.

While CRD may sometimes necessitate larger sample sizes, the improved accuracy and consistency it introduces to results often justify this requirement.

Understanding CRD

At its core, CRD is centered on harnessing randomness to achieve objective experimental outcomes. This approach effectively addresses unanticipated extraneous variables —those not included in the study design but that can still influence the response variable. In the context of CRD, these extraneous variables are expected to be uniformly distributed across treatments, thereby mitigating their potential influence.

A key aspect of CRD is the single-factor experiment. This means that the experiment revolves around changing or manipulating one primary independent variable (or factor) to ascertain its effect on the dependent variable . Consider these examples across different fields:

  • Medical: An experiment might be designed where the independent variable is the dosage of a new drug, and the dependent variable is the speed of patient recovery. Researchers would vary the drug dosage and observe its effect on recovery rates.
  • Agriculture: An agricultural study could alter the amount of water irrigation (independent variable) given to crops and measure the resulting crop yield (dependent variable) to determine the optimal irrigation level.
  • Psychology: A psychologist might introduce different intensities of a visual cue (independent variable) to participants and then measure their reaction times (dependent variable) to understand the cue's influence.
  • Environmental Science: Scientists might introduce different concentrations of a pollutant (independent variable) to a freshwater pond and measure the health and survival rate of aquatic life (dependent variable) in response.
  • Education: In an educational setting, researchers could change the duration of digital learning (independent variable) students receive daily and then observe its effect on test scores (dependent variable) at the end of the term.
  • Engineering: In material science, an experiment might adjust the temperature (independent variable) during the curing process of a polymer and then measure its resultant tensile strength (dependent variable).

For each of these scenarios, only one key factor or independent variable is intentionally varied, while any changes or outcomes in another variable (the dependent variable) are observed and recorded. This distinct focus on a single variable, while keeping all others constant or controlled, underscores the essence of the single-factor experiment in CRD.
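
The random placement these examples rely on can be sketched in a few lines of Python. The function name and the plot/irrigation labels below are illustrative, not part of any standard library: units are shuffled with a seeded random number generator and dealt out to treatment groups of roughly equal size.

```python
import random

def assign_crd(units, treatments, seed=None):
    """Randomly assign experimental units to treatments (completely randomized design).

    Units are shuffled, then dealt out to the treatments in turn,
    yielding groups of roughly equal size.
    """
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    assignment = {t: [] for t in treatments}
    for i, unit in enumerate(shuffled):
        assignment[treatments[i % len(treatments)]].append(unit)
    return assignment

# Hypothetical example: 12 field plots assigned to three irrigation levels
plots = [f"plot_{i}" for i in range(12)]
groups = assign_crd(plots, ["low", "medium", "high"], seed=42)
for level, members in groups.items():
    print(level, members)
```

Seeding the generator makes the assignment reproducible for the research record while keeping it free of systematic bias.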

Advantages of CRD

Understanding the strengths of Completely Randomized Design is pivotal for effectively applying this research tool and interpreting results accurately. Below is an exploration of the benefits of employing CRD in research studies.

  • Simplicity: One of the most appealing features of CRD is its straightforwardness. Focusing on a single primary factor, CRD is easier to understand and implement compared to more complex research designs.
  • Flexibility: CRD enhances versatility by allowing the inclusion of various experimental units and treatments through random assignment, enabling researchers to explore a range of variables.
  • Robustness: Despite its simplicity, CRD stands as a robust research tool. The consistent use of randomization minimizes biases and uniformly distributes the effects of uncontrolled variables across all groups, contributing to the reliability of the results.
  • Generalizability: Proper application of CRD enables the extension of research findings to a broader population. The minimization of selection bias , thanks to random assignment, increases the probability that the sample closely represents the larger population.

Disadvantages of CRD

While CRD is marked by simplicity, flexibility, robustness, and enhanced generalizability, it is essential to carefully consider its limitations. A thoughtful analysis of these aspects will guide researchers in making informed decisions about the applicability of CRD to their specific research context.

  • Ignoring Nuisance Variables: CRD operates primarily under the assumption that all treatments are equivalent aside from the independent variable. If strong nuisance factors vary systematically across treatments, this assumption becomes a limitation, making CRD less suitable for studies where nuisance variables significantly impact the results.
  • Need for Large Sample Size: The pooling of all experimental units into one extensive set necessitates a larger sample size, potentially leading to increased time, cost, and resource investment.
  • Inefficiency in Some Cases: CRD might demonstrate statistical inefficiency with significant within-treatment group variability . In such cases, other designs that account for this variability may offer enhanced efficiency.

Differentiating CRD from other research design methods

CRD stands out in the realm of research designs due to its foundational simplicity. While its essence lies in the random assignment of experimental units to treatments without any systematic bias, other designs introduce varying layers of complexity tailored to specific experimental needs.

For instance, consider the Randomized Block Design (RBD) . Unlike the straightforward approach of CRD, RBD divides experimental units into homogenous blocks, based on known sources of variability, before assigning treatments. This method is especially useful when there's an identifiable source of variability that researchers wish to control for. Similarly, the Latin Square Design , while also involving random assignment, operates on a grid system to simultaneously control for two lurking variables , adding another dimension of complexity not found in CRD.

Factorial Design investigates the effects and interactions of multiple independent variables. This design can reveal interactions that might be overlooked in simpler designs. Then there's the Crossover Design , often used in medical trials. Unlike CRD, where each unit experiences only one treatment, in Crossover Design, participants receive multiple treatments over different periods, allowing each participant to serve as their own control.

The choice of research design, whether it be CRD, RBD, Latin Square, or any of the other methods available, is fundamentally guided by the nature of the research question , the characteristics of the experimental units, and the specific objectives the study aims to achieve. However, it's the inherent simplicity and flexibility of CRD that often makes it the go-to choice, especially in scenarios with many units or treatments, where intricate stratification or blocking isn't necessary.

Let us further explore the advantages and disadvantages of each method.

| Research Design | Description | Key Features | Advantages | Disadvantages |
|---|---|---|---|---|
| Completely Randomized Design (CRD) | Employs random assignment of experimental units to treatments without any systematic bias. | Simple and flexible; each unit experiences only one treatment | Simple structure makes it easy to implement | Does not control for other variables; may require a larger sample size |
| Randomized Block Design (RBD) | Divides experimental units into homogenous blocks based on known sources of variability before assigning treatments. | Controls for one source of variability; more complex than CRD | Controls for known variability, potentially increasing the precision of the experiment | More complex to implement and analyze |
| Latin Square Design | Uses a grid system to control for two lurking variables. | Controls for two sources of variability; adds complexity not found in CRD | Controls for two sources of variability | Complex design; may not be practical for all experiments |
| Factorial Design | Investigates the effects and interactions of multiple independent variables. | Reveals interactions; more complex design | Can assess interactions between factors | Complex and may require a large sample size |
| Crossover Design | Participants receive multiple treatments over different periods. | Each participant serves as their own control; often used in medical trials | Each participant can serve as their own control, potentially reducing variability | Period effects and carryover effects can complicate results |

While CRD's simplicity and flexibility make it a popular choice for many research scenarios, the optimal design depends on the specific needs, objectives, and contexts of the study. Researchers must carefully consider these factors to select the most suitable research design method.

The role of CRD in mitigating extraneous variables

Within the framework of experimental research, extraneous variables persistently challenge the validity of findings, potentially compromising the established relationship between independent and dependent variables . CRD is a methodological safeguard that systematically addresses these extraneous variables. Below, we describe specific types of extraneous variables and how CRD counteracts their potential influence:

  • Nuisance variables: Variables that induce variance in the dependent variable yet are not of primary academic interest. While they don't muddle the relationship between the primary variables, their presence can augment within-group variability, reducing statistical power. CRD's countermeasure: through the mechanism of random assignment, CRD ensures an equitably distributed influence of nuisance variables across all experimental conditions. This distribution, theoretically, leads to mutual nullification of their effects when assessing the efficacy of treatments.
  • Lurking variables: Variables not explicitly incorporated within the study design that can nonetheless influence its outcomes. Their impact often manifests post hoc, rendering them alternative explanations for observed phenomena. CRD's countermeasure: the random assignment intrinsic to CRD assures a uniform distribution of these lurking variables across experimental conditions. This diminishes the probability of them systematically influencing one group, thus safeguarding the experiment's conclusions.
  • Confounding variables: Variables that not only influence the dependent variable but also correlate with the independent variable. Their simultaneous influence can mislead interpretations of causality. CRD's countermeasure: random assignment ensures an equitable distribution of potential confounders among groups, bolstering confidence in attributing observed effects predominantly to the experimental treatments.
  • Control variables: Variables deliberately held constant so that they do not introduce variability into the experiment. CRD's countermeasure: while CRD focuses on randomization, the design inherently assumes that controlled variables remain constant across all experimental units. Maintaining these constants keeps the focus solely on the treatment effects, further validating the experiment's findings.

The foundational principle underpinning the Completely Randomized Design—randomization—serves as a bulwark against the influences of extraneous variables. By uniformly distributing these variables across experimental conditions, CRD enhances the validity and reliability of experimental outcomes. However, researchers should exercise caution and continuously evaluate potential extraneous influences, even in randomized designs.
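
A quick simulation illustrates this balancing effect. The "soil fertility" covariate below is hypothetical: over many random splits of the same units into two groups, the average between-group difference in the covariate is close to zero, which is what lets randomization neutralize extraneous variables on average (any single randomization can still be imbalanced by chance).

```python
import random
import statistics

rng = random.Random(0)

# Hypothetical hidden covariate (e.g., soil fertility) for 40 experimental units
fertility = [rng.gauss(50, 10) for _ in range(40)]

# Average the between-group difference in the covariate over many random splits
diffs = []
for _ in range(2000):
    idx = list(range(40))
    rng.shuffle(idx)
    group_a = [fertility[i] for i in idx[:20]]
    group_b = [fertility[i] for i in idx[20:]]
    diffs.append(statistics.mean(group_a) - statistics.mean(group_b))

# The mean difference is near zero: randomization balances the covariate on average
print(round(statistics.mean(diffs), 2))
```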

Selecting the independent variable

The selection of the independent variable is crucial for research design . This pivotal step not only shapes the direction and quality of the research but also underpins the understanding of causal relationships within the studied system, influencing the dependent variable or response. When choosing this essential component of experimental design , several critical considerations emerge:

  • Relevance: Paramount to the success of the experiment is the variable's direct relevance to the research query. For instance, in a botanical study of phototropism, the light's intensity or duration would naturally serve as the independent variable.
  • Measurability: The chosen variable should be quantifiable or categorizable, enabling distinctions between its varying levels or types.
  • Controllability: The research environment must allow for steadfast control over the variable, ensuring extraneous influences are kept at bay.
  • Ethical Considerations: In disciplines like social sciences or medical research, it's vital to consider the ethical implications . The chosen variable should withstand ethical scrutiny, safeguarding the well-being and rights of participants.

Identifying the independent variable necessitates a methodical and structured approach where each step aligns with the overarching research objective:

  • Review Literature: Thoroughly review existing literature to gain insights into past research and identify unexplored areas.
  • Define the Scope: Clearly delineating research boundaries is crucial. For example, when studying dietary impacts on metabolic health, the variable could span from diet types (like keto, vegan, Mediterranean) to specific nutrients.
  • Determine Levels of the Variable: This involves understanding the various levels or categories the independent variable might have. In educational research, one might look beyond simply "innovative vs. conventional methods" to a broader range of teaching techniques.
  • Consider Potential Outcomes: Anticipating possible outcomes based on variations in the independent variable is beneficial. If potential outcomes seem too vast, the variable might need further refinement.

In academic discourse, while CRD is praised for its rigor and clarity, the effectiveness of the design relies heavily on the meticulous selection of the independent variable. Making this choice with thorough consideration ensures the research offers valuable insights with both academic and wider societal implications.

Applications of CRD

CRD has found wide and varied applications in several areas of research. Its versatility and fundamental simplicity make it an attractive option for scientists and researchers across a multitude of disciplines.

CRD in agricultural research

Agricultural research was among the earliest fields to adopt the use of Completely Randomized Design. The broad application of CRD within agriculture not only encompasses crop improvement but also the systematic analysis of various fertilizers, pesticides, and cropping techniques. Agricultural scientists leverage the CRD framework to scrutinize the effects on yield enhancement and bolstered disease resistance. The fundamental randomization in CRD effectively mitigates the influence of nuisance variables such as soil variations and microclimate differences, ensuring more reliable and valid experimental outcomes.

Additionally, CRD in agricultural research paves the way for robust testing of new agricultural products and methods. The unbiased allocation of treatments serves as a solid foundation for accurately determining the efficacy and potential downsides of innovative fertilizers, genetically modified seeds, and novel pest control methods, contributing to informed decision-making and policy formulation in agricultural development.

However, the limitations of CRD within the agricultural context warrant acknowledgment. While it offers an efficient and straightforward approach for experimental design, CRD may not always capture spatial variability within large agricultural fields adequately. Such unaccounted variations can potentially skew results, underscoring the necessity for employing more intricate experimental designs, such as the Randomized Complete Block Design (RCBD), where necessary. This adaptation enhances the reliability and generalizability of the research findings, ensuring their applicability to real-world agricultural challenges.

CRD in medical research

The fields of medical and health research substantially benefit from the application of Completely Randomized Design, especially in executing randomized control trials. Within this context, participants, whether patients or others, are randomly assigned to either the treatment or control groups. This structured random allocation minimizes the impact of extraneous variables, ensuring that the groups are comparable. It fortifies the assertion that any discernible differences in outcomes are genuinely attributable to the treatment being analyzed, enhancing the robustness and reliability of the research findings.

CRD's randomized nature in medical research allows for a more objective assessment of varied medical treatments and interventions. By mitigating the influence of extraneous variables, researchers can more accurately gauge the effectiveness and potential side effects of novel medical approaches, including pharmaceuticals and surgical techniques. This precision is crucial for the continual advancement of medical science, offering a solid empirical foundation for the refinement of treatments that improve health outcomes and patient quality of life.

However, like other fields, the application of CRD in medical research has its limitations. Despite its effectiveness in controlling various factors, CRD may not always consider the complexity of human health conditions where multiple variables often interact in intricate ways. Hence, while CRD remains a valuable tool for medical research, it is crucial to apply it judiciously and alongside other research designs to ensure comprehensive and reliable insights into medical treatments and interventions.

CRD in industrial engineering

In industrial engineering, Completely Randomized Design plays a significant role in process and product testing, offering a reliable structure for the evaluation and improvement of industrial systems. Engineers often employ CRD in single-factor experiments to analyze the effects of a particular factor on a certain outcome, enhancing the precision and objectivity of the assessment.

For example, to discern the impact of varying temperatures on the strength of a metal alloy, engineers might utilize CRD. In this scenario, the different temperatures represent the single factor, and the alloy samples are randomly allocated to be tested at each designated temperature. This random assignment minimizes the influence of extraneous variables, ensuring that the observed effects on alloy strength are primarily attributable to the temperature variations.

CRD's implementation in industrial engineering also assists in the optimization of manufacturing processes. Through random assignment and structured testing, engineers can effectively evaluate process parameters, such as production speed, material quality, and machine settings. By accurately assessing the influence of these factors on production efficiency and product quality, engineers can implement informed adjustments and enhancements, promoting optimal operational performance and superior product standards. This systematic approach, anchored by CRD, facilitates consistent and robust industrial advancements, bolstering overall productivity and innovation in industrial engineering.

Despite these advantages, it's crucial to acknowledge the limitations of CRD in industrial engineering contexts. The design is efficient for single-factor experiments but may falter with experiments involving multiple factors and interactions, common in industrial settings. This limitation underscores the importance of combining CRD with other experimental designs. Doing so navigates the complex landscape of industrial engineering research, ensuring insights are comprehensive, accurate, and actionable for continuous innovation in industrial operations.

CRD in quality control analysis

Completely Randomized Design is also beneficial in quality control analysis, where ensuring the consistency of products is paramount.

For instance, a manufacturer keen on minimizing product defects may deploy CRD to empirically assess the effectiveness of various inspection techniques. By randomly assigning different inspection methods to identical or similar production batches, the manufacturer can gather data regarding the most effective techniques for identifying and mitigating defects, bolstering overall product quality and consumer satisfaction.

Furthermore, the utility of CRD in quality control extends to the analysis of materials, machinery settings, or operational processes that are pivotal to final product quality. This design enables organizations to rigorously test and compare assorted conditions or settings, ensuring the selection of parameters that optimize both quality and efficiency. This approach to quality analysis not only bolsters the reliability and performance of products but also significantly augments the optimization of organizational resources, curtailing wastage and improving profitability.

However, similar to other CRD applications, it is crucial to understand its limitations. While CRD can significantly aid in the analysis and optimization of various aspects of quality control, its effectiveness may be constrained when dealing with multi-factorial scenarios with complex interactions. In such situations, other experimental designs, possibly in tandem with CRD, might offer more robust and comprehensive insights, ensuring that quality control measures are not only effective but also adaptable to evolving industrial and market demands.

Future applications and emerging fields for CRD

The breadth of applications for Completely Randomized Design continues to expand. Emerging fields such as data science, business analytics, and environmental studies are increasingly recognizing the value of CRD in conducting reliable and uncomplicated experiments. In the realm of data science, CRD can be invaluable in assessing the performance of different algorithms, models, or data processing techniques. It enables researchers to randomize the variables, minimizing biases and providing a clearer understanding of the real-world applicability and effectiveness of various data-centric solutions.

In the domain of business analytics, CRD is paving the way for robust analysis of business strategies and initiatives. Businesses can employ CRD to randomly assign strategies or processes across various departments or teams, allowing for a comprehensive assessment of their impact. The insights from such assessments empower organizations to make data-driven decisions, optimizing their operations, and enhancing overall productivity and profitability. This approach is particularly crucial in the business environment of today, characterized by rapid changes, intense competition, and escalating customer expectations, where informed and timely decision-making is a key determinant of success.

Moreover, in environmental studies, CRD is increasingly being used to evaluate the impact of various factors on environmental health and sustainability. For example, researchers might use CRD to study the effects of different pollutants, conservation strategies, or land use patterns on ecosystem health. The randomized design ensures that the conclusions drawn are robust and reliable, providing a solid foundation for the development of policies and initiatives. As environmental concerns continue to mount, the role of reliable experimental designs like CRD in facilitating meaningful research and informed policy-making cannot be overstated.

Planning and conducting a CRD experiment

A CRD experiment involves meticulous planning and execution, outlined in the following structured steps. Each phase, from the preparatory steps to data collection and analysis, plays a pivotal role in bolstering the integrity and success of the experiment, ensuring that the findings stand as a valuable contribution to scientific knowledge and understanding.

  • Selecting Participants in a Random Manner: The heart of a CRD experiment is randomness. Regardless of whether the subjects are human participants, animals, plants, or objects, their selection must be truly random. This level of randomness ensures that every participant has an equal likelihood of being assigned to any treatment group, which plays a crucial role in eliminating selection bias.
  • Understanding and Selecting the Independent Variable: This is the variable of interest – the one that researchers aim to manipulate to observe its effects. Identifying and understanding this factor is pivotal. Its selection depends on the experiment's primary research question or hypothesis , and its clear definition is essential to ensuring the experiment's clarity and success.
  • The Process of Random Assignment in Experiments: Following the identification of subjects and the independent variable, researchers must randomly allocate subjects to various treatment groups. This process, known as random assignment, typically involves using random number generators or other statistical tools , ensuring that the principle of randomness is upheld.
  • Implementing the Single-factor Experiment: After random assignment, researchers can launch the main experiment. At this stage, they introduce the independent variable to the designated treatment groups, ensuring that all other conditions remain consistent across groups. The goal is to make certain that any observed effect or change is attributed only to the manipulation of the independent variable.
  • Data Cleaning and Preparation: The first step post-collection is to prepare and clean the data . This process involves rectifying errors, handling missing or inconsistent data, and eradicating duplicates. Employing tools like statistical software or languages such as Python and R can be immensely helpful. Handling outliers and maintaining consistency throughout the dataset is essential for accurate subsequent analysis.
  • Statistical Analysis Methods: The next step involves analyzing the data using appropriate statistical methodologies, dependent on the nature of the data and research questions . Analysis can range from basic descriptive statistics to complex inferential statistics or even advanced statistical modeling.
  • Interpreting the Results: Analysis culminates in the interpretation of results, wherein researchers draw conclusions based on the statistical outcomes. This stage is crucial in CRD, as it determines if observed effects can be attributed to the independent variable's manipulation or if they occurred purely by chance. Apart from statistical significance, the practical implications and relevance of the results also play a vital role in determining the experiment's success and potential real-world applications.
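
As a sketch of the statistical-analysis step, the one-way ANOVA F statistic is the classic test for CRD data with one factor at several levels. The fertilizer yields below are made-up numbers for illustration; the function computes the between-treatment and within-treatment mean squares from first principles.

```python
import statistics

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a completely randomized design.

    groups: list of lists, one list of response values per treatment group.
    """
    k = len(groups)                      # number of treatments
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = statistics.mean([x for g in groups for x in g])
    group_means = [statistics.mean(g) for g in groups]
    # Between-treatment and within-treatment sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    # Mean squares and the F ratio
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical crop yields under three fertilizer treatments
control = [20.1, 19.8, 21.0, 20.5]
low     = [22.3, 23.1, 21.9, 22.7]
high    = [25.0, 24.2, 25.8, 24.9]
f_stat = one_way_anova_f([control, low, high])
print(round(f_stat, 1))  # a large F suggests the treatment means differ beyond chance
```

In practice the F statistic would be compared against the F distribution with (k − 1, n − k) degrees of freedom, or computed with a statistics package, to obtain a p-value.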

Navigating common challenges in CRD

While the Completely Randomized Design offers numerous advantages, researchers often encounter specific challenges when implementing it in real-world experiments. Recognizing these challenges early and being prepared with strategies to address them can significantly improve the integrity and success of the CRD experiment. Let's delve into some of the most common challenges and explore potential solutions:

  • Lack of Homogeneity: One foundational assumption of CRD is the homogeneity of experimental units . However, in reality, there may be inherent variability among units. To mitigate this, researchers can use stratified sampling or consider employing a randomized block design.
  • Improper Randomization: The essence of CRD is randomization. However, it's not uncommon for some researchers to inadvertently introduce biases during the assignment. Utilizing computerized random number generators or statistical software can help ensure true randomization.
  • Limited Number of Experimental Units: Sometimes, the available experimental units might be fewer than required for a robust experiment. In such cases, using a larger number of replications can help, albeit at the cost of increased resources.
  • Extraneous Variables: These external factors can influence the outcome of an experiment. They make it hard to attribute observed effects solely to the independent variable. Careful experimental design, pre-experimental testing, and post-experimental analysis can help identify and control these extraneous variables.
  • Overlooking Practical Significance: Even if a CRD experiment yields statistically significant results, these might not always be practically significant. Researchers need to assess the real-world implications of their findings, considering factors like cost, feasibility, and the magnitude of observed effects.
  • Data-related Challenges: From missing data to outliers, data-related issues may skew results. Regular data cleaning, rigorous validation, and employing robust statistical methods can help address these challenges.

While CRD is a powerful tool in experimental research, its successful implementation hinges on the researcher's ability to anticipate, recognize, and navigate challenges that might arise. By being proactive and employing strategies to mitigate potential pitfalls, researchers can maximize the reliability and validity of their CRD experiments, ensuring meaningful and impactful results.

In summary, the Completely Randomized Design holds a pivotal place in the field of research owing to its simplicity and straightforward approach. Its essence lies in the unbiased random assignment of experimental units to various treatments, ensuring the reliability and validity of the results. Although it may not control for other variables and often requires larger sample sizes, its ease of implementation frequently outweighs these drawbacks, solidifying it as a preferred choice for researchers across many fields.

Looking ahead, the future of CRD remains bright. As research continues to evolve, we anticipate the integration of CRD with more sophisticated design techniques and advanced analytical tools. This synergy will likely enhance the efficiency and applicability of CRD in varied research contexts, perpetuating its legacy as a fundamental research design method. While other designs might offer more control and complexity, the fundamental simplicity of CRD will continue to hold significant value in the rapidly evolving research landscape.

Moving forward, it is imperative to champion continuous learning and exploration in the field of CRD. Engaging in educational opportunities, staying abreast of the latest research and advancements, and actively participating in pertinent discussions and forums can markedly enrich understanding and expertise in CRD. Embracing this ongoing learning journey will not only bolster individual research skills but also make a significant contribution to the broader scientific community, fueling innovation and discovery in numerous fields of study.

Header image by Alex Shuper.




Engineering LibreTexts

14.3: Design of Experiments via Random Design


  • Peter Woolf et al.
  • University of Michigan


Introduction

Random design is an approach to designing experiments. As the name implies, random experimental design involves randomly assigning experimental conditions. However, the conditions should not be picked without any thought. This type of experimental design is surprisingly powerful and often has a high probability of producing a near-optimal design.

The simplified steps for random design include the following:

  • Choose a number of experiments to run (NOTE: picking this number may be tricky, because it depends on the amount of signal recovery you want.)
  • Assign to each variable a state based on a uniform sample. For instance, if there are 5 states, each state has a probability of 20%.
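
The two steps above can be sketched as follows. The process variables and their states below are hypothetical; each run simply draws one state per variable uniformly at random, so with 5 states each state has a 20% chance of being chosen.

```python
import random

def random_design(n_experiments, variables, seed=None):
    """Generate a random experimental design.

    For each run, every variable is assigned one of its states,
    sampled uniformly at random.
    """
    rng = random.Random(seed)
    return [
        {var: rng.choice(states) for var, states in variables.items()}
        for _ in range(n_experiments)
    ]

# Hypothetical process variables, each with a set of discrete states
variables = {
    "temperature_C": [100, 150, 200, 250, 300],  # 5 states -> 20% each
    "pressure_atm": [1, 2, 5],
    "catalyst": ["A", "B"],
}
design = random_design(8, variables, seed=7)
for run in design:
    print(run)
```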

Random designs typically work well for large systems with many variables, 50 or more. There should be few interactions between variables and very few variables that contribute significantly. Random design does not work very well for relatively small systems. Generally speaking, Taguchi and random designs often perform better than factorial designs, depending on size and assumptions. When choosing the design for an experiment, it is important to select an efficient design that helps optimize the process and identifies the factors that influence variability.

There is more than one type of random design: randomized block design and completely randomized design. Randomized block design involves blocking, which is arranging experimental units into groups that share a common similarity. The blocking factor is usually not a primary source of variability. An example of a blocking factor might be a patient's eye color; if this source of variability is controlled, greater precision is achieved. In a completely randomized design, the groups are chosen at random.

In various technological fields, it is important to design experiments where a limited number of experiments is required. Random design is practical for many design applications. Extensive mathematical theory has been used to explore random experimental design. Examples of random design include areas of data compression and medical imaging. The research conducted to support the practical application of random design can be found at < http://groups.csail.mit.edu/drl/journal_club/papers/CS2-Candes-Romberg-05.pdf >.

Other research has been conducted recently on random design, and more information can be found at: ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1614066

Completely Randomized Design (CRD)

Description of design.

Completely randomized design (CRD) is the simplest type of design to use. The most important requirement for use of this design is homogeneity of experimental units.

Procedure for Randomization

  • Assign treatments to experimental units completely at random.
  • Verify that every experimental unit has the same probability of receiving any treatment.
  • Perform randomization by using a random number table, computer, program, etc.

Example of CRD

If you have 4 treatments (I, II, III, IV) and 5 replicates, how many experimental units do you have?

{I} {IV} {III} {II} {II} {III} {III} {II} {I} {III} {I} {IV} {III} {IV} {I} {IV} {II} {I} {II} {IV} =20 randomized experimental units
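A randomization like the one above can be generated in Python (a sketch, not the procedure used to produce the particular listing shown):

```python
import random

treatments = ["I", "II", "III", "IV"]
replicates = 5

# Pool of 20 labels: five copies of each treatment.
pool = treatments * replicates
# Shuffling gives every experimental unit the same probability of
# receiving any treatment, as CRD requires.
random.shuffle(pool)
# pool[i] is now the treatment assigned to experimental unit i.
```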

Randomized Block Design (RBD)

Randomized block design (RBD) takes advantage of grouping similar experimental units into blocks or replicates. One requirement of RBD is that the blocks of experimental units be as uniform as possible. The reason for grouping experimental units is so that the observed differences between treatments will be largely due to “true” differences between treatments and not random occurrences or chance.

  • Randomize each replicate separately.
  • Verify that each treatment has the same probability of being assigned to a given experimental unit within a replicate.
  • Have each treatment appear at least once per replicate.
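The randomization steps above can be sketched as follows (a minimal illustration; `randomized_block_design` is a hypothetical helper, and each replicate is assumed to contain every treatment exactly once):

```python
import random

def randomized_block_design(treatments, n_blocks):
    """Randomize each replicate (block) separately: every block is an
    independent permutation containing each treatment exactly once."""
    return [random.sample(treatments, len(treatments)) for _ in range(n_blocks)]

layout = randomized_block_design(["I", "II", "III", "IV"], n_blocks=5)
# layout[b][k] is the treatment in position k of block b.
```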

Advantages of RBD

  • Generally more precise than the CRD.
  • Some treatments may be replicated more times than others.
  • Missing plots are easily estimated.
  • Whole treatments or entire replicates may be deleted from the analysis.

3.3 - Experimental Design Terminology

In experimental design terminology, the "experimental unit" is randomized to the treatment regimen and receives the treatment directly. The "observational unit" has measurements taken on it. In most clinical trials, the experimental units and the observational units are one and the same, namely, the individual patient.

One exception to this is a community intervention trial in which communities, e.g., geographic regions, are randomized to treatments. For example, communities (experimental units) might be randomized to receive different formulations of a vaccine, whereas the effects are measured directly on the subjects (observational units) within the communities. The advantages here are strictly logistical - it is simply easier to implement in this fashion. Another example occurs in reproductive toxicology experiments in which female rodents are exposed to a treatment (experimental units) but measurements are taken on the pups (observational units).

In experimental design terminology, factors are variables that are controlled and varied during the course of the experiment. For example, treatment is a factor in a clinical trial with experimental units randomized to treatment. Another example is pressure and temperature as factors in a chemical experiment.

Most clinical trials are structured as one-way designs, i.e., only one factor, treatment, with a few levels.

Temperature and pressure in the chemical experiment are two factors that comprise a two-way design in which it is of interest to examine various combinations of temperature and pressure. Some clinical trials may have a two-way factorial design , such as in oncology where various combinations of doses of two chemotherapeutic agents comprise the treatments. An incomplete factorial design may be useful if it is inappropriate to assign subjects to some of the possible treatment combinations, such as no treatment (double placebo). We will study factorial designs in a later lesson.

A parallel design refers to a study in which patients are randomized to a treatment and remain on that treatment throughout the course of the trial. This is a typical design. In contrast, with a crossover design patients are randomized to a sequence of treatments and they cross over from one treatment to another during the course of the trial. Each treatment occurs in a time period with a washout period in between. Crossover designs are of interest since with each patient serving as their own control, there is potential for reduced variability. However, there are potential problems with this type of design. There should be investigation into possible carry-over effects, i.e. the residual effects of the previous treatment affecting subject’s response in the later treatment period. In addition, only conditions that are likely to be similar in both treatment periods are amenable to crossover designs. Acute health problems that do not recur are not well-suited for a crossover study. We will study crossover design in a later lesson.

Randomization is used to remove systematic error (bias) and to justify Type I error probabilities in experiments. Randomization is recognized as an essential feature of clinical trials for removing selection bias.

Selection bias occurs when a physician decides treatment assignment and systematically selects a certain type of patient for a particular treatment. Suppose the trial consists of an experimental therapy and a placebo. If the physician assigns healthier patients to the experimental therapy and the less healthy patients to the placebo, the study could result in an invalid conclusion that the experimental therapy is very effective.

Blocking and stratification are used to control unwanted variation. For example, suppose a clinical trial is structured to compare treatments A and B in patients between the ages of 18 and 65. Suppose that the younger patients tend to be healthier. It would be prudent to account for this in the design by stratifying with respect to age. One way to achieve this is to construct age groups of 18-30, 31-50, and 51-65 and to randomize patients to treatment within each age group.

Age group    Treatment A    Treatment B
18 - 30           12             13
31 - 50           23             23
51 - 65            6              7

It is not necessary to have the same number of patients within each age stratum. We do, however, want to have a balance in the number on each treatment within each age group. This is accomplished by blocking, in this case, within the age strata. Blocking is a restriction of the randomization process that results in a balance of the numbers of patients on each treatment after a prescribed number of randomizations. For example, blocks of 4 within these age strata would mean that after 4, 8, 12, etc. patients in a particular age group had entered the study, the numbers assigned to each treatment within that stratum would be equal.
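A permuted-block schedule within each age stratum might be sketched as follows (assumptions: two treatments A and B, blocks of 4, and illustrative stratum sizes):

```python
import random

def permuted_blocks(n_patients, block_size=4, treatments=("A", "B")):
    """Permuted-block randomization: within every completed block of
    block_size assignments, the treatments are exactly balanced."""
    per_block = block_size // len(treatments)
    schedule = []
    while len(schedule) < n_patients:
        block = list(treatments) * per_block
        random.shuffle(block)       # random order, fixed composition
        schedule.extend(block)
    return schedule[:n_patients]

# Separate schedule per age stratum (stratum sizes are illustrative).
strata = {name: permuted_blocks(n)
          for name, n in [("18-30", 25), ("31-50", 46), ("51-65", 13)]}
```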

If the numbers are large enough within a stratum, a planned subgroup analysis may be performed. In the example, the smaller numbers of patients in the upper and lower age groups would require care in the analyses of these sub-groups specifically. However, with the primary question as to the effect of treatment regardless of age, the pooled data in which each sub-group is represented in a balanced fashion would be utilized for the main analysis.

Even ineffective treatments can appear beneficial in some patients. This may be due to random fluctuations, or variability in the disease. If, however, the improvement is due to the patient’s expectation of a positive response, this is called a " placebo effect ". This is especially problematic when the outcome is subjective, such as pain or symptom assessment. The placebo effect is widely recognized and must be removed in any clinical trial. For example, rather than constructing a nonrandomized trial in which all patients receive an experimental therapy, it is better to randomize patients to receive either the experimental therapy or a placebo. A true placebo is an inert or inactive treatment that mimics the route of administration of the real treatment, e.g., a sugar pill.

Placebos are not acceptable ethically in many situations, e.g., in surgical trials. (Although there have been instances where 'sham' surgical procedures took place as the 'placebo' control.) When an accepted treatment already exists for a serious illness such as cancer, the control must be an active treatment. In other situations, a true placebo is not physically possible to attain. For example, a few trials investigating dimethyl sulfoxide (DMSO) for providing muscle pain relief were conducted in the 1970’s and 1980’s. DMSO is rubbed onto the area of muscle pain but leaves a garlicky taste in the mouth, so it was difficult to develop a placebo.

Treatment masking or blinding is an effective way to ensure objectivity of the person measuring the outcome variables. Masking is especially important when the measurements are subjective or based on self-assessment. Double-masked trials refer to studies in which both investigators and patients are masked to the treatment. Single-masked trials refer to the situation when only patients are masked. In some studies, statisticians are masked to treatment assignment when performing the initial statistical analyses, i.e., not knowing which group received the treatment and which is the control until analyses have been completed. Even a safety-monitoring committee may be masked to the identity of treatment A or B, until there is an observed trend or difference that should evoke a response from the monitors. In executing a masked trial great care will be taken to keep the treatment allocation schedule securely hidden from all except those with a need to know which medications are active and which are placebo. This could be limited to the producers of the study medications, and possibly the safety monitoring board before study completion. There is always a caveat for breaking the blind for a particular patient in an emergency situation.

As with placebos, masking, although highly desirable, is not always possible. For example, one could not mask a surgeon to the procedure he is to perform. Even so, some have gone to great lengths to achieve masking. For example, a few trials with cardiac pacemakers have consisted of every eligible patient undergoing a surgical procedure to be implanted with the device. The device was "turned on" in patients randomized to the treatment group and "turned off" in patients randomized to the control group. The surgeon was not aware of which devices would be activated.

Investigators often underestimate the importance of masking as a design feature. This is because they believe that biases are small in relation to the magnitude of the treatment effects (when the converse usually is true), or that they can compensate for their prejudice and subjectivity.

Confounding is the effect of other relevant factors on the outcome that may be incorrectly attributed to the difference between study groups.

Here is an example: An investigator plans to assign 10 patients to treatment and 10 patients to control. There will be a one-week follow-up on each patient. The first 10 patients will be assigned treatment on March 01 and the next 10 patients will be assigned control on March 15. The investigator may observe a significant difference between treatment and control, but is it due to different environmental conditions between early March and mid-March? The obvious way to correct this would be to randomize 5 patients to treatment and 5 patients to control on March 01, followed by another 5 patients to treatment and 5 patients to control on March 15.

Validity

A trial is said to possess internal validity if the observed difference in outcome between the study groups is real and not due to bias, chance, or confounding. Randomized, placebo-controlled, double-blinded clinical trials have high levels of internal validity.

External validity in a human trial refers to how well study results can be generalized to a broader population. External validity is irrelevant if internal validity is low. External validity in randomized clinical trials is enhanced by using broad eligibility criteria when recruiting patients.

Large simple and pragmatic trials emphasize external validity. A large simple trial attempts to discover small advantages of a treatment that is expected to be used in a large population. Large numbers of subjects are enrolled in a study with simplified design and management. There is an implicit assumption that the treatment effect is similar for all subjects with the simplified data collection. In a similar vein, a pragmatic trial emphasizes the effect of a treatment in practices outside academic medical centers and involves a broad range of clinical practices.

Studies of equivalency and noninferiority have different objectives than the usual trial which is designed to demonstrate superiority of a new treatment to a control. A study to demonstrate non-inferiority aims to show that a new treatment is not worse than an accepted treatment in terms of the primary response variable by more than a pre-specified margin. A study to demonstrate equivalence has the objective of demonstrating the response to the new treatment is within a prespecified margin in both directions. We will learn more about these studies when we explore sample size calculations.


Statistics LibreTexts

Components of an experimental study design


  • Debashis Paul
  • University of California, Davis


1.1 Study Design: basic concepts

1.2 Factors
1.3 Treatments
1.4 Experimental units
1.5 Sample size and replicates
1.6 Randomization
1.7 Blocking
1.8 Measurements of response variables
Contributors

Usually the goal of a study is to find out the relationships between certain explanatory factors and the response variables. The design of a study thus consists of making decisions on the following:

  • The set of explanatory factors.
  • The set of response variables.
  • The set of treatments.
  • The set of experimental units.
  • The method of randomization and blocking.
  • Sample size and number of replications.
  • The outcome measurements on the experimental units - the response variables.

Factors are explanatory variables to be studied in an investigation.

1. In a study of the effects of colors and prices on sales of cars, the factors being studied are color (qualitative variable) and price (quantitative variable).

2. In an investigation of the effects of education on income, the factor being studied is education level (qualitative but ordinal).

Factor levels

Factor levels are the "values" of that factor in an experiment. For example, in the study involving color of cars, the factor car color could have four levels: red, black, blue and grey. In a design involving vaccination, the treatment could have two levels: vaccine and placebo.

Types of factors

  • Experimental factors: levels of the factor are assigned at random to the experimental units.
  • Observational factors: levels of the factor are characteristic of the experimental units and are not under the control of the investigators.
  • There could be observational factors in an experimental study.

Example: in the "new drug study" (refer to Handout 1), if we are also interested in the effects of age and gender on the recovery rate, then these are observational factors, while the treatment (new drug or old drug) is an experimental factor.

  • In a single factor study, a treatment corresponds to a factor level; thus the number of treatments equals the number of different factor levels of that factor.
  • In a multi-factor study, a treatment corresponds to a combination of factor levels across different factors; thus the number of all possible treatments is the product of the numbers of factor levels of the different factors.
  • In the study of effects of education on income, each education level is a treatment (high school, college, advanced degree, etc).
  • In the study of effects of race and gender on income, each combination of race and gender is a treatment (Asian female; Hispanic male, etc).

Exercise: How many different treatments are there for the above examples?

Choice of treatments

Choice of treatments depends on the choice of: (i) the factors (which factors are important); and (ii) the levels of each factor.

  • For qualitative factors the levels are usually indicated by the nature of the factor.

Example: gender has two levels: female and male

  • For quantitative factors the choice of levels reflects the type of trend expected by the investigator.

Example: linear trend implies two levels; quadratic trend implies three levels. Usually 3 to 4 equally spaced levels are sufficient.

  • The range of the levels is also crucial. Usually prior knowledge is required for an effective choice of factors and treatments (refer to the "quick bread volume" example on page 650).
  • An experimental unit is the smallest unit of experimental material to which a treatment can be assigned.

Example: In a study of two retirement systems involving the 10 UC schools, we could ask if the basic unit should be an individual employee, a department, or a University.

Answer: The basic unit should be an entire University for practical feasibility.

  • Representativeness: the experimental units should be representative of the population about which a conclusion is going to be drawn.

Example: A study conducted surveys among 5,000 US college students and found that about 20% of them had used marijuana at least once. If the goal of the study is drug usage among Americans aged 18 to 22, is this a good design?

  • Choosing a representative set of experimental units which fits the purpose of your study is important.

Loosely speaking, sample size is the number of experimental units in the study.

  • Sample size is usually determined by the trade-off between statistical considerations such as power of tests, precision of estimations, and the availability of resources such as money, time, man power, technology etc.
  • In general, the larger the sample size, the better it is for statistical inference; however, the costlier the study.
  • An important consideration in an experimental design is how to assess power or precision as a function of the sample size (sample size planning/power calculation).

For many designed studies, the sample size is an integer multiple of the total number of treatments. This integer is the number of times each treatment is repeated, and one complete repetition of all treatments (under similar experimental conditions) is called a complete replicate of the experiment.

  • Example: In a study of baking temperature on the volume of quick bread prepared from a package mix, four oven temperatures: low, medium, high and very high were tested by randomly assigning each temperature to 5 package mixes (all of the same brand). Thus the sample size is 20 (= 4 \(\times\) 5), the number of treatments is 4 (4 levels of temperature) and there are 5 complete replicates of the experiment.

Why replicates?

When a treatment is repeated under the same experimental conditions, any difference in the response from prior responses for the same treatment is due to random errors. Thus replication provides us some information about random errors. If the variation in random errors is relatively small compared to the total variation in the response, we would have evidence for treatment effect.

  • Randomization tends to average out between treatments whatever systematic effects may be present, apparent or hidden, so that the comparison between treatments measures only the pure treatment effect.
  • Randomization is necessary not only for the assignment of treatments to experimental units, but also for other stages of the experiment, where systematic errors may be present.

Example: In a study of light effects on plant growth rate, two treatments are considered: brighter environment vs. darker environment. 100 plants are randomly assigned to each treatment (all genetically identical). However, there is only one growth chamber which can grow 20 plants at one time. Therefore the 200 plants need to be grown in 10 different time slots.

In addition to randomizing the treatments, it is important to randomize the time slots also. This is because, the conditions of the growth chamber (such as humidity, temperature) might change over time. Therefore, growing all plants with brighter light treatment in the first 5 time slots and then growing all plants with darker light treatment in the last 5 time slots is not a good design.
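Under the layout described above (10 runs of 20 plants), filling the chamber runs from a single shuffled list randomizes both the treatment assignment and the time slots; a minimal sketch:

```python
import random

# 100 plants per treatment; shuffling mixes the two treatments so that
# neither is confined to the early or late time slots.
plants = ["bright"] * 100 + ["dark"] * 100
random.shuffle(plants)

# 10 growth-chamber runs (time slots) of 20 plants each.
slots = [plants[i * 20:(i + 1) * 20] for i in range(10)]
```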

In a blocked experiment, heterogeneous experimental units (with known sources of heterogeneity) are divided into homogeneous subgroups, called blocks, and separate randomized experiments are conducted within each block.

  • Example: in a study of Vitamin C on cold prevention, 1000 children were recruited. Half of them were randomly chosen and were given Vitamin C in their diet and the other half got placebos. At the end of the study, the number of colds contracted by each child was recorded. (This is an example of a completely randomized design (CRD).)
  • If we know (or have sufficient reason to believe) that gender may also influence the incidence of cold, then a more efficient way to conduct the study is through blocking on gender: 500 girls and 500 boys were recruited. Among the girls, 250 were randomly chosen and given Vitamin C and the other 250 were given placebo. The same was done for the 500 boys. (This is an example of a randomized complete block design (RCBD).)
  • By blocking, one removes the source of variation due to potential confounding factors (here it is gender), and thus improves the efficiency of the inference of treatment effect (here it is Vitamin C)
  • Randomization alone (as in CRD) does not ensure that the same number of girls and boys will receive each treatment. Thus, if there is a difference in cold incidence rate between genders, differences between treatment groups may be observed even if there is no treatment effect.
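The blocking-on-gender design in the Vitamin C example can be sketched like this (a minimal illustration with hypothetical subject IDs):

```python
import random

def stratified_assignment(groups):
    """Within each stratum (here, gender), exactly half of the subjects
    are randomly assigned to Vitamin C and the other half to placebo."""
    assignment = {}
    for stratum, ids in groups.items():
        ids = list(ids)
        random.shuffle(ids)
        half = len(ids) // 2
        assignment[stratum] = {"vitamin_c": ids[:half], "placebo": ids[half:]}
    return assignment

groups = {"girls": range(500), "boys": range(500, 1000)}
plan = stratified_assignment(groups)
# Each gender contributes exactly 250 children to each arm, which CRD
# alone would not guarantee.
```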

The issue of measurement bias arises due to unrecognizable differences in the evaluation process.

Example: The knowledge of the treatment of a patient may influence the judgement of the doctor. Measurement bias can be reduced by concealing the treatment assignment from both the subject and the evaluator (double-blind).

  • Anirudh Kandada (UCD)
  • Open access
  • Published: 18 June 2024

The effects of an educational intervention based on the protection motivation theory on the protective behaviors of emergency ward nurses against occupational hazards: a quasi-experimental study

  • Mohadeseh Nouri 1 ,
  • Saeed Ghasemi 2 ,
  • Sahar Dabaghi 2 &
  • Parvin Sarbakhsh 3  

BMC Nursing, volume 23, article number 409 (2024)


Emergency ward nurses face a variety of occupational hazards due to the nature of their occupational and professional duties, which can negatively affect their health. Therefore, this study aimed to evaluate the effects of an educational intervention based on the protection motivation theory on the protective behaviors of emergency ward nurses against occupational hazards in Tehran, Iran, in 2023.

The present quasi-experimental study was conducted with two intervention and control groups, using a pretest-posttest design. A total of 124 nurses working in the emergency wards of four hospitals (two hospitals for the intervention group and two hospitals for the control group by random assignment) were selected by multistage sampling method. The educational intervention based on the protection motivation theory was implemented for the intervention group for three weeks. The nurses of both groups completed a demographic questionnaire and the scale of emergency ward nurses’ protective behaviors against occupational hazards before, immediately, and one month after the intervention. Data analysis was performed using descriptive and inferential methods.

The two groups were similar in terms of demographic characteristics at baseline ( p  > 0.05). Protective behaviors of emergency nurses against occupational hazards and their sub-scales (physical, chemical, biological, ergonomic, and psychosocial hazards) were higher in the intervention group than in the control group immediately and one month after the educational intervention. In addition, measurements over time showed a positive effect of time and the educational intervention on the protective behaviors of emergency nurses against occupational hazards and their sub-scales in the intervention group.

These findings showed that the educational intervention based on the protection motivation theory can be effective and helpful in improving the protective behaviors of emergency ward nurses against occupational hazards and their sub-scales. Future studies can focus on a more specific design of this kind of intervention based on the type of occupational hazards and needs of nurses in different wards.

Peer Review reports

The main occupational hazards for HealthCare Workers (HCWs), including emergency ward nurses, are physical, chemical, biological, ergonomic, and psychosocial hazards [ 1 , 2 , 3 ]. Emergency ward nurses face various occupational hazards while performing their duties, and the safety of nurses and patients depends on the nurses’ knowledge of these hazards and appropriate protective behavior [ 4 ].

Physical hazards include exposure to extreme temperatures, tripping, slipping, cuts, falling, various radiations, unusual noise, electric shock, fire, and explosions [ 1 , 2 , 3 ]. The results of one study in Egypt showed that most nurses (62.4%) had poor knowledge about physical occupational hazards [ 5 ].

Chemical hazards include exposure to cleaning and disinfecting agents, sterilant materials, mercury, toxic drugs, pesticides, latex, and laboratory chemicals and reagents; these may lead to poisoning, allergic reactions, dermatitis, cancer, and maternal health effects, and exposure may occur during compounding, unpacking, cleaning the environment, etc [ 1 , 2 , 3 ]. A systematic review study showed that the incidence of occupational contact dermatitis was high for some groups of HCWs [ 6 ].

Biological hazards include exposure to blood-borne and air-borne pathogens; such as Hepatitis B virus (HBV), Hepatitis C virus (HCV) and Human immunodeficiency virus (HIV), tuberculosis, etc [ 1 , 2 , 3 , 4 ]. The results of another systematic review also showed a high prevalence of needle stick injuries among HCWs, and health services related to this regard should be improved [ 7 ].

Ergonomic hazards include the inappropriate design of the work environment, inappropriate position while working, and repetitive procedures, which may lead to musculoskeletal disorders [ 1 , 2 , 3 ]. In a study in Malaysia, almost all nurses (97.3%) had work-related musculoskeletal disorders during the past year, so this problem should be considered seriously [ 8 ]. In another study in Saudi Arabia, 85% of nurses participating in the study reported at least one musculoskeletal disorder, which was associated with factors such as hours of working and the weight of nurses [ 9 ].

Psychosocial hazards include stressful conditions, work environment violence, job strain, burnout, exhausting work shifts, long working hours, loss of reputation, being threatened and bullied by colleagues, interpersonal communication at the work environment, satisfaction with the job and imbalanced roles and responsibilities [ 1 , 2 , 3 ]. In a study, some problems and stressors faced by nurses working in the emergency ward were burnout, workplace violence, moral distress, chaotic work environment, etc [ 10 ]. The results of the study in the United States of America (USA) showed that the psychosocial job stress of emergency ward nurses was prevalent [ 11 ]. Another study in Kenya on emergency nurses also revealed a high prevalence of violence in the workplace; 81.7% and 73.2% for lifetime and one year respectively, and this is a significant problem [ 12 ].

Social and behavioral theories can be useful for designing educational interventions to improve the protective behaviors of HCWs against occupational hazards [ 13 ]. Protection motivation theory (PMT) is one such theory; it was introduced by Rogers in 1975 and has since been widely adopted as a framework for interventions in health-related behavior [ 14 ]. The results of one study indicated that education based on the constructs of the PMT increased the protective behaviors of medical laboratory staff [ 15 ]. The results of another study indicated that an educational intervention based on the PMT increased the preventive behaviors of a group of hospital staff against respiratory infections [ 16 ]. Other studies have examined people in jobs and professions outside the health systems; for example, educational interventions based on the PMT were effective in promoting the protective behaviors of farmers and ranchers against brucellosis [ 17 ] and of employees of governmental offices against COVID-19 [ 18 ]. These types of interventions were sometimes not effective in changing the protective and healthy behaviors of other people in other contexts [ 19 , 20 ]. In light of this literature, occupational hazards and emergency nurses’ protective behaviors against them are important issues in health systems, PMT offers a framework for designing and implementing educational interventions that may promote these behaviors, and the research team found insufficient scientific evidence in this field; therefore, this study aimed to evaluate the effects of an educational intervention based on the PMT on the protective behaviors of emergency ward nurses against occupational hazards.

Research design and setting

This quasi-experimental study was conducted with two groups, intervention and control, using a pretest-posttest design among emergency ward nurses of four educational hospitals (two hospitals per group, by random allocation) in Tehran, Iran, in 2023.

Sample size and sampling methods

The sampling method of this study was multistage. To prevent the transfer of information between the intervention and control groups, randomization was performed at the hospital level. Of 12 eligible educational hospitals, four were randomly selected by lottery (chosen for logistical capacity and study facilities), and two of these were randomly assigned to each of the intervention and control groups. After estimating the number of nurses required from each hospital, nurses who met the inclusion criteria were selected using convenience sampling. Emergency nurses who met any exclusion criterion were excluded from the study and replaced by other nurses from the same hospital. This process continued until data collection was completed. Ultimately, 31 nurses from each hospital, 62 nurses in each group, and a total of 124 nurses were enrolled (Fig.  1 ).

The total sample size for this cluster-randomized study was based on 90% power, 95% confidence, an estimated standard deviation, and an effect size of at least a 20% improvement in self-efficacy due to the educational intervention, according to a similar study [ 21 ]. Considering two hospitals per group and an ICC of 0.2, the total sample size was calculated to be n  = 62 emergency nurses for each group (31 nurses recruited from each hospital).
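
A cluster adjustment like the one described above is commonly done by inflating a simple two-group sample size with a design effect of 1 + (m - 1) × ICC. The sketch below uses this standard approach; the `sd` and `delta` inputs are hypothetical placeholders, since the study's exact values are not reported here, so the resulting numbers are illustrative only.

```python
import math
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.90):
    """Simple two-group sample size for a difference in means (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)          # ~1.28 for 90% power
    return math.ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2)

def design_effect(cluster_size, icc):
    """Variance inflation for cluster randomization: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# sd and delta below are hypothetical placeholders, not the study's values.
n_simple = n_per_group(sd=1.0, delta=0.5)       # 85 per group with these inputs
deff = design_effect(cluster_size=31, icc=0.2)  # 7.0 with m = 31, ICC = 0.2
n_cluster = math.ceil(n_simple * deff)          # cluster-adjusted n per group
```

With 31 nurses per hospital cluster and ICC = 0.2, the design effect is substantial, which is why cluster-randomized designs generally require more participants than individually randomized ones.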

Inclusion criteria were as follows: providing verbal and written informed consent, willingness to participate in the study, and having adequate communication ability to take part. Exclusion criteria were failure to complete the questionnaires, missing more than one of the education sessions in the intervention group, transfer to other wards during the study, and participation in similar training courses.

Figure 1

CONSORT diagram

Intervention group procedure

The educational content of the intervention used in this study covered almost all topics related to occupational hazards for emergency ward nurses, prepared and extracted from the relevant literature [ 1 , 2 , 3 , 4 , 14 , 22 ] and the experiences of the research team. The initial educational content was evaluated by three experts outside the research team. These evaluators held PhD degrees in nursing and were faculty members of the Department of Community Health Nursing of Shahid Beheshti University of Medical Sciences; their comments were reviewed and applied by the research team as needed. Finally, the educational content was confirmed by the three experts and the research team.

The educational intervention in this study was prepared based on constructs of the PMT (Protective behaviors, intention, perceived severity, perceived vulnerability, fear, response costs, rewards of maladaptive response, self-efficacy, and response efficacy) (Table  1 ).

The educational intervention in this study was implemented in three sessions (one session per week). First, the educational content was presented face to face (lecture, Q&A, PowerPoint presentations, PDF files); afterward, the PowerPoint slides and educational pamphlets were delivered to the nurses via their cellphones, in whatever format was most convenient for them.

Control group procedure

The control group did not receive any particular intervention during the study; the educational content was provided to those who were willing to receive it only after completing the study.

Instruments

The instrument used to collect data consisted of two sections: a demographic characteristics form (13 items) and a scale measuring emergency ward nurses’ protective behaviors against occupational hazards (39 items).

Demographic characteristics included age, sex, marital status, having children, education level (in nursing), work experience, types of work shifts, working in additional centers, working overtime, history of exposure to occupational hazards and diseases, suffering from underlying diseases, history of allergy to latex, and history of vaccination against potential occupational diseases.

The initial scale for measuring emergency ward nurses’ protective behaviors against occupational hazards was developed for this study by the authors based on the relevant literature [ 1 , 2 , 3 , 4 , 14 , 22 ] and the researchers’ experiences, and included 47 items. The initial scale’s face validity was assessed using qualitative and quantitative methods with ten nurses whose working conditions were similar to those of the study participants. Content validity was assessed using qualitative and quantitative methods, including the content validity index (CVI) and content validity ratio (CVR), with the participation of 15 occupational health experts and nursing professors and instructors. For the reliability of the scale, Cronbach’s alpha and the intraclass correlation coefficient (ICC) (over a 2-week interval) were estimated with the participation of 20 nurses. Following this process, 5 of the initial 47 items were removed because their CVR was less than 0.49 [ 23 ], and 3 items were removed because they covered the same concept, according to the opinions of the experts and with the agreement of the research team. The item reduction was carried out such that the original content of the scale remained intact. The final scale included five sub-scales and 39 items, covering nurses’ protective behaviors against physical (items 1–6; score range 6–30), chemical (items 7–11; score range 5–25), biological (items 12–21; score range 10–50), ergonomic (items 22–26; score range 5–25), and psychosocial hazards (items 27–39; score range 13–65), plus a total score (items 1–39; score range 39–195). To allow better comparison of the sub-scales and the total score, the mean score (1–5) of each was calculated. The items were scored on a 5-point Likert scale (from never (1) to always (5)), with no reverse-scored items. Higher scores indicated higher compliance with protective behaviors against occupational hazards ( Supplementary File ).
All items obtained an impact score higher than 1.5, and the overall CVI was 0.96. After obtaining the necessary permits and providing information about the objectives of the study, written informed consent was received from the participants. The nurses in both groups completed the demographic characteristics form and the protective behaviors scale before, immediately after, and one month after the intervention. Among the 124 participants of this study, Cronbach’s alpha and the ICC (2-week interval) of the scale were 0.930 and 0.832, respectively.
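
The CVR cut-off and the reliability coefficient above follow standard formulas (Lawshe's CVR [ 23 ] and Cronbach's alpha). A minimal sketch of both computations is below; the per-item expert counts and the small response matrix are hypothetical, purely to illustrate the arithmetic, and are not study data.

```python
from statistics import variance

def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR: (n_e - N/2) / (N/2), ranging from -1 to +1.

    Items below the critical value for the panel size
    (0.49 for a 15-member panel) are dropped.
    """
    half = n_experts / 2
    return (n_essential - half) / half

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance)."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical: 12 of 15 experts rate an item "essential" -> item retained.
cvr = content_validity_ratio(12, 15)  # 0.6 >= 0.49, so the item is kept

# Synthetic 5-point Likert responses (3 items x 4 nurses), not study data.
alpha = cronbach_alpha([
    [1, 2, 3, 4],  # item 1
    [2, 3, 4, 5],  # item 2
    [1, 2, 3, 4],  # item 3
])                 # perfectly consistent items give alpha ~ 1.0
```

With 15 experts, an item needs at least 12 "essential" ratings to reach CVR ≥ 0.49, which matches the removal of items below that threshold described above.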

Data analysis

Data were analyzed using descriptive (mean, standard deviation (SD), mean difference (MD), frequency, and frequency percentage) and inferential methods (Chi-square (χ2) or Fisher’s exact test, independent t-test, analysis of variance (ANOVA), and repeated measures ANOVA) in SPSS software (version 26; IBM Corp., Armonk, NY, USA). The assumptions of repeated measures ANOVA (normality, homogeneity of variance, homogeneity of covariances (sphericity), and absence of significant outliers) were tested for the nurses’ protective behavior variables. These assumptions held for the underlying variables, except for the sphericity assumption for some variables, which was addressed with the Greenhouse-Geisser correction. In the final analysis, to assess the intervention effect, we used a random effects model to account for the clustered design by including a random effect for the clusters. The significance level was set at p  < 0.05.
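
The intragroup comparison over the three measurement points can be illustrated with the textbook one-way repeated-measures F statistic, sketched below in plain Python (the Greenhouse-Geisser correction and the SPSS random-effects model are omitted). The scores are synthetic, not study data.

```python
def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.

    data[s][t] = score of subject s at time point t.
    Returns (F, df_time, df_error); sphericity correction not applied.
    """
    n = len(data)     # number of subjects
    k = len(data[0])  # number of time points
    grand = sum(sum(row) for row in data) / (n * k)
    time_means = [sum(row[t] for row in data) / n for t in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_time = n * sum((m - grand) ** 2 for m in time_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_time - ss_subj  # within-subject residual

    df_time, df_error = k - 1, (n - 1) * (k - 1)
    return (ss_time / df_time) / (ss_error / df_error), df_time, df_error

# Synthetic scores for 3 subjects measured at 2 time points (not study data).
f_stat, df1, df2 = rm_anova_f([[1, 2], [2, 4], [3, 6]])
```

Partitioning out the subject effect (ss_subj) is what distinguishes the repeated-measures test from an ordinary one-way ANOVA: between-subject variability is removed from the error term before the time effect is tested.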

The mean age of the participants was 33.79 ± 7.43 years, and the mean work experience was 8.55 ± 6.42 years. Most of the participants were female, married, and held Bachelor of Science (BSc) degrees in nursing. There were no statistically significant differences between the two groups in demographic characteristics, and the groups were homogeneous in terms of demographic variables, except for the types of work shifts (Table  2 ).

The results of the independent samples t-test showed that the mean scores of protective behaviors against ergonomic and psychosocial hazards did not differ significantly between the control and intervention groups before the intervention ( p  > 0.05); however, the mean scores of protective behaviors against physical, chemical, biological, and total hazards were significantly higher in the control group than in the intervention group at baseline ( p  < 0.05). Immediately and one month after the educational intervention, the mean scores of protective behaviors in all dimensions were significantly higher in the intervention group than in the control group ( p  < 0.05), except for the physical hazard sub-scale measured immediately after the intervention (t = 1.342, p  = 0.182) (Table  3 ).

Intragroup comparison using one-way repeated measures ANOVA showed a significant increase in the total mean score of protective behaviors and its sub-scales in the intervention group over time, reflecting the impact of the educational intervention on the protective behaviors of nurses in that group, while a declining trend was observed in the control group over time (Table  3 ). Bonferroni post-hoc comparisons indicated that across the pre-intervention, immediately post-intervention, and one-month follow-up measurements, the total mean scores of protective behaviors against occupational hazards and all sub-scales differed significantly within both the intervention and control groups ( p  < 0.05), except for physical (MD = 0.075, p  = 0.089) and ergonomic hazards (MD = 0.023, p  = 1) between pre-intervention and immediately after the intervention in the control group, as well as psychosocial hazards (MD = 0.046, p  = 0.056) in the control group and ergonomic hazards (MD = -0.071, p  = 0.461) in the intervention group between immediately and one month after the intervention.

The present study aimed to evaluate the effects of an educational intervention based on the protection motivation theory on the protective behaviors of emergency ward nurses against occupational hazards. The findings showed that nurses in the intervention and control groups were similar in terms of demographic characteristics. Most nurses were female, married, without children, and held BSc degrees. Most of the participants worked in only one hospital and had a history of vaccination against HBV and COVID-19. Most of the nurses had no history of allergy to latex, no underlying disease, and no history of exposure to occupational hazards and diseases. The results of this study indicated that the PMT-based educational intervention improved the emergency ward nurses’ protective behaviors against all studied types of occupational hazards (physical, chemical, biological, ergonomic, and psychosocial) in the intervention group. A study in Iran showed that training on a standard guideline for the safe handling of antineoplastic drugs effectively improved the knowledge and behaviors of chemotherapy ward nurses [ 24 ]. Another study in Iran showed that efficacy, effectiveness, and rewards were the PMT constructs most predictive of adherence to safe injection guidelines among nurses, suggesting that educational interventions for nurses should focus more on these constructs [ 25 ]. In the present study, we included the most important constructs of the PMT when preparing and delivering the educational content to emergency ward nurses. Another study, in India, revealed that educational workshops improved HCWs’ knowledge about occupational hazards [ 26 ]. A literature review also highlighted the positive impact of e-training programs on employees’ knowledge and behavior regarding occupational health and safety and on reducing workplace injuries [ 27 ].
These findings are consistent with the present study regarding the impact of educational interventions on individuals’ protective behaviors against occupational hazards; it should also be noted that some parts of the educational intervention in the present study were delivered virtually on mobile platforms. A study on the efficiency of web-based learning in preventing exposure to occupational hazards in a clinical nursing setting showed that this type of education could significantly boost knowledge, but no remarkable changes were seen in attitudes and behaviors [ 28 ]. Regarding the behavioral dimensions, our results differed from that survey, which could be due to differences in the training methods and educational content used. The present study used a multi-method approach to education, combining face-to-face and virtual methods, whereas education in the above-mentioned study was purely web-based. It should also be noted that changes in behavior do not depend solely on knowledge; other factors such as workload, time availability, access to facilities, and self-efficacy may also be influential. For instance, one study identified type of profession, self-efficacy, and behavioral intention as factors related to HCWs’ protective behaviors against COVID-19 [ 22 ]. In the present study, education was based on the constructs of PMT, and various factors for changing protective behaviors were discussed with the participants. In any case, education is considered an effective factor for changing people’s behaviors in other topics and contexts [ 15 , 16 , 17 , 18 ].

A study in India investigated the impact of an educational program on overall occupational safety and on ergonomic, biological, radiation, and chemical hazards among nurses and other HCWs, and verified the program’s influence in boosting knowledge of these hazards. The program’s effect on knowledge was highest for biological hazards and lowest for radiation and chemical hazards. Participants in that study suggested that psychosocial hazards should be added to educational programs [ 29 ]. The present study did consider psychosocial hazards. We also observed that in the intervention group, one month after the intervention, the highest and lowest mean scores of protective behaviors belonged to biological and ergonomic hazards, respectively. The results of the present study were consistent with the findings of the above-mentioned study for biological hazards and inconsistent for ergonomic hazards. This discrepancy may be related to factors such as the educational content, nurses’ self-efficacy, and access to equipment for performing protective behaviors against occupational hazards.

There are several limitations to consider in this study. The construct validity of the scale of emergency nurses’ protective behaviors against occupational hazards was not investigated or verified. The scale was self-reported, so the data in some dimensions might not reflect the actual levels of nurses’ protective behaviors; future studies can use more objective measures to evaluate these behaviors. Additionally, the participants in this study were selected from educational and public hospitals, which might limit the generalizability of the results to nurses working in private hospitals. Some organizational factors, such as rules and laws, were not evaluated in this study, so future studies can also attend to these factors. Finally, data collection was conducted immediately and one month after the intervention, so longer follow-ups (3–6 months) are recommended for future studies to determine the durability of protective behaviors and the long-term effects of the educational intervention.

The study results showed that implementing an educational intervention based on PMT constructs can be effective and valuable in increasing the protective behaviors of emergency nurses against occupational hazards. Education alone, however, is insufficient to change nurses’ health behaviors against occupational hazards; more attention should be paid to other factors affecting health and protective behaviors, such as access to personal protective equipment (PPE), work conditions, facilities, organizational regulations, state rules and laws, workload, and time restrictions. Future research should focus on designing more specific educational interventions based on nurses’ needs and should also include nurses from various hospital wards.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

ANOVA: Analysis of variance

BSc: Bachelor of Science

CVI: Content validity index

CVR: Content validity ratio

HBV: Hepatitis B virus

HCV: Hepatitis C virus

HCWs: Healthcare workers

HIV: Human immunodeficiency virus

ICC: Intraclass correlation coefficient

MD: Mean difference

MSc: Master of Science

PMT: Protection motivation theory

SD: Standard deviation

SPSS: Statistical Package for the Social Sciences

USA: United States of America

Lipscomb JA. Chapter 32: Hazards for healthcare workers. In: Levy BS, Wegman DH, Baron SL, Sokas RK, editors. Occupational and Environmental Health. 4th ed. Oxford University Press; 2017. pp. 673–680. https://doi.org/10.1093/oso/9780190662677.003.0037 .

Che Huei L, Ya-Wen L, Chiu Ming Y, Li Chen H, Jong Yi W, Ming Hung L. Occupational health and safety hazards faced by healthcare professionals in Taiwan: a systematic review of risk factors and control strategies. SAGE Open Med. 2020;8:2050312120918999.

World Health Organization (WHO). Occupational hazards in the health sector, https://www.who.int/tools/occupational-hazards-in-health-sector (accessed 13 February 2024).

Ramsay J, Denny F, Szirotnyak K, Thomas J, Corneliuson E, Paxton KL. Identifying nursing hazards in the emergency department: a new approach to nursing job hazard analysis. J Saf Res. 2006;37(1):63–74.

El-Sallamy RM, Kabbash IA, El-Fatah SA, El-Feky A. Physical hazard safety awareness among healthcare workers in Tanta university hospitals, Egypt. Environ Sci Pollut Res Int. 2018;25(31):30826–38.

Larese Filon F, Pesce M, Paulo MS, Loney T, Modenese A, John SM, et al. Incidence of occupational contact dermatitis in healthcare workers: a systematic review. J Eur Acad Dermatol Venereol. 2021;35(6):1285–9.

Mengistu DA, Tolera ST, Demmu YM. Worldwide Prevalence of Occupational exposure to needle Stick Injury among Healthcare Workers: a systematic review and Meta-analysis. Can J Infect Dis Med Microbiol. 2021;2021:9019534.

Krishnan KS, Raju G, Shawkataly O. Prevalence of work-related Musculoskeletal disorders: psychological and physical risk factors. Int J Environ Res Public Health. 2021;18(17):9361.

Attar SM. Frequency and risk factors of musculoskeletal pain in nurses at a tertiary centre in Jeddah, Saudi Arabia: a cross sectional study. BMC Res Notes. 2014;7:61.

Rozo JA, Olson DM, Thu HS, Stutzman SE. Situational factors Associated with Burnout among Emergency Department nurses. Workplace Health Saf. 2017;65(6):262–5.

Bardhan R, Heaton K, Davis M, Chen P, Dickinson DA, Lungu CT. A Cross Sectional Study evaluating psychosocial job stress and health risk in Emergency Department nurses. Int J Environ Res Public Health. 2019;16(18):3243.

Kibunja BK, Musembi HM, Kimani RW, Gatimu SM. Prevalence and effect of Workplace Violence against Emergency Nurses at a Tertiary Hospital in Kenya: a cross-sectional study. Saf Health Work. 2021;12(2):249–54.

Guerin RJ, Sleet DA. Using behavioral theory to Enhance Occupational Safety and Health: applications to Health Care workers. Am J Lifestyle Med. 2020;15(3):269–78.

Conner M, Norman P. Predicting and changing Health Behavior, Research and practice with Social Cognition models. 4th ed. McGrawHill Global Education Holdings: LLC; 2015.

Hosseini Zijoud SS, Rahaei Z, Hekmatimoghaddam S, Zarei S, Sadeghian HA. Effect of education based on the protection motivation theory on the promotion of protective behaviors in medical laboratories’ staff in Yazd, Iran. Int Arch Health Sci. 2023;10(4):171–6.

Rakhshani T, Nikeghbal S, Kashfi SM, Kamyab A, Harsini PA, Jeihooni AK. Effect of educational intervention based on protection motivation theory on preventive behaviors of respiratory infections among hospital staff. Front Public Health. 2024;11:1326760.

Soleimanpour Hossein Abadi S, Mehri A, Rastaghi S, Hashemian M, Joveini H, Rakhshani MH, et al. Effectiveness of Educational intervention based on Protection Motivation Theory to Promotion of Preventive behaviors from Brucellosis among ranchers of Farmer. J Educ Community Health. 2021;8(1):11–9.

Matlabi M, Esmaeili R, Mohammadzadeh F, Hassanpour-Nejad H. The Effect of Educational intervention based on the Protection Motivation Theory in Promotion of Preventive behaviors against COVID-19. J Health Syst Res. 2022;18(1):30–8.

Boeka AG, Prentice-Dunn S, Lokken KL. Psychosocial predictors of intentions to comply with bariatric surgery guidelines. Psychol Health Med. 2010;15(2):188–97.

Bassett SF, Prapavessis H. A test of an adherence-enhancing adjunct to physiotherapy steeped in the protection motivation theory. Physiother Theory Pract. 2011;27(5):360–72.

Sadeghi R, Hashemi M, Khanjani N. The impact of educational intervention based on the health belief model on observing standard precautions among emergency center nurses in Sirjan, Iran. Health Educ Res. 2018;33(4):327–35.

Toghanian R, Ghasemi S, Hosseini M, Nasiri M. Protection behaviors and related factors against COVID-19 in the Healthcare Workers of the hospitals in Iran: a cross-sectional study. Iran J Nurs Midwifery Res. 2022;27(6):587–92.

Lawshe CH. Quantitative approach to content validity. Pers Psychol. 1975;28(4):563–75.

Nouri A, Seyed Javadi M, Iranijam E, Aghamohammadi M. Improving nurses’ performance in the safe handling of antineoplastic agents: a quasi-experimental study. BMC Nurs. 2021;20(1):247.

Karimi M, Khoramaki Z, Faradonbeh MR, Ghaedi M, Ashoori F, Asadollahi A. Predictors of hospital nursing staff’s adherence to safe injection guidelines: application of the protection motivation theory in Fars Province, Iran. BMC Nurs. 2024;23(1):25.

Khapre M, Agarwal S, Dhingra V, Singh V, Kathrotia R, Goyal B, et al. Comprehensive structured training on occupational health hazards and vaccination: a novel initiative toward employee safety. J Family Med Prim Care. 2022;11(7):3746–53.

Barati Jozan MM, Ghorbani BD, Khalid MS, Lotfata A, Tabesh H. Impact assessment of e-trainings in occupational safety and health: a literature review. BMC Public Health. 2023;23(1):1187.

Tung CY, Chang CC, Ming JL, Chao KP. Occupational hazards education for nursing staff through web-based learning. Int J Environ Res Public Health. 2014;11(12):13035–46.

Naithani M, Khapre M, Kathrotia R, Gupta PK, Dhingra VK, Rao S. Evaluation of Sensitization Program on Occupational Health Hazards for Nursing and Allied Health Care Workers in a Tertiary Health Care setting. Front Public Health. 2021;9:669179.

Acknowledgements

The authors would like to thank the Shahid Beheshti University of Medical Sciences (SBMU), the School of Nursing and Midwifery of the university, and the authorities of the four educational hospitals in Tehran, Iran (Imam Hossein, Shahid Modarres, Shohada-e-Tajrish, and Loghman Hakim) for their support, cooperation, and assistance throughout the study. We would also like to thank all the participants who took part in the study.

Shahid Beheshti University of Medical Sciences (SBMU), Tehran, Iran.

Author information

Authors and affiliations.

Student Research Committee, Department of Community Health Nursing, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Mohadeseh Nouri

Department of Community Health Nursing, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Saeed Ghasemi & Sahar Dabaghi

Department of Statistics and Epidemiology, Faculty of Health, Tabriz University of Medical Sciences, Tabriz, Iran

Parvin Sarbakhsh

Contributions

This manuscript is the result of the collaboration of all authors. SG, MN, SD, and PS designed the study and wrote the study proposal. MN conducted data collection. PS, SG, MN, and SD analyzed the data. SG, MN, SD, and PS wrote the final draft of the manuscript and prepared the tables. SG submitted the manuscript to the journal. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Saeed Ghasemi .

Ethics declarations

Ethics approval and consent to participate.

The necessary permits and approvals for this study were obtained from the Research Ethics Committees of School of Pharmacy and Nursing & Midwifery-Shahid Beheshti University of Medical Sciences (Approval ID: IR.SBMU.PHARMACY.REC.1401.195, Approval Date: 2022-12-06). The protocols were in accordance with the Declaration of Helsinki. Participants were provided with information about the research and its objectives, the confidentiality of their information, their right to withdraw from the study, and their access to the study findings. Written informed consent was obtained from all participants, and the necessary permissions were obtained from authorities before sampling.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

12912_2024_2053_MOESM1_ESM.docx

Supplementary Material 1: The online version contains a supplementary file (The Scale of emergency ward nurses’ protective behaviors against occupational hazards)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Nouri, M., Ghasemi, S., Dabaghi, S. et al. The effects of an educational intervention based on the protection motivation theory on the protective behaviors of emergency ward nurses against occupational hazards: a quasi-experimental study. BMC Nurs 23 , 409 (2024). https://doi.org/10.1186/s12912-024-02053-1

Received : 19 February 2024

Accepted : 30 May 2024

Published : 18 June 2024


Keywords

  • Emergency wards
  • Health behaviors
  • Occupational health
  • Quasi-experimental study

BMC Nursing

ISSN: 1472-6955

an experimental design where the experimental units are randomly assigned

COMMENTS

  1. Completely Randomized Design

    Download reference work entry PDF. A completely randomized design is a type of experimental design where the experimental units are randomly assigned to the different treatments. It is used when the experimental units are believed to be "uniform;" that is, when there is no uncontrolled factor in the experiment.

  2. Guide to Experimental Design

    An experimental design where treatments aren't randomly assigned is called a quasi-experimental design. Between-subjects vs. within-subjects In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

  3. Experimental Design

    The completely randomized design is probably the simplest experimental design, in terms of data analysis and convenience. With this design, participants are randomly assigned to treatments. A completely randomized design for the Acme Experiment is shown in the table below. In this design, the experimenter randomly assigned participants to one ...

  4. Random Assignment in Experiments

    Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups. While random sampling is used in many types of studies, random assignment is only used ...

  5. Experimental Design: An Introduction

    The most common block design is the randomized complete block design, where each block has as many experimental units as there are treatments, and each treatment is randomly assigned to one unit in each block. Other types of block designs are incomplete block designs, where not every treatment can occur in each block. This poses a number of ...

  6. Randomized Block Design

    Download reference work entry PDF. A randomized block design is an experimental design where the experimental units are in groups called blocks. The treatments are randomly allocated to the experimental units inside each block. When all treatments appear at least once in each block, we have a completely randomized block design.

  7. Experimental Design for ANOVA

    Here are two ways that you might assign experimental units to groups: Independent groups design. Each experimental unit is randomly assigned to one, and only one, treatment group. This is also known as a between-subjects design. Repeated measures design. Experimental units are assigned to more than one treatment group.

  8. Experimental Design: Definition and Types

    An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments allows your data collection efforts to have a reasonable chance of detecting effects and testing hypotheses that answer your research questions. An experiment is a data collection ...

  9. Experimental Design

    In a block design, experimental subjects are first divided into homogeneous blocks before they are randomly assigned to a treatment group. If, for instance, an experimenter had reason to believe that age might be a significant factor in the effect of a given medication, he might choose to first divide the experimental subjects into age groups ...

  10. Rigor and Reproducibility in Experimental Design: Experimental designs

    A random number table or generator can be used to assign random numbers to experimental units (the unit or subject tested upon) so that any experimental unit has equal chances of being assigned to treatment or control. ... A randomized block design can be used when experimental units are heterogeneous in age or weight. Solution to Exercise 2. 1 ...

  11. ECO 351 Exam 2 Flashcards

    An experimental design where the experimental units are randomly assigned to the treatments is known as _____. a. systematic sampling b. completely randomized design c. random factor design d. factor block design. Answer: b. completely randomized design.

  12. Experimental Design: Types, Examples & Methods

    Control: After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables). 2. Repeated Measures Design. A repeated measures design is an experimental design where the same participants take part in each condition of the independent variable.
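
In a repeated measures design every participant experiences every condition, so the randomization step is usually shuffling the order of conditions per participant rather than assigning participants to groups. A minimal sketch (hypothetical function and participant names, not from the quoted source):

```python
import random

def counterbalanced_orders(participants, conditions, seed=None):
    """Repeated measures: every participant completes every condition,
    with the order shuffled independently per participant to control
    for order effects (practice, fatigue)."""
    rng = random.Random(seed)
    orders = {}
    for participant in participants:
        sequence = conditions[:]
        rng.shuffle(sequence)
        orders[participant] = sequence
    return orders

# Example: 3 participants, each sees all 3 conditions in a random order
orders = counterbalanced_orders(["p1", "p2", "p3"], ["X", "Y", "Z"], seed=5)
for participant, sequence in orders.items():
    print(participant, sequence)
```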

  13. Experimental Design

    Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings. Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps ...

  14. Completely Randomized Design: The One-Factor Approach

    Completely Randomized Design (CRD) is a research methodology in which experimental units are randomly assigned to treatments without any systematic bias. CRD gained prominence in the early 20th century, largely attributed to the pioneering work of statistician Ronald A. Fisher. His method addressed the inherent variability in experimental units by randomly assigning treatments, thus countering ...
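
As a rough illustration of a CRD with more than two treatments (a hypothetical Python sketch, assuming group sizes should come out as equal as the unit count allows):

```python
import random

def completely_randomized_design(units, treatments, seed=None):
    """Assign every unit to exactly one treatment purely at random (CRD),
    with no blocking and group sizes as balanced as the counts allow."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    assignment = {t: [] for t in treatments}
    for i, unit in enumerate(shuffled):
        # cycle through treatments so sizes stay balanced
        assignment[treatments[i % len(treatments)]].append(unit)
    return assignment

# Example: 12 units split at random across 3 dose levels
plan = completely_randomized_design(list(range(12)),
                                    ["low", "medium", "high"], seed=1)
```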

  15. 14.3: Design of Experiments via Random Design

    The simplified steps for random design include the following: Choose a number of experiments to run (note: this number may be tricky to pick, because it depends on the amount of signal recovery you want). Assign to each variable a state based on a uniform sample. For instance, if there are 5 states, each state has a probability of 20%.
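
The two steps above can be sketched as follows (a hypothetical Python helper; the variable names are placeholders, not from the source):

```python
import random

def random_design(variables, states, n_experiments, seed=None):
    """For each experiment, draw a state for every variable from a
    uniform distribution over the possible states (each state equally
    likely, e.g. 5 states -> 20% each)."""
    rng = random.Random(seed)
    return [{v: rng.choice(states) for v in variables}
            for _ in range(n_experiments)]

# Example: 8 experiments over two variables, each with 5 uniform states
runs = random_design(["temperature", "pressure"], [1, 2, 3, 4, 5],
                     n_experiments=8, seed=3)
for run in runs:
    print(run)
```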

  16. 3.3

    In experimental design terminology, factors are variables that are controlled and varied during the course of the experiment. For example, treatment is a factor in a clinical trial with experimental units randomized to treatment. Another example is pressure and temperature as factors in a chemical experiment. Most clinical trials are structured ...

  17. Components of an experimental study design

    In a design involving vaccination, the treatment could have two levels: vaccine and placebo. Types of factors. Experimental factors: levels of the factor are assigned at random to the experimental units. Observational factors: levels of the factor are characteristic of the experimental units and are not under the control of the investigators.

  20. Module 12 Flashcards

    An experimental design where the experimental units are randomly assigned to the treatments is known as a completely randomized design. The number of times each experimental condition is observed in a factorial design is known as replication.

  22. An experimental design where the experimental units are randomly

    An experimental design where the experimental units are randomly assigned to the treatments is known as a completely randomized design (CRD). In a CRD, each experimental unit has an equal chance of receiving any particular treatment, which helps to ensure the independence of the treatment effects and the validity of the conclusions drawn from the ...

  23. The effects of an educational intervention based on the protection

    The present quasi-experimental study was conducted with two intervention and control groups, using a pretest-posttest design. A total of 124 nurses working in the emergency wards of four hospitals (two hospitals for the intervention group and two hospitals for the control group by random assignment) were selected by multistage sampling method.
